





Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a method and apparatus for image recognition.
Background
Image recognition is an important technology of the information age; it was developed so that computers could take over from humans the processing of large amounts of visual information. With the development of computer technology, human understanding of image recognition has grown ever deeper.
Image recognition can be applied to many kinds of objects, such as human faces, animals, and plants. It is currently estimated that more than 500,000 plant species live on Earth, in a great variety of types and forms; the Flora of China alone records more than 37,000 plant taxa. Each plant comprises six parts: roots, stems, leaves, flowers, fruits, and seeds, and each part takes on different forms under the influence of environment, region, and seasonal climate, so plant identification has become a problem solvable only by experts with extensive specialized knowledge. Environmental monitoring, detection of invasive plant species, and similar tasks therefore require large investments of manpower and funding, and manual identification cannot meet their timeliness requirements. Hence the demand for large-scale, high-quality plant species identification. At present, plants are classified and identified mainly at family-genus-species granularity; a better grasp of the distribution of plant populations provides a basic research foundation for work such as screening imported and exported plants for invasive species, environmental monitoring, and tracking geological and ecological change, and can also greatly reduce identification time and manpower.
Image recognition is usually performed in one of the following ways:
1. Manual identification based on experience: each object is identified by a person. This relies heavily on specialized knowledge and manpower, recognition efficiency is low, the requirements on professional skill are high, the result depends entirely on the expertise and experience of the personnel, and scalability and timeliness are poor.
2. Image feature recognition: organisms, and plants in particular, are affected by environment and climate and differ greatly in growth form and distinguishing marks. Hand-crafted image features (such as moment invariants, area ratios, and edges) cannot fully represent the characteristics of a plant and cannot capture the relationships among the features of its parts. Moreover, the discriminative feature regions differ from plant to plant, distinguishing individual plants is complex, and recognition performance drops sharply once the number of species increases substantially.
3. Machine learning recognition: this uses hand-designed features plus a classifier. First, hand-designed features have inherent shortcomings and are sensitive to parameters and image quality; second, the classifier depends strongly on the accuracy of the data labels, parameter tuning is complicated, and multi-class tasks pose a great challenge to the classifier.
Summary
Embodiments of the present disclosure propose a method and apparatus for image recognition.
In a first aspect, an embodiment of the present disclosure provides a method for image recognition, comprising: acquiring an image of an object to be recognized; inputting the image into a pre-trained classification model to obtain a classification prediction result and features of the object; computing the credibility of the classification prediction result based on the classification prediction result; if the credibility of the classification prediction result is less than or equal to a first credibility threshold, inputting the features into a pre-built retrieval model to obtain a retrieval prediction result; computing the credibility of the retrieval prediction result based on the retrieval prediction result; and, if the credibility of the retrieval prediction result is greater than a second credibility threshold, outputting the retrieval prediction result.
In some embodiments, the method further comprises: if the credibility of the classification prediction result is greater than the first credibility threshold, outputting the classification prediction result.
In some embodiments, the method further comprises: if the credibility of the retrieval prediction result is less than or equal to the second credibility threshold, computing the credibility of each candidate prediction result of the image based on the classification prediction result and the retrieval prediction result, and outputting a predetermined number of the candidate prediction results with the highest credibility as the final prediction result.
In some embodiments, acquiring the image of the object to be recognized comprises: inputting an image containing the object to be recognized and background into a pre-trained subject detection model to obtain the image of the object to be recognized.
In some embodiments, inputting the image into the pre-trained classification model comprises: inputting the image of the object to be recognized into a pre-trained binary classification model to determine whether the object belongs to a target category; and, if it does, inputting the image into a pre-trained classification model for recognizing images of the target category.
In some embodiments, the classification model is an Inception-ResNetv2 model; samples are selected by uniform sampling over categories, label smoothing and mixup strategies are added, a cosine learning rate decay schedule is used, and the training loss is the cross-entropy loss function.
In some embodiments, the retrieval model is built by the following steps: extracting features of each image in a predetermined image library, where each image in the library corresponds to a category; reducing the dimensionality of the features of each image; and building an index over the dimensionality-reduced features of each image, where the index comprises a forward index and/or an inverted index.
In some embodiments, the subject detection model uses Faster-RCNN.
In some embodiments, the binary classification model uses the ResNet-34 model.
In a second aspect, an embodiment of the present disclosure provides an apparatus for image recognition, comprising: an acquisition unit configured to acquire an image of an object to be recognized; a classification unit configured to input the image into a pre-trained classification model to obtain a classification prediction result and features of the object; a first computing unit configured to compute the credibility of the classification prediction result based on the classification prediction result; a retrieval unit configured to, if the credibility of the classification prediction result is less than or equal to a first credibility threshold, input the features into a pre-built retrieval model to obtain a retrieval prediction result; a second computing unit configured to compute the credibility of the retrieval prediction result based on the retrieval prediction result; and an output unit configured to, if the credibility of the retrieval prediction result is greater than a second credibility threshold, output the retrieval prediction result.
In some embodiments, the output unit is further configured to: output the classification prediction result if its credibility is greater than the first credibility threshold.
In some embodiments, the apparatus further includes a fusion unit configured to: if the credibility of the retrieval prediction result is less than or equal to the second credibility threshold, compute the credibility of each candidate prediction result of the image based on the classification prediction result and the retrieval prediction result, and output a predetermined number of the candidate prediction results with the highest credibility as the final prediction result.
In some embodiments, the acquisition unit is further configured to: input an image containing the object to be recognized and background into a pre-trained subject detection model to obtain the image of the object to be recognized.
In some embodiments, the classification unit is further configured to: input the image of the object to be recognized into a pre-trained binary classification model to determine whether the object belongs to a target category; and, if it does, input the image into a pre-trained classification model for recognizing images of the target category.
In some embodiments, the classification model is an Inception-ResNetv2 model; samples are selected by uniform sampling over categories, label smoothing and mixup strategies are added, a cosine learning rate decay schedule is used, and the training loss is the cross-entropy loss function.
In some embodiments, the apparatus further includes a construction unit configured to: extract features of each image in a predetermined image library, where each image in the library corresponds to a category; reduce the dimensionality of the features of each image; and build an index over the dimensionality-reduced features of each image, where the index comprises a forward index and/or an inverted index.
In some embodiments, the subject detection model uses Faster-RCNN.
In some embodiments, the binary classification model uses the ResNet-34 model.
In a third aspect, an embodiment of the present disclosure provides an electronic device for image recognition, comprising: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, wherein, when the program is executed by a processor, the method of any one of the first aspect is implemented.
The method and apparatus for image recognition provided by the embodiments of the present disclosure require no special acquisition equipment and adapt well to varied images. Combining multiple recognition prediction results improves the accuracy of the final prediction. No specially designed features are needed: a deep network model yields high-level image recognition features that are applied to classification-based and retrieval-based recognition respectively. The retrieval-based prediction scheme can support recognition of tens of thousands of plant classes with high accuracy and strong scalability. The overall result is robust to thresholds and image categories; when new categories or new images appear, the recognition behavior can be adjusted quickly, giving strong timeliness.
Brief Description of the Drawings
Other features, objects, and advantages of the present disclosure will become more apparent upon reading the following detailed description of non-limiting embodiments made with reference to the accompanying drawings:
FIG. 1 is an exemplary system architecture diagram to which an embodiment of the present disclosure can be applied;
FIG. 2 is a flowchart of one embodiment of the method for image recognition according to the present disclosure;
FIG. 3 is a schematic diagram of an application scenario of the method for image recognition according to the present disclosure;
FIG. 4 is a flowchart of another embodiment of the method for image recognition according to the present disclosure;
FIG. 5 is a schematic structural diagram of one embodiment of the apparatus for image recognition according to the present disclosure;
FIG. 6 is a schematic structural diagram of a computer system suitable for implementing an electronic device of an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the related invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the invention concerned.
It should be noted that, where there is no conflict, the embodiments of the present disclosure and the features in the embodiments may be combined with one another. The present disclosure is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
FIG. 1 shows an exemplary system architecture 100 to which embodiments of the method for image recognition or of the apparatus for image recognition of the present disclosure may be applied.
As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 over the network 104, for example to receive or send messages. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as image recognition applications, web browsers, shopping applications, search applications, instant messaging tools, email clients, and social platform software.
The terminal devices 101, 102, 103 may be hardware or software. When they are hardware, they may be various electronic devices that have a camera and support image browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and so on. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The server 105 may be a server providing various services, for example a back-end image recognition server that supports the images displayed on the terminal devices 101, 102, 103. The back-end image recognition server may analyze and otherwise process received data such as image recognition requests, and feed the processing result (for example, the category of a plant) back to the terminal device.
It should be noted that the server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the method for image recognition provided by the embodiments of the present disclosure is generally executed by the server 105; accordingly, the apparatus for image recognition is generally arranged in the server 105.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers, as the implementation requires.
With continued reference to FIG. 2, a flow 200 of one embodiment of the method for image recognition according to the present disclosure is shown. The method for image recognition comprises the following steps.
Step 201: acquire an image of the object to be recognized.
In this embodiment, the execution body of the method for image recognition (for example, the server shown in FIG. 1) may receive the image of the object to be recognized, via a wired or wireless connection, from the terminal with which the user performs image recognition. The image may be a photograph taken by the camera of a mobile terminal or a remote sensing image taken by a satellite. The object may be a human, an animal, a plant, and so on. The following description takes plants as the main example to introduce the image recognition process in detail.
In some optional implementations of this embodiment, acquiring the image of the object to be recognized includes: inputting an image containing the object to be recognized and background into a pre-trained subject detection model to obtain the image of the object to be recognized.
The subject detection model implements a subject detection method; implementations of the subject detection method include, but are not limited to, the following.
The subject detection model can automatically detect the subject region in a picture and remove the interference of the background region, which is of great significance for object classification. Subject detection relies mainly on object detection methods. Over the past decade or so, object detection algorithms for natural images can be roughly divided into a period based on traditional hand-crafted features (before 2013) and a period based on deep learning (2013 to the present). Most early object detection algorithms were built on hand-crafted features. Because effective image feature representations were lacking before the advent of deep learning, researchers had to design ever more diverse detection algorithms to compensate for the limited expressive power of hand-crafted features; at the same time, because computing resources were scarce, they had to devise more elaborate computational methods to accelerate their models. With the rise of deep learning and the steadily increasing depth of convolutional neural networks, the abstraction capability, translation invariance, and scale invariance of the networks have grown ever stronger, producing a number of representative deep learning object detection methods. These methods fall into two broad categories. The first, one-stage object detection methods, include the YOLO family (YOLO v2, YOLO9000, YOLOv3, etc.), G-CNN, and the SSD family (R-SSD, DSSD, DSOD, FSSD, etc.). The second, two-stage object detection methods, include R-CNN, SPPNet, Fast-RCNN, Faster-RCNN, FPN, and so on. Two-stage methods achieve higher detection accuracy than one-stage methods, and Faster-RCNN is one of the more stable two-stage models, so this method mainly adopts Faster-RCNN for subject detection.
Faster-RCNN was the first truly end-to-end deep learning detection algorithm and the first quasi-real-time one (17 frames per second at 640×480 pixels). The detection process of Faster-RCNN consists of three main parts. The first part uses the VGG network structure for basic feature extraction. The second part is the RPN (region proposal network), which computes the coordinates of regions that may contain objects and classifies them as foreground or background: the input feature map first passes through a 3×3 convolution to obtain the feature map required by the proposal layer, and then two 1×1 convolutions respectively compute the class scores and the bounding-box regression values of the generated anchors. The predicted proposal coordinates can be computed from the bounding-box regression values and the coordinates of the anchor in the image, using the following formulas:
Ĝ_x = P_w·d_x(P) + P_x
Ĝ_y = P_h·d_y(P) + P_y
Ĝ_w = P_w·exp(d_w(P))
Ĝ_h = P_h·exp(d_h(P))
Here d_x(P), d_y(P), d_w(P), and d_h(P) respectively denote the x-axis translation, y-axis translation, width scaling, and height scaling learned by the model; P_x and P_y are the x and y coordinates of the center of the original anchor; P_w and P_h are the width and height of the original anchor; and Ĝ denotes the resulting prediction. The proposal coordinates of the target region obtained by the RPN then pass through an ROI (region of interest) pooling layer to obtain feature vectors of equal length. In the third part, two fully connected layers followed by softmax finally produce the specific classification and more precise regression coordinates.
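For illustration only (not part of the disclosure), the anchor decoding formulas above can be sketched as a minimal Python function; the function name and tuple layout are assumptions:

```python
import math

def decode_proposal(anchor, deltas):
    """Decode RPN regression outputs into a predicted box.

    anchor: (Px, Py, Pw, Ph) -- center x/y, width, height of the anchor.
    deltas: (dx, dy, dw, dh) -- regression values learned by the model.
    Returns the predicted box (Gx, Gy, Gw, Gh) in the same format.
    """
    Px, Py, Pw, Ph = anchor
    dx, dy, dw, dh = deltas
    Gx = Pw * dx + Px       # x translation, scaled by anchor width
    Gy = Ph * dy + Py       # y translation, scaled by anchor height
    Gw = Pw * math.exp(dw)  # width scaling (exp keeps the size positive)
    Gh = Ph * math.exp(dh)  # height scaling
    return Gx, Gy, Gw, Gh
```

With zero deltas the anchor is returned unchanged, which is why the exponential parameterization is used for width and height.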
If no subject region is detected in this step, "image has no subject" is returned; otherwise the detected single or multiple subject regions proceed to the subsequent steps for recognition.
Step 202: input the image into a pre-trained classification model to obtain a classification prediction result and features of the object.
In this embodiment, the image obtained after the subject detection of step 201 is input into a pre-trained classification model for classification prediction. Optionally, before fine-grained plant classification, a relatively simple binary classification model (for coarse classification) is first built to judge whether the input image is a plant. If it is not a plant, the prediction "image is not a plant" is output directly; if it is a plant, the image enters the fine-grained plant classification model for fine-grained recognition. This model may use a common convolutional neural network such as VGG, ResNet, or GoogleNet for feature extraction, with a classification layer mapping the feature dimension to the classification dimension (two classes) appended at the end of the model for classification prediction. Using this model improves the efficiency of large-scale plant recognition, reduces computation, and quickly filters out non-plant input images. Likewise, before recognizing images of other objects, a binary classification model of the corresponding category may be used, for example one for animals or vehicles.
In this embodiment, the above binary classification model may be a deep neural network model, for example the ResNet-34 model, or of course another deep neural network model; this embodiment does not limit the specific form of the deep neural network model used, and takes the ResNet-34 model as an example for illustration.
The ResNet-34 classification network used in this embodiment performs binary classification on the subject region. It contains one 7×7 convolution layer followed by max pooling, then four stages built from 3×3 convolutions with 3, 4, 6, and 3 blocks respectively, followed by average pooling and a fully connected (fc) layer. The classification model is initialized with ImageNet pre-trained weights; the last classification layer of the ResNet-34 model is removed and replaced with a binary classification layer suitable for this example, which is randomly initialized.
In some optional implementations of this embodiment, the learning rate used for the feature layers is 1/10 of the learning rate of the classification layer.
In some optional implementations of this embodiment, an exponentially decaying learning rate schedule is used to fine-tune the pre-trained model:
α = 0.95^(epoch − num) · α_0
where epoch is the number of training epochs, num is set to 0, and α_0 is the initial learning rate, 0.1.
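As an illustrative sketch (the function name is an assumption, not part of the disclosure), the schedule above can be computed as:

```python
def exp_decay_lr(epoch, alpha0=0.1, base=0.95, num=0):
    """Exponentially decayed learning rate: alpha = base**(epoch - num) * alpha0."""
    return base ** (epoch - num) * alpha0
```

For example, at epoch 0 the rate is the initial 0.1, and it shrinks by a factor of 0.95 each epoch.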
In some optional implementations of this embodiment, the training loss uses the cross-entropy loss function:
L = −(1/m) Σ_{i=1}^{m} [ y^(i) log ŷ^(i) + (1 − y^(i)) log(1 − ŷ^(i)) ]
where m is the number of samples, y^(i) is the label of sample i, and ŷ^(i) is the predicted probability.
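For illustration only, the binary cross-entropy above can be evaluated directly (assuming labels in {0, 1} and probabilities strictly between 0 and 1):

```python
import math

def binary_cross_entropy(y_true, y_pred):
    """Mean binary cross-entropy over m samples.

    y_true: list of labels in {0, 1}; y_pred: list of probabilities in (0, 1).
    """
    m = len(y_true)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, y_pred)) / m
```

A completely uninformative prediction of 0.5 on every sample yields the loss log 2 ≈ 0.693.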
For pictures judged to be plants, a fine-grained classification model is designed for classification prediction. Suppose the input picture is a plant; because there are so many plant categories, a classification model that can discern subtle feature differences is needed. This example targets more than 24,000 plant categories, so classification with hand-designed features is not feasible in practice. In recent years, with the further development of deep learning, image classification methods represented by convolutional neural networks have surpassed traditional methods based on hand-crafted features on many image classification tasks, and the research community has shown that features extracted by convolutional neural networks are better than hand-crafted ones. The large-scale plant classification scenario of this example therefore requires a deep neural network model that can generate classification features automatically.
In this embodiment, the above classification model for fine-grained classification may be a deep neural network model, for example the Inception-ResNetv2 model, or of course another deep neural network model; this embodiment does not limit the specific form of the deep neural network model used, and takes the Inception-ResNetv2 model as an example for illustration.
The Inception-ResNetv2 model is an improvement on the Inception-V4 model, the key change being the replacement of the traditional Inception modules with Inception-ResNet modules; it achieved the best results of its time on the ILSVRC image classification benchmark.
In some optional implementations of this embodiment, the data set of objects to be recognized suffers from easily confused categories and an unbalanced distribution. Samples may be selected by uniform sampling over categories, and label smoothing and mixup strategies may be added, which can improve the accuracy of category recognition.
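A minimal sketch of the two regularization strategies named above, on plain Python lists (the function names and the per-sample formulation are assumptions for illustration; in practice these operate on tensors of a training batch):

```python
def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: keep (1 - eps) of the mass on the true class
    and spread eps uniformly over all k classes."""
    k = len(one_hot)
    return [y * (1 - eps) + eps / k for y in one_hot]

def mixup(x1, y1, x2, y2, lam):
    """Mixup: convex combination of two samples and of their labels,
    with mixing coefficient lam in [0, 1]."""
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y
```

Both soften the hard one-hot targets, which helps on easily confused, imbalanced category sets.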
The Inception-ResNetv2 model used in this embodiment is initialized with ImageNet pre-trained weights; its last classification layer is removed and replaced with a 24,000-class classification layer suitable for this example, which is randomly initialized.
The initial learning rate usable by the model of this embodiment is 0.05, and max-epoch is set to 70.
In some optional implementations of this embodiment, a cosine learning rate decay schedule may be used to reduce the learning rate.
In some optional implementations of this embodiment, L2 weight decay may be used for regularization, with a decay coefficient of 1e-4.
In some optional implementations of this embodiment, the training loss may use the cross-entropy loss function:
L = −(1/m) Σ_{i=1}^{m} Σ_{k=1}^{K} y_k^(i) log ŷ_k^(i)
where m is the number of samples, K is the number of categories, y_k^(i) is the label of sample i for category k, and ŷ_k^(i) is the corresponding predicted probability.
The classification prediction result may include the probability, i.e., score, of the object belonging to each category; a predetermined number of categories with the highest scores may be taken as the final classification prediction result, for example the top-5 result. During the forward computation of the neural network, the feature map of the last layer is also output, to serve as the input to the retrieval prediction model.
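Selecting the top-scoring categories can be sketched as follows (illustrative only; the function name is an assumption):

```python
def top_k(scores, k=5):
    """Return the k (class_index, score) pairs with the highest scores,
    in descending score order."""
    return sorted(enumerate(scores), key=lambda t: t[1], reverse=True)[:k]
```

For a 24,000-way softmax output this yields, e.g., the top-5 candidate categories and their scores.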
Step 203: compute the credibility of the classification prediction result based on the classification prediction result.
In this embodiment, the classification prediction output is the predetermined number of highest-scoring categories with their scores, together with the convolutional features. First, it is judged whether the recognition result is credible; if credible, the classification prediction result is output directly, otherwise the output features of the model are used for the feature retrieval of step 205. The judgment is made as follows:
S_th = Σ_{i=1}^{n} α_i · S_i
T = 1 if S_th > th, and T = 0 otherwise,
where n is the number of categories preset for output, α_i is the weight of each classification prediction result (the weights may be set by score rank, for example the highest-scoring classification prediction result receives the largest weight), S_i is the score returned for each classification result, S_th is the composite score of the returned results, and th is the first credibility threshold.
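The credibility check above can be sketched as a small Python function (illustrative only; the function name and return convention are assumptions):

```python
def credibility(scores, weights, th):
    """Weighted composite score S_th over the top-n results.

    scores:  the top-n classification scores S_i.
    weights: the weights alpha_i, e.g. largest weight for the top-ranked score.
    th:      the first credibility threshold.
    Returns (S_th, T) where T = 1 means the result is credible.
    """
    s_th = sum(a * s for a, s in zip(weights, scores))
    return s_th, 1 if s_th > th else 0
```

When T = 0, the pipeline falls back to the retrieval model instead of outputting the classification result.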
Step 204: if the credibility of the classification prediction result is greater than the first credibility threshold, output the classification prediction result.
In this embodiment, when Sth satisfies the threshold requirement, T=1 indicates that the classification prediction result is credible and the classification result is output; otherwise, proceed to step 205.
Step 205: if the credibility of the classification prediction result is less than or equal to the first credibility threshold, input the features into a pre-built retrieval model to obtain a retrieval prediction result.
In this embodiment, when the classification prediction result of step 203 is not credible, a retrieval model is added, the recall and credibility of the object are counted, and the results are fused for output.
In this embodiment, the number of image categories that can actually be predicted exceeds 24,000; the data set contains many similar and easily confused categories, as well as severe class imbalance. Therefore, for unreliable predictions given by the classification model, the retrieval model can be used as a supplementary scheme to further improve the prediction capability of this embodiment. The retrieval model here is an important complement to the classification model and can greatly improve the accuracy of the object category prediction results.
The retrieval model is constructed as follows: first, the backbone convolutional network of Inception-ResNetv2 from step 203 may be used as a feature extractor to extract features of the images in the image library; the retrieved images and similarities returned by the retrieval model yield the retrieved categories and similarity scores. The implementation of the feature extractor is not limited to the backbone convolutional network of Inception-ResNetv2. The retrieval model stores the correspondence between object features and categories. During retrieval, the similarity between the features of the image of the object to be recognized and the features of the images in the image library is computed; the categories corresponding to the predetermined number of most similar images in the library are the retrieved categories, and the computed similarities are the scores of the retrieval prediction result.
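The retrieval step above can be sketched with cosine similarity as the similarity measure (an assumption; the disclosure does not fix the measure, and the labels below are illustrative):

```python
import numpy as np

def retrieve(query_feat, library_feats, library_labels, top_n=3):
    """Rank library images by cosine similarity to the query feature
    and return the top_n (label, similarity) pairs."""
    q = query_feat / np.linalg.norm(query_feat)
    lib = library_feats / np.linalg.norm(library_feats, axis=1, keepdims=True)
    sims = lib @ q
    order = np.argsort(sims)[::-1][:top_n]
    return [(library_labels[i], float(sims[i])) for i in order]

# Tiny library of 2-D features with known categories.
library = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
labels = ["rose", "tulip", "lily"]
result = retrieve(np.array([1.0, 0.1]), library, labels, top_n=2)
```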
In some optional implementations of this embodiment, when the retrieval library is actually built, dimensionality reduction may be applied to the features as needed in order to speed up retrieval. Optional dimensionality reduction methods include PCA, LDA, LLE, etc. The reduced features are more expressive than the original features, with very little information loss, yet they can greatly accelerate retrieval and improve the fluency of the overall recognition system.
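Of the reduction methods listed, PCA can be sketched via SVD of the mean-centered feature matrix (a minimal sketch, not the embodiment's implementation):

```python
import numpy as np

def pca_reduce(feats, dim):
    """Project features onto their top `dim` principal components
    using the SVD of the mean-centered feature matrix."""
    centered = feats - feats.mean(axis=0)
    # Rows of vt are the principal directions, ordered by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:dim].T

# Reduce 2-D features to 1-D before indexing.
features = np.array([[2.0, 0.1], [4.0, 0.2], [6.0, 0.3], [8.0, 0.4]])
reduced = pca_reduce(features, dim=1)
```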
Then an index may be built on the features extracted from the image library to obtain the retrieval library required by the retrieval module. Retrieval methods include forward-index methods such as sequential search and tree search, whose retrieval speed is relatively slow; inverted-index methods, represented by hash retrieval, can greatly improve retrieval speed. A hash table converts a feature into an integer through a fixed algorithmic function, the so-called hash function, then takes the remainder of this number with respect to the array length; the remainder is used as the index into the array, and the value is stored in the array slot with that index. When the hash table is used for retrieval, the hash function is applied again to convert the feature into the corresponding array index, and the value is fetched from that slot, so that the positioning performance of the array can be fully exploited for data retrieval.
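The hash-table mechanics just described can be sketched directly (a toy index with hypothetical names, using chaining for collisions; not the embodiment's retrieval library):

```python
class FeatureHashIndex:
    """Toy hash index following the scheme above: hash the (quantized)
    feature vector to an integer, take it modulo the array length, and
    store the value in the slot with that index, chaining collisions."""

    def __init__(self, size=1024):
        self.size = size
        self.slots = [[] for _ in range(size)]

    def _bucket(self, feature):
        # Quantize so that identical features always hash identically.
        return hash(tuple(round(x, 3) for x in feature)) % self.size

    def insert(self, feature, value):
        self.slots[self._bucket(feature)].append((list(feature), value))

    def lookup(self, feature):
        # Re-hash to locate the slot, then scan its (short) chain.
        return [v for f, v in self.slots[self._bucket(feature)]
                if f == list(feature)]

index = FeatureHashIndex()
index.insert([0.1, 0.2, 0.3], "rose")
```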
Step 206: calculate the credibility of the retrieval prediction result based on the retrieval prediction result.
In this embodiment, when the prediction given by the classification model is not credible, the retrieval model extracts the features of the image to be retrieved and performs retrieval in the retrieval library. For each retrieval, the top-N predictions are retrieved first and the occurrence count and similarity of each category are counted; the results are index-sorted by a weighting of occurrence count and score, and finally the highest predetermined number of (for example, top-10) prediction results and scores are given, each prediction containing the predicted category and its credibility. The credibility is calculated as follows, where k and ki denote the number of images of the i-th category among the top-N predictions, αi is the weight of each returned result (the same as the weights in the classification model), Si is the similarity of the results returned by the retrieval model, Pi denotes the category credibility of the image to be retrieved being predicted as the i-th category, Scorei is the similarity threshold, Li is the credibility of the final result, and Lth1 is the composite score of the results returned by the retrieval model:
Li = α*Pi + (1-α)*Scorei
The credibility is then judged in the same way as the category prediction credibility in step 203, where th1 is the second credibility threshold; th1 should be chosen in a range different from the th used for the classification prediction.
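Under one plausible reading of the symbols above, with Pi taken as the fraction ki/k of top-N neighbours belonging to class i (an assumption for illustration), Li can be computed as:

```python
def retrieval_credibility(k_i, k, score_i, alpha):
    """Li = alpha * Pi + (1 - alpha) * Scorei, with Pi assumed here to be
    the fraction k_i / k of the top-N retrieved images in class i."""
    p_i = k_i / k
    return alpha * p_i + (1 - alpha) * score_i

# A class seen 6 times in the top-10 with similarity 0.8, alpha = 0.5.
l_i = retrieval_credibility(6, 10, 0.8, 0.5)
```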
Step 207: if the credibility of the retrieval prediction result is greater than the second credibility threshold, output the retrieval prediction result.
In this embodiment, if the credibility of the retrieval prediction result meets the requirement (T=1, i.e., the retrieval prediction result is credible), the retrieval prediction result is output; otherwise, the fusion result of the classification prediction result and the retrieval prediction result is output.
Continuing to refer to FIG. 3, FIG. 3 is a schematic diagram of an application scenario of the method for image recognition according to this embodiment. In the application scenario of FIG. 3, after acquiring an image that includes background, the server detects the subject of the image, i.e., the image of the object to be recognized, and inputs it into the classification model to obtain a classification prediction result. If the classification prediction result is computed to be not credible, the image is input into the retrieval model. The retrieval prediction result obtained is the predetermined number of images of known categories with the highest similarity to the image, together with the similarities (scores). The credibility of the retrieval prediction result is calculated from the retrieval prediction result; if credible, the retrieval prediction result is taken as the final result; otherwise, the retrieval prediction result and the classification prediction result are fused.
The method provided by the above embodiments of the present disclosure improves the accuracy of the final prediction by combining the classification prediction result and the retrieval prediction result.
With further reference to FIG. 4, a flow 400 of yet another embodiment of the method for image recognition is shown. The flow 400 of the method for image recognition includes the following steps:
Step 401: acquire an image of the object to be recognized.
Step 402: input the image into a pre-trained classification model to obtain a classification prediction result and features of the object.
Step 403: calculate the credibility of the classification prediction result based on the classification prediction result.
Step 404: if the credibility of the classification prediction result is greater than the first credibility threshold, output the classification prediction result.
Step 405: if the credibility of the classification prediction result is less than or equal to the first credibility threshold, input the features into a pre-built retrieval model to obtain a retrieval prediction result.
Step 406: calculate the credibility of the retrieval prediction result based on the retrieval prediction result.
Step 407: if the credibility of the retrieval prediction result is greater than the second credibility threshold, output the retrieval prediction result.
Steps 401-407 are basically the same as steps 201-207 and are therefore not repeated here.
Step 408: calculate the credibilities of the various prediction results of the image based on the classification prediction result and the retrieval prediction result.
In this embodiment, at prediction time, if the prediction obtained by the classification model is credible, the prediction result of the classification model is used; if the prediction result of the classification model is not credible but the retrieval result of the retrieval model is credible, the retrieval result of the retrieval model is used; if neither is credible, the prediction results of the two need to be fused. The fusion scheme is a vote on the prediction results: if the two prediction results have an intersection, the credibilities of the intersecting items are weighted and summed; the prediction results are then sorted by credibility, and finally the highest predetermined number of (for example, top-3) categories of the sorted result are given as the prediction result. For a label appearing in both result lists, with α the weight of the classification prediction result, the final credibility is the weighted sum: final credibility = α*(classification credibility) + (1-α)*(retrieval credibility).
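The voting fusion just described can be sketched as follows (illustrative labels and weights; a minimal sketch rather than the embodiment's implementation):

```python
def fuse_predictions(cls_preds, ret_preds, alpha=0.5, top_k=3):
    """Vote-based fusion: labels appearing in both lists get a weighted
    sum of their credibilities (weight alpha for the classification
    side); labels from only one list keep their single credibility.
    Results are sorted by fused credibility and the top_k returned."""
    cls_map, ret_map = dict(cls_preds), dict(ret_preds)
    fused = {}
    for label in set(cls_map) | set(ret_map):
        if label in cls_map and label in ret_map:
            fused[label] = alpha * cls_map[label] + (1 - alpha) * ret_map[label]
        else:
            fused[label] = cls_map.get(label, ret_map.get(label))
    ranked = sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

# "rose" appears in both lists, so its credibilities are combined.
final = fuse_predictions([("rose", 0.5), ("tulip", 0.3)],
                         [("rose", 0.9), ("lily", 0.4)],
                         alpha=0.5, top_k=2)
```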
Step 409: output the predetermined number of prediction results with the highest credibility among the various prediction results as the final prediction result.
In this embodiment, the labels of the classification results are re-sorted by their final credibilities, and the highest predetermined number of (for example, top-3) categories of the sorted result are output as the final prediction result.
As can be seen from FIG. 4, compared with the embodiment corresponding to FIG. 2, the flow 400 of the method for image recognition in this embodiment embodies the step of fusing the classification prediction result with the retrieval prediction result, thereby further improving the accuracy and extensibility of image recognition. The overall result is robust to thresholds and image categories; when new categories or newly added images appear, the time needed to adjust the recognition effect is short and the timeliness is strong.
With further reference to FIG. 5, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for image recognition. This apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus can be specifically applied to various electronic devices.
As shown in FIG. 5, the apparatus 500 for image recognition of this embodiment includes: an acquisition unit 501, a classification unit 502, a first calculation unit 503, a retrieval unit 504, a second calculation unit 505, and an output unit 506. The acquisition unit 501 is configured to acquire an image of an object to be recognized; the classification unit 502 is configured to input the image into a pre-trained classification model to obtain a classification prediction result and features of the object; the first calculation unit 503 is configured to calculate the credibility of the classification prediction result based on the classification prediction result; the retrieval unit 504 is configured to, if the credibility of the classification prediction result is less than or equal to the first credibility threshold, input the features into a pre-built retrieval model to obtain a retrieval prediction result; the second calculation unit 505 is configured to calculate the credibility of the retrieval prediction result based on the retrieval prediction result; and the output unit 506 is configured to, if the credibility of the retrieval prediction result is greater than the second credibility threshold, output the retrieval prediction result.
In this embodiment, for the specific processing of the acquisition unit 501, the classification unit 502, the first calculation unit 503, the retrieval unit 504, the second calculation unit 505, and the output unit 506 of the apparatus 500 for image recognition, reference may be made to steps 201-207 in the embodiment corresponding to FIG. 2.
In some optional implementations of this embodiment, the output unit 506 is further configured to output the classification prediction result if the credibility of the classification prediction result is greater than the first credibility threshold.
In some optional implementations of this embodiment, the apparatus 500 further includes a fusion unit (not shown in the figures), configured to: if the credibility of the retrieval prediction result is less than or equal to the second credibility threshold, calculate the credibilities of the various prediction results of the image based on the classification prediction result and the retrieval prediction result; and output the predetermined number of prediction results with the highest credibility among the various prediction results as the final prediction result.
In some optional implementations of this embodiment, the acquisition unit 501 is further configured to input an image including the object to be recognized and a background into a pre-trained subject detection model to obtain the image of the object to be recognized.
In some optional implementations of this embodiment, the classification unit 502 is further configured to: input the image of the object to be recognized into a pre-trained binary classification model to judge whether the object belongs to a target category; and if it does, input the image into a classification model pre-trained to recognize images of the target category.
In some optional implementations of this embodiment, the classification model is the Inception-ResNetv2 model; samples are selected by category-uniform sampling, label smoothing and mixing strategies are added, a cosine learning rate decay strategy is adopted, and the training loss uses the cross-entropy loss function.
In some optional implementations of this embodiment, the apparatus 500 further includes a construction unit (not shown in the figures), configured to: extract features of each image in a predetermined image library, where each image in the image library corresponds to a category; perform dimensionality reduction on the features of each image; and build an index on the reduced features of each image, where the index includes a forward index and/or an inverted index.
In some optional implementations of this embodiment, the subject detection model adopts Faster-RCNN.
In some optional implementations of this embodiment, the binary classification model adopts the ResNet-34 model.
Referring now to FIG. 6, a schematic structural diagram of an electronic device (for example, the server in FIG. 1) 600 suitable for implementing the embodiments of the present disclosure is shown. The server shown in FIG. 6 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing apparatus (for example, a central processing unit, a graphics processing unit, etc.) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage apparatus 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing apparatus 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following apparatuses may be connected to the I/O interface 605: an input apparatus 606 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output apparatus 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage apparatus 608 including, for example, a magnetic tape, a hard disk, etc.; and a communication apparatus 609. The communication apparatus 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 shows the electronic device 600 with various apparatuses, it should be understood that it is not required to implement or have all the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided. Each block shown in FIG. 6 may represent one apparatus, or may represent multiple apparatuses as needed.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 609, or installed from the storage apparatus 608, or installed from the ROM 602. When the computer program is executed by the processing apparatus 601, the above functions defined in the methods of the embodiments of the present disclosure are executed. It should be noted that the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the embodiments of the present disclosure, the computer-readable storage medium may be any tangible medium that contains or stores a program, which may be used by or in combination with an instruction execution system, apparatus, or device. In the embodiments of the present disclosure, the computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or may exist alone without being assembled into the electronic device. The above computer-readable medium carries one or more programs, and when the above one or more programs are executed by the electronic device, the electronic device is caused to: acquire an image of an object to be recognized; input the image into a pre-trained classification model to obtain a classification prediction result and features of the object; calculate the credibility of the classification prediction result based on the classification prediction result; if the credibility of the classification prediction result is less than or equal to a first credibility threshold, input the features into a pre-built retrieval model to obtain a retrieval prediction result; calculate the credibility of the retrieval prediction result based on the retrieval prediction result; and if the credibility of the retrieval prediction result is greater than a second credibility threshold, output the retrieval prediction result.
Computer program code for executing the operations of the embodiments of the present disclosure may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages, such as Java, Smalltalk, and C++, and also include conventional procedural programming languages, such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functions, and operations of possible implementations of the systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquisition unit, a classification unit, a first calculation unit, a retrieval unit, a second calculation unit, and an output unit. The names of these units do not constitute a limitation on the units themselves under certain circumstances; for example, the acquisition unit may also be described as "a unit that acquires an image of an object to be recognized".
The above description is merely a preferred embodiment of the present disclosure and an illustration of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the present disclosure is not limited to the technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the inventive concept, for example, a technical solution formed by replacing the above features with technical features with similar functions disclosed in (but not limited to) the present disclosure.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910744806.4A | 2019-08-13 | 2019-08-13 | Method and device for image recognition |
| Publication Number | Publication Date |
|---|---|
| CN110458107A (en) | 2019-11-15 |
| CN110458107B (en) | 2023-06-16 |