Technical Field
The invention belongs to the technical field of image processing, and more particularly relates to a salient object detection method that can be used for image segmentation, object recognition, image restoration and adaptive image compression.
Background Art
The goal of salient object detection is to detect the most attention-grabbing object regions in an image completely and consistently. In recent years, with the expansion of visual information networks and the rapid growth of the e-commerce industry, the importance of image salient object detection has become increasingly prominent. As a new tool for analysing and processing high-dimensional data, low-rank matrix recovery can effectively discover the intrinsic low-dimensional space of high-dimensional observations that are corrupted by strong noise or partially missing. The technique has been widely applied in computer vision, machine learning and statistical analysis, and several saliency detection methods based on low-rank matrix recovery have emerged.
Yan et al. proposed a saliency detection method based on sparse coding and low-rank matrix recovery (Yan J, Zhu M, Liu H, et al. Visual saliency detection via sparsity pursuit. Signal Processing Letters, 2010, 17(8): 739-742). The method proceeds as follows: first, the image is divided into 8×8 blocks; then the features of the blocks are extracted and sparsely coded, yielding a sparse coding matrix that represents the input image; finally, the sparse coding matrix is decomposed by robust principal component analysis, and the resulting sparse matrix defines the saliency of the corresponding blocks. The method simply assumes that the salient object is small, so that its features are sparse, and it can therefore detect small salient objects in simple scenes fairly accurately. Its remaining shortcoming is that it has difficulty detecting large salient objects completely and consistently.
Shen et al. proposed a salient object detection method that combines prior information with low-rank matrix recovery (Shen X, Wu Y. A unified approach to salient object detection via low rank matrix recovery. IEEE Conference on Computer Vision and Pattern Recognition, 2012: 853-860). The method first performs superpixel segmentation on the input image and extracts superpixel features to construct a feature matrix; it then uses the MSRA database to learn prior knowledge and a feature transformation matrix, transforms the matrix composed of all superpixel features, and finally decomposes the transformed feature matrix by low-rank matrix recovery. The feature transformation makes the features of salient objects sparser and the background features more similar. However, because a large salient object is no longer sparse, the method does not fundamentally solve the problem of detecting large salient objects.
Summary of the Invention
The purpose of the present invention is to address the above deficiencies of the prior art by proposing a salient object detection method based on sparse subspace clustering and low-rank representation, so as to detect large salient objects more completely and consistently and to improve detection precision and recall.
The technical idea of the present invention is as follows: the input image is segmented into superpixels and the resulting superpixels are clustered. Because the features of the superpixels within the same cluster are similar, the present invention assumes that they lie in the same low-dimensional subspace. A dictionary is built according to color contrast, and on this basis a joint low-rank representation model is constructed; solving this model performs a low-rank sparse decomposition of each cluster's feature matrix. Finally, the low-rank representation coefficients obtained from the decomposition define the saliency of the corresponding cluster, and the saliency values are mapped back into the input image according to their spatial positions to obtain the saliency map of the input image.
The implementation steps of the present invention are as follows:
(1) Segment the input image I into N superpixels {p_i | i = 1, 2, ..., N};
(2) Cluster all the superpixels to obtain J clusters {C_j | j = 1, 2, ..., J} of the input image I, where each cluster C_j contains m_j superpixels, i.e. C_j = {p_{j,k} | k = 1, 2, ..., m_j};
(3) Construct the cluster feature matrices:
For the k-th superpixel p_{j,k} contained in each cluster C_j = {p_{j,k} | k = 1, 2, ..., m_j}, extract the color, edge and texture features of every pixel in the superpixel to build that pixel's feature vector of dimension M = 53, take the mean vector x_{j,k} of the feature vectors of all pixels in p_{j,k} as the feature of the superpixel, and construct the feature matrix of cluster C_j as X_j = [x_{j,1}, x_{j,2}, ..., x_{j,k}, ..., x_{j,m_j}], k = 1, 2, ..., m_j;
(4) Compute the color contrast of all superpixels and sort the superpixel features in descending order of color contrast to obtain the dictionary D of the low-rank representation algorithm;
(5) Construct the joint low-rank representation model from the feature matrices X_j and the dictionary D:

min_{{Z_j},{E_j}} Σ_{j=1}^{J} ||Z_j||_* + λ||E||_{2,1}   s.t.  X_j = D Z_j + E_j,  j = 1, 2, ..., J

where Z_j is the low-rank representation coefficient matrix, E_j is the reconstruction error, λ is a constant factor that balances the low-rank component against the sparse component, ||·||_* is the matrix nuclear norm, i.e. the sum of all singular values of a matrix, ||E||_{2,1} is the l_{2,1} norm of the reconstruction error matrix E = [E_1, E_2, ..., E_J], i.e. ||E||_{2,1} = Σ_v ( Σ_u E(u,v)^2 )^{1/2}, and E(u,v) denotes the element in row u and column v of E;
Solve the above joint low-rank representation model, performing a low-rank sparse decomposition of the feature matrix X_j of each cluster C_j, to obtain the set of optimal low-rank representation coefficients {Z_j* | j = 1, 2, ..., J};
(6) Compute the saliency factor L(C_j) of each cluster from its optimal low-rank representation coefficients Z_j*: denote by Z_j^h the first m rows of the coefficient matrix Z_j* and by Z_j^l its last m rows; L(C_j) is computed from ||Z_j^h||_{1,1} and ||Z_j^l||_{1,1}, so that a cluster whose coefficients concentrate on the high-contrast dictionary atoms receives a large saliency factor. Here ||·||_{1,1} denotes the l_{1,1} norm of a matrix, i.e. the sum of the absolute values of all its elements, and E(u,v) style indexing refers to the element in row u and column v;
(7) Map the saliency factor L(C_j) of each cluster C_j into the input image I according to its spatial position, obtaining the saliency map of the input image I.
Compared with the prior art, the present invention has the following advantages:
First, it solves the problem that conventional methods have difficulty detecting large salient objects.
The present invention performs superpixel segmentation on the input image, clusters the resulting superpixels, applies a low-rank representation algorithm to perform a low-rank sparse decomposition of the cluster feature matrices, and defines the saliency of each cluster from the low-rank representation coefficients. This overcomes the difficulty that conventional saliency detection methods based on low-rank matrix recovery have in detecting large salient objects completely and consistently.
Second, it improves the robustness of saliency detection for images with complex backgrounds.
Conventional saliency detection methods based on low-rank matrix recovery simply assume that the background features belong to a single low-dimensional subspace and apply the low-rank sparse decomposition to the features of the whole image. Because a complex background contains different texture regions, the background features are then not low-rank and this assumption no longer holds. Since the features of the superpixels within the same cluster are similar, the present invention assumes that they belong to the same low-dimensional subspace and applies the low-rank representation algorithm to the cluster features. Compared with conventional methods based on low-rank matrix recovery, the present invention therefore improves the robustness of saliency detection for images with complex backgrounds.
Description of the Drawings
Fig. 1 is the implementation flowchart of the present invention;
Fig. 2 shows simulation results of the present invention on images containing large salient objects;
Fig. 3 shows simulation results of the present invention on salient object detection in images with complex backgrounds;
Fig. 4 shows the objective evaluation of the salient object detection results of the present invention.
Detailed Description
The embodiments and effects of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, the implementation steps of the present invention are as follows:
Step 1: input an image and perform superpixel segmentation.
(1a) Select an image from the MSRA-1000 database as the input image I;
(1b) Segment the input image I into N superpixels {p_i | i = 1, 2, ..., N}. Existing superpixel segmentation algorithms include Superpixel Lattice, Normalized Cuts, Turbopixels and SLIC; among them, the SLIC algorithm has clear advantages in superpixel shape, boundary adherence and running speed, so the present invention uses SLIC to segment the input image.
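A minimal sketch of this step is given below, assuming scikit-image is available. The file name, the target number of superpixels and the compactness value are illustrative choices, not values fixed by the invention.

```python
import numpy as np
from skimage import io
from skimage.segmentation import slic
from skimage.color import rgb2lab

# Read the input image I and run SLIC superpixel segmentation.
# n_segments (the target number of superpixels N) and compactness are illustrative.
image = io.imread("input.jpg")                       # H x W x 3, RGB
labels = slic(image, n_segments=200, compactness=10, start_label=0)

N = labels.max() + 1                                 # actual number of superpixels
lab = rgb2lab(image)                                 # CIELab image, used later for contrast
print(f"segmented into {N} superpixels")
```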
Step 2: cluster the superpixels.
The Laplacian sparse subspace clustering algorithm adds to the sparse subspace clustering algorithm a Laplacian term that constrains the consistency of the data features, and therefore performs better. The present invention accordingly uses Laplacian sparse subspace clustering to cluster all the superpixels, obtaining J clusters {C_j | j = 1, 2, ..., J} of the input image I, where each cluster C_j contains m_j superpixels, i.e. C_j = {p_{j,k} | k = 1, 2, ..., m_j}.
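As a simplified, hedged illustration of this step, the sketch below uses plain sparse subspace clustering (self-expressive sparse coding followed by spectral clustering on the induced affinity); the Laplacian consistency term used by the invention is omitted, and the number of clusters and the sparsity weight are illustrative. X is assumed to be the 53 x N superpixel feature matrix produced in step 3.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_subspace_clustering(X, n_clusters, alpha=0.01):
    """Plain SSC sketch: express each column of X (features x N) as a sparse
    combination of the other columns, then spectrally cluster the affinity.
    The Laplacian regularization of the patented method is not reproduced."""
    N = X.shape[1]
    C = np.zeros((N, N))
    for i in range(N):
        mask = np.arange(N) != i
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[:, mask], X[:, i])               # x_i ~ X_{-i} c_i, c_i sparse
        C[mask, i] = lasso.coef_
    W = np.abs(C) + np.abs(C).T                       # symmetric affinity matrix
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                assign_labels="discretize").fit_predict(W)
    return labels

# Example use (n_clusters is a free parameter of this sketch):
# cluster_ids = sparse_subspace_clustering(X, n_clusters=8)
```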
Step 3: construct the cluster feature matrices.
(3a) Extract the features of the superpixels in each cluster:
(3a1) For the k-th superpixel p_{j,k} contained in each cluster C_j = {p_{j,k} | k = 1, 2, ..., m_j}, extract the color, edge and texture features of every pixel in p_{j,k}: the color feature of each pixel has 5 dimensions; the edge feature of each pixel is extracted with pyramid filters at 3 scales and 4 orientations and has 12 dimensions; the texture feature of each pixel is extracted with Gabor filters at 3 scales and 12 orientations and has 36 dimensions. Concatenating the color, edge and texture features gives the feature vector of each pixel, whose dimension is M = 53;
(3a2) From the 53-dimensional feature vectors of the pixels in superpixel p_{j,k} extracted in (3a1), take the mean of the feature vectors of all pixels contained in p_{j,k} to obtain the feature x_{j,k} of p_{j,k}, k = 1, 2, ..., m_j;
(3b) Construct the feature matrix:
From the features of the m_j superpixels in cluster C_j extracted in (3a), construct the feature matrix of cluster C_j as X_j = [x_{j,1}, x_{j,2}, ..., x_{j,m_j}] ∈ R^{M×m_j}.
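A hedged sketch of this step follows. The exact 5-dimensional color descriptor and the pyramid edge filters are not fully specified here, so the sketch substitutes RGB plus hue/saturation for the color part and Gabor responses for the edge part; filter frequencies are illustrative. Only the per-pixel feature extraction, the per-superpixel mean pooling, and the grouping into per-cluster matrices X_j are shown.

```python
import numpy as np
from skimage.color import rgb2gray, rgb2hsv
from skimage.filters import gabor

def pixel_features(image):
    """Per-pixel 53-D features: 5 color + 12 edge + 36 texture dimensions.
    Color uses RGB + hue/saturation and edges use Gabor responses as
    stand-ins for the descriptors named in the text."""
    gray = rgb2gray(image)
    hsv = rgb2hsv(image)
    color = np.concatenate([image / 255.0, hsv[..., :2]], axis=2)        # 5 dims

    def gabor_bank(n_scales, n_orient):
        maps = []
        for s in range(n_scales):
            for o in range(n_orient):
                real, _ = gabor(gray, frequency=0.1 * 2 ** s,
                                theta=np.pi * o / n_orient)
                maps.append(real)
        return np.stack(maps, axis=2)

    edge = gabor_bank(3, 4)                                              # 12 dims
    texture = gabor_bank(3, 12)                                          # 36 dims
    return np.concatenate([color, edge, texture], axis=2)                # H x W x 53

def cluster_feature_matrices(feats, labels, cluster_ids):
    """Mean-pool pixel features inside each superpixel, then group the
    superpixel features by cluster: X_j has shape M x m_j with M = 53."""
    N = labels.max() + 1
    sp_feat = np.stack([feats[labels == i].mean(axis=0) for i in range(N)], axis=1)
    X = {j: sp_feat[:, cluster_ids == j] for j in np.unique(cluster_ids)}
    return sp_feat, X
```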
Step 4: construct the color-contrast-based dictionary.
(4a) Compute the color contrast of each superpixel:

R(p_i) = Σ_{j=1, j≠i}^{N} ||c_i - c_j||_2^2

where c_i denotes the mean CIElab-space color feature of all pixels in superpixel p_i, i.e. c_i = (1/|p_i|) Σ_{I_m ∈ p_i} c(I_m); commonly used color space models include RGB, HSI and CIElab, and the color features used by the present invention are taken from the CIElab space; c(I_m) denotes the color feature of any pixel I_m in superpixel p_i; |p_i| is the total number of pixels in superpixel p_i; c_j denotes the mean color feature of all pixels in superpixel p_j; |p_j| is the total number of pixels in superpixel p_j; and ||c_i - c_j||_2^2 is the squared Euclidean distance between c_i and c_j;
(4b) Sort the superpixel features in descending order of color contrast to obtain the dictionary of the low-rank representation algorithm, D = [x_{s_1}, x_{s_2}, ..., x_{s_N}], where x_{s_i} is the feature vector of superpixel p_{s_i}, 1 ≤ s_i ≤ N, and the ordering satisfies R(p_{s_1}) ≥ R(p_{s_2}) ≥ ... ≥ R(p_{s_N}).
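A sketch of this step is shown below, continuing the variables from the earlier sketches. The contrast is computed as the plain sum of squared CIElab mean-color distances to all other superpixels, matching the reconstruction above; any additional weighting used in the original is an open assumption and is not reproduced.

```python
import numpy as np

def contrast_dictionary(sp_feat, labels, lab_image):
    """Color contrast R(p_i) per superpixel (sum of squared CIElab mean-color
    distances to all other superpixels), then the dictionary D obtained by
    sorting the superpixel features in descending order of contrast."""
    N = labels.max() + 1
    # mean CIElab color c_i of each superpixel p_i
    c = np.stack([lab_image[labels == i].mean(axis=0) for i in range(N)], axis=0)
    d2 = ((c[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)   # ||c_i - c_j||^2
    contrast = d2.sum(axis=1)                                 # R(p_i)
    order = np.argsort(-contrast)                             # descending contrast
    D = sp_feat[:, order]                                     # M x N dictionary
    return contrast, order, D
```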
Step 5: construct the joint low-rank representation model from the feature matrices X_j and the dictionary D.
Because the features of the superpixels within the same cluster are similar, Z_j has a low-rank property and the reconstruction error E_j is sparse; moreover, the nuclear norm is the optimal convex approximation of the matrix rank. Based on these principles, the joint low-rank representation model is constructed from the feature matrices X_j and the dictionary D by imposing a joint low-rank constraint on the representation coefficients {Z_j | j = 1, 2, ..., J} of the clusters {C_j | j = 1, 2, ..., J} and an l_{2,1}-norm constraint on the reconstruction error matrix E, expressed as follows:

min_{{Z_j},{E_j}} Σ_{j=1}^{J} ||Z_j||_* + λ||E||_{2,1}   s.t.  X_j = D Z_j + E_j,  j = 1, 2, ..., J

where Z_j is the low-rank representation coefficient matrix of the feature matrix X_j under the dictionary D, E_j is the reconstruction error of cluster C_j, λ is a constant factor that balances the low-rank component against the sparse component, D ∈ R^{M×N} is the dictionary built from color contrast, ||·||_* is the matrix nuclear norm, i.e. the sum of all singular values of a matrix, and ||E||_{2,1} is the l_{2,1} norm of the reconstruction error matrix E = [E_1, E_2, ..., E_J], i.e. ||E||_{2,1} = Σ_v ( Σ_u E(u,v)^2 )^{1/2}.
Solve the above joint low-rank representation model, performing a low-rank sparse decomposition of the feature matrix X_j of each cluster C_j, to obtain the set of optimal low-rank representation coefficients {Z_j* | j = 1, 2, ..., J}.
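The sketch below shows one way such a model can be solved, using an inexact augmented Lagrange multiplier (ALM) scheme with an auxiliary variable, singular value thresholding for the nuclear norm and column-wise shrinkage for the l_{2,1} norm. It is a simplification under two assumptions: each cluster is solved independently (the joint coupling of all clusters through the stacked error matrix E is dropped), and the parameter values (lam, rho, mu) are illustrative rather than taken from the patent.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def l21_shrink(M, tau):
    """Column-wise shrinkage: proximal operator of tau * l_{2,1} norm."""
    norms = np.linalg.norm(M, axis=0)
    scale = np.maximum(norms - tau, 0.0) / (norms + 1e-12)
    return M * scale

def lrr(X, D, lam=0.1, rho=1.1, mu=1e-2, mu_max=1e6, tol=1e-6, max_iter=500):
    """Inexact-ALM sketch for  min ||Z||_* + lam * ||E||_{2,1}
    s.t. X = D Z + E, applied to one cluster feature matrix X (M x m_j)."""
    M, m = X.shape
    N = D.shape[1]
    Z = np.zeros((N, m)); J = np.zeros((N, m)); E = np.zeros((M, m))
    Y1 = np.zeros((M, m)); Y2 = np.zeros((N, m))
    DtD_inv = np.linalg.inv(np.eye(N) + D.T @ D)          # precomputed system matrix
    for _ in range(max_iter):
        J = svt(Z + Y2 / mu, 1.0 / mu)                    # nuclear-norm step
        Z = DtD_inv @ (D.T @ (X - E) + J + (D.T @ Y1 - Y2) / mu)
        E = l21_shrink(X - D @ Z + Y1 / mu, lam / mu)     # l_{2,1} step
        r1 = X - D @ Z - E                                # constraint residuals
        r2 = Z - J
        Y1 += mu * r1
        Y2 += mu * r2
        mu = min(rho * mu, mu_max)
        if max(np.abs(r1).max(), np.abs(r2).max()) < tol:
            break
    return Z, E
```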
Step 6: compute the saliency factor L(C_j) of cluster C_j from the low-rank representation coefficients Z_j* obtained by the low-rank sparse decomposition of the feature matrix X_j.
Assume that the features of all superpixels in the image together form the dictionary D = [x_{s_1}, x_{s_2}, ..., x_{s_N}], each column of which corresponds to one superpixel feature. Then any superpixel feature x_{s_i} in the dictionary can be expressed as a linear combination of all the superpixel features in D, i.e. x_{s_i} = D z_i, where z_i(k) is the representation coefficient of feature x_{s_i} on feature x_{s_k}. A larger z_i(k) indicates that x_{s_i} and x_{s_k} have similar mappings in the feature space spanned by the column vectors of D; z_i(k) therefore measures the similarity between superpixel p_{s_i} and superpixel p_{s_k}, that is, the representation coefficients of a superpixel feature reveal how similar that feature is to the corresponding features in the dictionary.
In summary, since the dictionary D is obtained by sorting the superpixel features in descending order of color contrast, denote by Z_j^h the first m rows of the optimal coefficient matrix Z_j* and by Z_j^l its last m rows: ||Z_j^h||_{1,1} reflects how similar the cluster features are to the high-contrast superpixel features, and ||Z_j^l||_{1,1} reflects how similar they are to the low-contrast superpixel features. The saliency factor L(C_j), computed from these two quantities, can then be used to measure the saliency of cluster C_j: a cluster whose coefficients concentrate on the high-contrast atoms receives a large L(C_j). Here ||·||_{1,1} denotes the l_{1,1} norm of a matrix, i.e. ||A||_{1,1} = Σ_u Σ_v |A(u,v)|, where |A(u,v)| is the absolute value of the element in row u and column v of A.
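The exact combination of the two l_{1,1} norms appears only as a figure in the original text; the sketch below assumes the ratio of the high-contrast block norm to the low-contrast block norm, which matches the described behaviour, with m treated as a free parameter.

```python
import numpy as np

def saliency_factor(Z, m, eps=1e-12):
    """Saliency of one cluster from its optimal coefficients Z (N x m_j), whose
    rows follow the dictionary's descending-contrast order. The ratio form is
    an assumption; the patent gives the formula only as a figure."""
    high = np.abs(Z[:m]).sum()     # ||Z_j^h||_{1,1}: first m rows (high-contrast atoms)
    low = np.abs(Z[-m:]).sum()     # ||Z_j^l||_{1,1}: last m rows (low-contrast atoms)
    return high / (low + eps)
```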
Step 7: obtain the saliency map of the input image.
Map the saliency factor L(C_j) of each cluster C_j into the input image according to its spatial position to obtain the saliency map of the input image I.
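A minimal sketch of the mapping step, continuing the variables from the earlier sketches; `factors` is assumed to map each cluster id to its saliency factor L(C_j), and the final min-max normalisation is only for display.

```python
import numpy as np

def saliency_map(labels, cluster_ids, factors):
    """Assign each superpixel the saliency factor of its cluster, paint it
    onto the pixels of that superpixel, and normalise to [0, 1]."""
    sal_sp = np.array([factors[cluster_ids[i]] for i in range(labels.max() + 1)])
    smap = sal_sp[labels]                                   # per-pixel saliency
    smap = (smap - smap.min()) / (smap.max() - smap.min() + 1e-12)
    return smap
```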
The effect of the present invention can be further illustrated by the following simulations.
1. Simulation methods and conditions
The methods used in the simulations include the method of the present invention and the following ten existing methods:
SP: saliency detection based on sparse coding and robust principal component analysis;
UA: saliency detection based on feature transformation and robust principal component analysis;
RC: saliency detection based on region contrast;
HC: saliency detection based on histogram contrast;
FT: saliency detection based on frequency tuning;
GBVS: saliency detection based on graph theory;
ITTI: saliency detection based on local contrast;
DVA: saliency detection based on incremental coding length;
AIM: saliency detection based on self-information;
SIG: salient region detection based on image signatures.
The experimental data are taken from the MSRA-1000 database, and all simulation experiments are implemented in Matlab 2010 under the Windows 7 operating system.
Simulation 1
A group of images containing large salient objects is selected from the MSRA-1000 database, in each of which the object occupies more than 25% of the whole image. The present invention and the existing SP and UA methods are applied to this group of input images for saliency detection; the results are shown in Fig. 2, where:
column (a) of Fig. 2 shows the images to be detected,
column (b) of Fig. 2 shows the saliency detection results of the SP method,
column (c) of Fig. 2 shows the saliency detection results of the UA method,
column (d) of Fig. 2 shows the saliency detection results of the method of the present invention,
column (e) of Fig. 2 shows the ground-truth images.
As can be seen from column (b) of Fig. 2, the saliency maps produced by the SP method assign large saliency values to the edges of the salient objects; the method cannot detect large salient objects completely and consistently. The saliency maps obtained by the UA method are shown in column (c) of Fig. 2; although this method detects large salient objects fairly accurately, it has difficulty assigning consistent saliency values within the object region. As shown in column (d) of Fig. 2, the salient object detection method proposed by the present invention not only detects the salient objects accurately but also assigns consistent saliency values within the object region.
Simulation 2
A group of images containing complex backgrounds is selected from the MSRA-1000 database, and the present invention and the existing SP and UA methods are applied to this group of input images for saliency detection; the results are shown in Fig. 3, where:
column (a) of Fig. 3 shows the images to be detected,
column (b) of Fig. 3 shows the saliency detection results of the SP method,
column (c) of Fig. 3 shows the saliency detection results of the UA method,
column (d) of Fig. 3 shows the saliency detection results of the method of the present invention,
column (e) of Fig. 3 shows the ground-truth images.
As can be seen from column (b) of Fig. 3, the saliency maps obtained by the SP method assign large saliency values to the complex background and cannot detect the salient objects accurately. The saliency maps obtained by the UA method are shown in column (c) of Fig. 3; although this method detects the salient objects in complex backgrounds fairly accurately, it has difficulty suppressing the background noise. As shown in column (d) of Fig. 3, the salient object detection method proposed by the present invention not only detects the salient objects completely and consistently but also adequately suppresses the noise in the complex background.
Simulation 3: based on four commonly used evaluation criteria for salient object detection (the precision-recall curve, the F-measure curve, the mean absolute error, and the average precision, recall and F-measure under an adaptive threshold), the present invention is compared with the existing UA, RC, HC, FT, GBVS, ITTI, DVA, AIM and SIG saliency detection methods on the 1000 images of the MSRA-1000 database; the results are shown in Fig. 4, where:
Fig. 4(a) shows the precision-recall curves,
Fig. 4(b) shows the F-measure curves,
Fig. 4(c) shows the histograms of average precision, recall and F-measure under the adaptive threshold,
Fig. 4(d) shows the mean absolute errors.
As can be seen from Fig. 4(a), the method of the present invention reaches the precision-recall performance of the mainstream algorithms. Fig. 4(b) shows that the method of the present invention obtains the largest F-measure over a wide range of thresholds. Fig. 4(c) shows that the method of the present invention achieves the highest average precision, recall and F-measure. Fig. 4(d) shows that the method of the present invention differs least from the ground-truth images, i.e. it is closest to the ideal salient object detection result.