CN105574534A - Salient object detection method based on sparse subspace clustering and low-rank representation - Google Patents

Salient object detection method based on sparse subspace clustering and low-rank representation

Info

Publication number
CN105574534A
Authority
CN
China
Prior art keywords
low
pixel
matrix
super
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510951934.8A
Other languages
Chinese (zh)
Other versions
CN105574534B (en)
Inventor
张强
梁宁
朱四洋
王龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201510951934.8A
Publication of CN105574534A
Application granted
Publication of CN105574534B
Legal status: Active
Anticipated expiration

Abstract

Translated from Chinese

The invention discloses a salient object detection method based on sparse subspace clustering and low-rank representation. Its steps are: 1. Perform superpixel segmentation on the input image and cluster the superpixels. 2. Extract the color, texture, and edge features of each superpixel in a cluster to build the cluster's feature matrix. 3. Sort all superpixel features by color contrast to build a dictionary. 4. Construct a joint low-rank representation model from the dictionary, solve it to decompose each cluster's feature matrix into low-rank representation coefficients, and compute each cluster's saliency factor. 5. Map each cluster's saliency value to the input image according to its spatial position to obtain the saliency map of the input image. The invention detects large salient objects in an image completely and consistently, suppresses background noise, and improves the robustness of salient object detection in images with complex backgrounds. It can be applied to image segmentation, object recognition, image restoration, and adaptive image compression.

Description

Salient Object Detection Method Based on Sparse Subspace Clustering and Low-Rank Representation

Technical Field

The invention belongs to the technical field of image processing and specifically relates to a salient object detection method, which can be used for image segmentation, object recognition, image restoration, and adaptive image compression.

Background Art

The goal of salient object detection is to detect, completely and consistently, the most attention-grabbing object regions in an image. In recent years, with the expansion of visual information networks and the rapid growth of e-commerce, salient object detection has become increasingly important. As a new tool for analyzing and processing high-dimensional data, low-rank matrix recovery can effectively discover the intrinsic low-dimensional subspace of high-dimensional observations that are corrupted by strong noise or partially missing. The technique has been widely applied in computer vision, machine learning, and statistical analysis, and several saliency detection methods based on low-rank matrix recovery have emerged.

Yan et al. proposed a saliency detection method based on sparse coding and low-rank matrix recovery (Yan J, Zhu M, Liu H, et al. Visual saliency detection via sparsity pursuit [J]. Signal Processing Letters, 2010, 17(8): 739-742). The method proceeds as follows. First, the image is divided into 8×8 blocks. Then, features are extracted from each block and sparsely encoded, yielding a sparse coding matrix that represents the input image. Finally, robust principal component analysis decomposes the sparse coding matrix, and the resulting sparse component defines the saliency of the corresponding blocks. The method simply assumes that salient objects are small, so that their features are sparse; it can therefore accurately detect small salient objects in simple scenes. Its remaining weakness is that it has difficulty detecting large salient objects completely and consistently.

Shen et al. proposed a salient object detection method that combines low-rank matrix recovery with prior information (Shen X, Wu Y. A unified approach to salient object detection via low rank matrix recovery [C]. IEEE Conference on Computer Vision and Pattern Recognition, 2012, 23(10): 853-860). The method first segments the input image into superpixels and extracts their features to build a feature matrix; it then learns prior knowledge and a feature transformation matrix from the MSRA database and transforms the matrix of all superpixel features; finally, low-rank matrix recovery decomposes the transformed feature matrix. The feature transformation makes the features of salient objects sparser and the background features more similar. However, since large salient objects still do not have sparse features, the method does not fundamentally solve the problem of detecting large salient objects.

Summary of the Invention

The purpose of the present invention is to address the above deficiencies of the prior art by proposing a salient object detection method based on sparse subspace clustering and low-rank representation that detects large salient objects more completely and consistently while improving precision and recall.

The technical idea of the present invention is as follows: segment the input image into superpixels and cluster them; since the features of superpixels within the same cluster are similar, assume that they lie in the same low-dimensional subspace; build a dictionary according to color contrast and, on that basis, construct a joint low-rank representation model; solve the model to perform a low-rank sparse decomposition of each cluster's feature matrix; finally, use the resulting low-rank representation coefficients to define the saliency of each cluster and map the saliency values to the input image according to the clusters' spatial positions, obtaining the saliency map of the input image.

The present invention is implemented in the following steps:

(1) Segment the input image $I$ into $N$ superpixels $\{p_i \mid i=1,2,\dots,N\}$;

(2) Cluster all superpixels to obtain $J$ clusters $\{C_j \mid j=1,2,\dots,J\}$ of the input image $I$, where each cluster $C_j$ contains $m_j$ superpixels, i.e. $C_j=\{p_{j,k} \mid k=1,2,\dots,m_j\}$;

(3) Construct the cluster feature matrices:

For the $k$-th superpixel $p_{j,k}$ contained in each cluster $C_j=\{p_{j,k} \mid k=1,2,\dots,m_j\}$, extract the color, edge, and texture features of every pixel in the superpixel to build a per-pixel feature vector of dimension $M=53$, and take the mean vector $x_{j,k}$ of all pixel feature vectors in $p_{j,k}$ as the feature of that superpixel. The feature matrix of cluster $C_j$ is then $X_j=[x_{j,1},x_{j,2},\dots,x_{j,k},\dots,x_{j,m_j}]$, $k=1,2,\dots,m_j$;

(4) Compute the color contrast of all superpixels and sort the superpixel features by decreasing color contrast to obtain the dictionary $D$ of the low-rank representation algorithm;

(5) Construct the joint low-rank representation model from the feature matrices $X_j$ and the dictionary $D$:

$$\min_{Z_j,\,E_j}\ \sum_{j=1}^{J}\|Z_j\|_* + \lambda\|E\|_{2,1} \qquad \text{s.t.}\ X_j = DZ_j + E_j,\quad j=1,2,\dots,J,$$

where $Z_j$ is the low-rank representation coefficient matrix, $E_j$ is the reconstruction error, $\lambda$ is a constant factor that trades off the low-rank component against the sparse component, $\|\cdot\|_*$ is the matrix nuclear norm, i.e. the sum of all singular values of the matrix, $\|E\|_{2,1}$ is the $\ell_{2,1}$ norm of the reconstruction error matrix $E=[E_1,E_2,\dots,E_J]$, and $E(u,v)$ denotes the element of $E$ in row $u$ and column $v$;

Solve the above joint low-rank representation model to perform a low-rank sparse decomposition of the feature matrix $X_j$ of each cluster $C_j$, obtaining the set of optimal low-rank representation coefficients $\{Z_j^* \mid j=1,2,\dots,J\}$;

(6) Use the low-rank representation coefficients $Z_j^*$ corresponding to cluster $C_j$ to compute the cluster's saliency factor $L(C_j)$:

$$L(C_j)=\|Z_j^{FG}\|_{1,1}-\|Z_j^{BG}\|_{1,1},$$

where $Z_j^{FG}$ is the first $m$ rows of the coefficient matrix $Z_j^*$, $Z_j^{BG}$ is its last $m$ rows, and $\|\cdot\|_{1,1}$ denotes the entrywise matrix norm $\|A\|_{1,1}=\sum_u\sum_v|A(u,v)|$, where $|A(u,v)|$ is the absolute value of the element of $A$ in row $u$ and column $v$;

(7) Map the saliency factor $L(C_j)$ of each cluster $C_j$ to the input image $I$ according to its spatial position to obtain the saliency map of the input image $I$.

Compared with the prior art, the present invention has the following advantages:

First, it solves the problem that traditional methods have difficulty detecting large salient objects.

The present invention segments the input image into superpixels, clusters the segmented superpixels, applies a low-rank representation algorithm to perform a low-rank sparse decomposition of each cluster's feature matrix, and uses the low-rank representation coefficients to define cluster saliency. This overcomes the difficulty that traditional saliency detection methods based on low-rank matrix recovery have in detecting large salient objects completely and consistently.

Second, it improves the robustness of saliency detection in images with complex backgrounds.

Traditional saliency detection methods based on low-rank matrix recovery simply assume that the background features belong to the same low-dimensional subspace and perform a low-rank sparse decomposition of the features of the whole image. Because a complex background contains different textured regions, the background features are not low-rank in that case, so the assumption of the traditional methods no longer holds. Since the features of superpixels within the same cluster are similar, the present invention assumes that they belong to the same low-dimensional subspace and applies a low-rank representation algorithm to the cluster features. Compared with traditional saliency detection methods based on low-rank matrix recovery, the invention therefore improves the robustness of saliency detection in images with complex backgrounds.

Brief Description of the Drawings

Fig. 1 is the implementation flowchart of the present invention;

Fig. 2 shows simulation results of the present invention on images containing large salient objects;

Fig. 3 shows simulation results of the present invention on salient object detection in images with complex backgrounds;

Fig. 4 shows the objective evaluation of the salient object detection results of the present invention.

Detailed Description

The embodiments and effects of the present invention are described in further detail below with reference to the accompanying drawings.

Referring to Fig. 1, the present invention is implemented in the following steps:

Step 1: input an image and perform superpixel segmentation.

(1a) Select an image from the MSRA-1000 database as the input image $I$;

(1b) Segment the input image $I$ into $N$ superpixels $\{p_i \mid i=1,2,\dots,N\}$. Existing superpixel segmentation algorithms include Superpixel Lattice, Normalized Cuts, Turbopixels, and SLIC. Among them, SLIC has clear advantages in superpixel shape, boundary adherence, and running speed, so the present invention uses the SLIC superpixel segmentation algorithm to segment the input image.
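The segmentation step can be illustrated with a minimal sketch. SLIC is essentially a localized k-means in a joint (color, position) space; the toy version below runs plain k-means on that joint space and omits SLIC's restricted search windows, CIELab conversion, and connectivity enforcement, so it is an illustration of the idea rather than the algorithm the patent uses (in practice one would call an existing SLIC implementation):

```python
import numpy as np

# Illustrative sketch only: plain k-means in a joint (color, position) space.
# Real SLIC additionally restricts each center's search window, works in
# CIELab, and enforces connectivity of the resulting regions.
def slic_like(image, n_segments=16, compactness=0.5, n_iter=10, seed=0):
    """Return an (H, W) integer label map assigning each pixel to a superpixel."""
    h, w, c = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pos = np.column_stack([ys.ravel() / h, xs.ravel() / w])
    feats = np.hstack([image.reshape(-1, c), compactness * pos])
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_segments, replace=False)]
    for _ in range(n_iter):
        d2 = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for k in range(n_segments):
            mask = labels == k
            if mask.any():
                centers[k] = feats[mask].mean(0)
    return labels.reshape(h, w)

img = np.random.default_rng(1).random((32, 32, 3))
labels = slic_like(img, n_segments=16)
```

The `compactness` weight plays the same role as SLIC's compactness parameter: larger values make the spatial term dominate and yield more regular, grid-like superpixels.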

Step 2: cluster the superpixels.

The Laplacian sparse subspace clustering algorithm adds to sparse subspace clustering a Laplacian term that enforces consistency of the data features, which gives it superior performance. The present invention therefore uses Laplacian sparse subspace clustering to cluster all superpixels, obtaining $J$ clusters $\{C_j \mid j=1,2,\dots,J\}$ of the input image $I$, where each cluster $C_j$ contains $m_j$ superpixels, i.e. $C_j=\{p_{j,k} \mid k=1,2,\dots,m_j\}$.
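The grouping step can be sketched as follows. This is a stand-in, not the patent's algorithm: Laplacian sparse subspace clustering derives its affinity from a sparse self-representation of the data, whereas the sketch below substitutes a simple Gaussian affinity and then performs the same spectral embedding plus k-means that subspace clustering methods end with:

```python
import numpy as np

# Stand-in sketch: Gaussian-affinity spectral clustering. The patent's
# Laplacian sparse subspace clustering instead builds the affinity from a
# sparse self-representation; only the spectral grouping step is shown here.
def spectral_cluster(X, n_clusters, sigma=1.0):
    """X: (N, M) superpixel features. Returns an (N,) array of cluster ids."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(1), 1e-12))
    L_sym = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L_sym)                   # ascending eigenvalues
    U = vecs[:, :n_clusters]
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    # Deterministic farthest-point initialization, then a few k-means sweeps.
    idx = [0]
    for _ in range(n_clusters - 1):
        d = ((U[:, None, :] - U[idx][None, :, :]) ** 2).sum(-1).min(1)
        idx.append(int(d.argmax()))
    centers = U[idx].copy()
    for _ in range(20):
        lab = ((U[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for k in range(n_clusters):
            if (lab == k).any():
                centers[k] = U[lab == k].mean(0)
    return lab
```

Replacing the Gaussian affinity `W` with the symmetrized magnitude of a sparse self-representation matrix would turn this sketch into a basic sparse subspace clustering.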

Step 3: construct the cluster feature matrices.

(3a) Extract the superpixel features within each cluster:

(3a1) For the $k$-th superpixel $p_{j,k}$ contained in each cluster $C_j=\{p_{j,k} \mid k=1,2,\dots,m_j\}$, extract the color, edge, and texture features of each of its pixels. The color feature of each pixel has dimension 5; the edge feature, extracted with a pyramid of filters at 3 scales and 4 orientations, has dimension 12; and the texture feature, extracted with Gabor filters at 3 scales and 12 orientations, has dimension 36. Combining the color, edge, and texture features gives a per-pixel feature vector of dimension $M=53$;

(3a2) From the 53-dimensional feature vectors of the pixels of superpixel $p_{j,k}$ extracted in (3a1), take the mean of the feature vectors of all pixels contained in $p_{j,k}$ to obtain the superpixel feature $x_{j,k}$, $k=1,2,\dots,m_j$;

(3b) Construct the feature matrix:

From the features of the $m_j$ superpixels of cluster $C_j$ extracted in (3a), construct the feature matrix of cluster $C_j$ as $X_j=[x_{j,1},x_{j,2},\dots,x_{j,k},\dots,x_{j,m_j}]$.
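Step 3 above can be sketched as follows. The 53-dimensional per-pixel color/edge/texture features are assumed to be precomputed (random stand-ins would do for testing); only the pooling that turns them into superpixel descriptors $x_{j,k}$ and per-cluster matrices $X_j$ is shown:

```python
import numpy as np

# Sketch of step 3: mean-pool per-pixel features into superpixel descriptors
# x_{j,k}, then stack each cluster's descriptors as columns of X_j.
# The per-pixel features themselves are assumed given.
def cluster_feature_matrices(pixel_feats, sp_labels, clusters):
    """pixel_feats: (H, W, M) per-pixel features; sp_labels: (H, W) superpixel
    ids; clusters: dict cluster_id -> list of superpixel ids.
    Returns dict cluster_id -> (M, m_j) feature matrix X_j."""
    M = pixel_feats.shape[-1]
    flat = pixel_feats.reshape(-1, M)
    ids = sp_labels.ravel()
    sp_mean = {p: flat[ids == p].mean(axis=0) for p in np.unique(ids)}
    return {j: np.column_stack([sp_mean[p] for p in sps])
            for j, sps in clusters.items()}
```

Each $X_j$ has one column per superpixel of the cluster, matching the $M \times m_j$ layout used by the low-rank representation model in step 5.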

Step 4: construct the color-contrast-based dictionary.

(4a) Compute the color contrast of each superpixel:

$$U_i=\sum_{j=1}^{N}\|c_i-c_j\|^2\cdot w_{i,j}(l),$$

where $c_i$ is the mean of the CIELab color features of all pixels in superpixel $p_i$ (among the commonly used color space models RGB, HSI, and CIELab, the color features used by the present invention are taken from the CIELab space), $I_m$ denotes any pixel in $p_i$, $|p_i|$ is the total number of pixels in $p_i$, $c_j$ is the mean of the color features of all pixels in superpixel $p_j$, $I_n$ denotes any pixel in $p_j$, $|p_j|$ is the total number of pixels in $p_j$, and $\|c_i-c_j\|^2$ is the square of the Euclidean distance between $c_i$ and $c_j$. The spatial weight is

$$w_{i,j}(l)=\frac{1}{Z_i}\exp\!\left(-\frac{1}{2\sigma_p^2}\|l_i-l_j\|^2\right),\qquad l_i=\frac{\sum_{I_m\in p_i} l_{I_m}}{|p_i|},$$

where $l_i$ is the mean of the position coordinates of all pixels in $p_i$, $l_{I_m}$ is the position coordinate of pixel $I_m$ in $p_i$, $l_j$ is the mean of the position coordinates of all pixels in $p_j$, $\|l_i-l_j\|^2$ is the square of the Euclidean distance between $l_i$ and $l_j$, $Z_i$ is the normalization parameter of the weights $w_{i,j}(l)$, and $\sigma_p$ is a local-global trade-off factor;

(4b) Sort the superpixel features by decreasing color contrast to obtain the dictionary of the low-rank representation algorithm $D=[x_{s_1},x_{s_2},\dots,x_{s_i},\dots,x_{s_N}]$, where $x_{s_i}$ is the feature vector of superpixel $p_{s_i}$, $1\le s_i\le N$, and $U_{s_1}\ge U_{s_2}\ge\dots\ge U_{s_i}\ge\dots\ge U_{s_N}$.
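Step 4 can be sketched as below. It assumes per-superpixel mean colors, mean positions, and features have already been computed (step 3); the value of `sigma_p` is an illustrative assumption, as the patent does not fix it. The weights are normalized so that each row of $w$ sums to one, which is one natural reading of the normalization parameter $Z_i$:

```python
import numpy as np

# Sketch of step 4: color contrast U_i weights squared color distances by a
# Gaussian of the spatial distance, and the dictionary D stacks superpixel
# features in order of decreasing contrast. sigma_p is an assumed value.
def contrast_dictionary(colors, positions, feats, sigma_p=0.25):
    """colors: (N, 3) mean colors; positions: (N, 2) mean coordinates
    (normalized to [0, 1]); feats: (N, M) superpixel features.
    Returns the (M, N) dictionary D and the contrast vector U."""
    cd2 = ((colors[:, None, :] - colors[None, :, :]) ** 2).sum(-1)
    pd2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    w = np.exp(-pd2 / (2.0 * sigma_p ** 2))
    w /= w.sum(axis=1, keepdims=True)      # Z_i: each row of weights sums to 1
    U = (cd2 * w).sum(axis=1)
    order = np.argsort(-U)                 # decreasing contrast
    return feats[order].T, U
```

A superpixel whose color differs strongly from its (spatially weighted) neighbors receives a large $U_i$ and is placed among the first columns of $D$.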

Step 5: construct the joint low-rank representation model from the feature matrices $X_j$ and the dictionary $D$.

Since the features of superpixels within the same cluster are similar, $Z_j$ is low-rank and the reconstruction error $E_j$ is sparse, and the nuclear norm is the tightest convex relaxation of matrix rank. Based on these principles, a joint low-rank representation model is constructed from the feature matrices $X_j$ and the dictionary $D$: the low-rank representation coefficients $\{Z_j \mid j=1,2,\dots,J\}$ of the clusters $\{C_j \mid j=1,2,\dots,J\}$ are jointly constrained to be low-rank, and the reconstruction error matrix $E$ is constrained by the $\ell_{2,1}$ norm:

$$\min_{Z_j,\,E_j}\ \sum_{j=1}^{J}\|Z_j\|_* + \lambda\|E\|_{2,1} \qquad \text{s.t.}\ X_j = DZ_j + E_j,\quad j=1,2,\dots,J,$$

where $Z_j$ is the low-rank representation coefficient matrix of the feature matrix $X_j$ under the dictionary $D$, $E_j$ is the reconstruction error of cluster $C_j$, $\lambda$ is a constant factor that trades off the low-rank component against the sparse component, $D\in\mathbb{R}^{M\times N}$ is the dictionary built from color contrast, $\|\cdot\|_*$ is the matrix nuclear norm, i.e. the sum of all singular values of the matrix, $\|E\|_{2,1}$ is the $\ell_{2,1}$ norm of the reconstruction error matrix $E=[E_1,E_2,\dots,E_J]\in\mathbb{R}^{M\times\sum_{j=1}^{J}m_j}$, i.e. $\|E\|_{2,1}=\sum_v\sqrt{\sum_u \left(E(u,v)\right)^2}$, and $E(u,v)$ denotes the element of $E$ in row $u$ and column $v$.

Solving the above joint low-rank representation model performs a low-rank sparse decomposition of the feature matrix $X_j$ of each cluster $C_j$ and yields the set of optimal low-rank representation coefficients $\{Z_j^* \mid j=1,2,\dots,J\}$.
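The patent does not spell out a solver for the joint model; problems of this form are commonly handled with inexact-ALM/ADMM iterations, which alternate two closed-form proximal steps. Those two building blocks are sketched below, assuming that reading of the solver: singular-value thresholding for the nuclear-norm term and column-wise shrinkage for the $\ell_{2,1}$ term:

```python
import numpy as np

# Building blocks of a typical inexact-ALM/ADMM solver for the joint model
# (the patent leaves the solver unspecified; this is an assumed sketch).
def svt(A, tau):
    """Prox of tau * ||.||_*: soft-threshold the singular values of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def l21_shrink(E, tau):
    """Prox of tau * ||.||_{2,1}: shrink each column's l2 norm by tau,
    zeroing columns whose norm falls below tau."""
    norms = np.linalg.norm(E, axis=0)
    scale = np.maximum(norms - tau, 0.0) / np.maximum(norms, 1e-12)
    return E * scale
```

In such a scheme, each $Z_j$-update applies `svt` to a residual and the $E$-update applies `l21_shrink`, with the penalty parameter of the augmented Lagrangian setting `tau`.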

Step 6: use the low-rank representation coefficients obtained from the low-rank sparse decomposition of $X_j$ to compute the saliency factor $L(C_j)$ of cluster $C_j$.

Suppose the features of all superpixels in the image together form the dictionary $D=[x_{s_1},x_{s_2},\dots,x_{s_N}]$, each column of which corresponds to one superpixel feature. Then any superpixel feature in the dictionary can be expressed as a linear combination of all superpixel features in $D$:

$$x_{s_j}=[x_{s_1},x_{s_2},\dots,x_{s_i},\dots,x_{s_N}]\begin{bmatrix} z_j^1\\ z_j^2\\ \vdots\\ z_j^i\\ \vdots\\ z_j^N \end{bmatrix},$$

where $z_j^i$ is the representation coefficient of feature $x_{s_j}$ with respect to feature $x_{s_i}$. A larger $z_j^i$ indicates that $x_{s_j}$ and $x_{s_i}$ have similar mappings in the feature space spanned by the column vectors of the dictionary $D$. Therefore $z_j^i$ measures the similarity between superpixels $p_{s_j}$ and $p_{s_i}$; that is, the representation coefficients of a superpixel feature reveal how similar that feature is to the corresponding features in the dictionary.

In summary, since the dictionary $D$ is obtained by sorting the superpixel features by decreasing color contrast, denote the first $m$ rows of the coefficient matrix $Z_j^*$ by $Z_j^{FG}$ and its last $m$ rows by $Z_j^{BG}$. Then $\|Z_j^{FG}\|_{1,1}$ reflects how similar the cluster's features are to the high-contrast superpixel features, and $\|Z_j^{BG}\|_{1,1}$ reflects how similar they are to the low-contrast superpixel features, so $L(C_j)$ can be used to measure the saliency of cluster $C_j$:

$$L(C_j)=\|Z_j^{FG}\|_{1,1}-\|Z_j^{BG}\|_{1,1},$$

where $\|\cdot\|_{1,1}$ denotes the entrywise matrix norm $\|A\|_{1,1}=\sum_u\sum_v|A(u,v)|$, and $|A(u,v)|$ is the absolute value of the element of $A$ in row $u$ and column $v$.
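The saliency factor above reduces to a few lines of array code, assuming the optimal coefficients of one cluster and the band size $m$ are given:

```python
import numpy as np

# Sketch of step 6: compare the entrywise l_{1,1} mass of the coefficients on
# the m highest-contrast dictionary atoms (first m rows) against the mass on
# the m lowest-contrast atoms (last m rows).
def saliency_factor(Z_opt, m):
    """Z_opt: (N, m_j) optimal coefficients of one cluster; m: band size."""
    z_fg = np.abs(Z_opt[:m]).sum()     # ||Z_j^FG||_{1,1}
    z_bg = np.abs(Z_opt[-m:]).sum()    # ||Z_j^BG||_{1,1}
    return z_fg - z_bg
```

A cluster whose coefficients concentrate on the high-contrast atoms gets a large positive factor; one that is better explained by low-contrast (background-like) atoms gets a negative factor.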

Step 7: obtain the saliency map of the input image.

Map the saliency factor $L(C_j)$ of each cluster $C_j$ to the input image according to its spatial position to obtain the saliency map of the input image $I$.
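The mapping step can be sketched as follows; the rescaling of the map to $[0,1]$ for display is an illustrative convention, not something the patent mandates:

```python
import numpy as np

# Sketch of step 7: paint each pixel with the saliency factor of the cluster
# its superpixel belongs to, then rescale to [0, 1] (assumed display convention).
def saliency_map(sp_labels, clusters, factors):
    """sp_labels: (H, W) superpixel ids; clusters: dict cluster_id -> list of
    superpixel ids; factors: dict cluster_id -> L(C_j)."""
    sal = np.zeros(sp_labels.shape, dtype=float)
    for j, sps in clusters.items():
        sal[np.isin(sp_labels, sps)] = factors[j]
    lo, hi = sal.min(), sal.max()
    return (sal - lo) / (hi - lo) if hi > lo else sal
```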

The effects of the present invention can be further illustrated by the following simulations.

1. Simulation methods and conditions

The simulations use the method of the present invention and the following ten existing methods:

SP, a saliency detection method based on sparse coding and robust principal component analysis,

UA, a saliency detection method based on feature transformation and robust principal component analysis,

RC, a saliency detection method based on regional contrast,

HC, a saliency detection method based on histogram contrast,

FT, a saliency detection method based on frequency tuning,

GBVS, a saliency detection method based on graph theory,

ITTI, a saliency detection method based on local contrast,

DVA, a saliency detection method based on incremental coding length,

AIM, a saliency detection method based on self-information,

SIG, a salient region detection method based on image signatures.

The experimental data come from the MSRA-1000 database, and all simulation experiments are implemented with Matlab 2010 software under the Windows 7 operating system.

Simulation 1

A set of images containing large salient objects, in each of which the object occupies more than 25% of the whole image, is selected from the MSRA-1000 database. The present invention and the existing SP and UA methods are applied to this set of input images for saliency detection; the results are shown in Fig. 2, where:

Column (a) of Fig. 2 shows the set of images to be detected,

Column (b) of Fig. 2 shows the saliency detection results of the SP method,

Column (c) of Fig. 2 shows the saliency detection results of the UA method,

Column (d) of Fig. 2 shows the saliency detection results of the method of the present invention,

Column (e) of Fig. 2 shows the ground-truth images.

As column (b) of Fig. 2 shows, the saliency maps produced by the SP method assign large saliency values at the edges of salient objects; the method cannot detect large salient objects completely and consistently. The saliency maps of the UA method, shown in column (c) of Fig. 2, locate large salient objects fairly accurately, but the method struggles to assign consistent saliency values within the object region. As shown in column (d) of Fig. 2, the salient object detection method proposed by the present invention not only detects the salient objects accurately but also assigns consistent saliency values within the object region.

Simulation 2

A set of images with complex backgrounds is selected from the MSRA-1000 database, and the present invention and the existing SP and UA methods are applied to these input images for saliency detection; the results are shown in Fig. 3, where:

Column (a) of Fig. 3 shows the set of images to be detected,

Column (b) of Fig. 3 shows the saliency detection results of the SP method,

Column (c) of Fig. 3 shows the saliency detection results of the UA method,

Column (d) of Fig. 3 shows the saliency detection results of the method of the present invention,

Column (e) of Fig. 3 shows the ground-truth images.

As column (b) of Fig. 3 shows, the saliency maps of the SP method assign large saliency values to the complex background and cannot accurately detect the salient objects. The saliency maps of the UA method, shown in column (c) of Fig. 3, detect salient objects in complex backgrounds fairly accurately but have difficulty suppressing the background noise. As shown in column (d) of Fig. 3, the method proposed by the present invention not only detects the salient objects completely and consistently but also effectively suppresses the noise in complex backgrounds.

Simulation 3 compares the present invention with the existing UA, RC, HC, FT, GBVS, ITTI, DVA, AIM, and SIG saliency detection methods on the 1000 images of the MSRA-1000 database, using four common evaluation criteria for salient object detection: the precision-recall curve, the F-measure curve, the mean absolute error, and the average precision, recall, and F-measure under an adaptive threshold. The results are shown in Fig. 4, where:

Fig. 4(a) shows the precision-recall curves,

Fig. 4(b) shows the F-measure curves,

Fig. 4(c) shows the histograms of average precision, recall, and F-measure under the adaptive threshold,

Fig. 4(d) shows the mean absolute errors.

Fig. 4(a) shows that the method of the present invention matches the precision-recall performance of mainstream algorithms. Fig. 4(b) shows that it achieves the largest F-measure over a wide range of thresholds. Fig. 4(c) shows that it obtains the highest average precision, recall, and F-measure. Fig. 4(d) shows that its results differ least from the ground-truth images, i.e. they are closest to the ideal salient object detection results.

Claims (2)

wherein $c_i$ represents the mean of the CIELab color features of all pixels in superpixel $p_i$, $I_m$ is any pixel in $p_i$, $|p_i|$ is the total number of pixels in $p_i$, $c_j$ represents the mean of the color features of all pixels in superpixel $p_j$, $I_n$ is any pixel in $p_j$, $|p_j|$ is the total number of pixels in $p_j$, and $\|c_i-c_j\|^2$ is the square of the Euclidean distance between $c_i$ and $c_j$; $w_{i,j}(l)=\frac{1}{Z_i}\exp\!\left(-\frac{1}{2\sigma_p^2}\|l_i-l_j\|^2\right)$, where $l_i=\frac{\sum_{I_m\in p_i} l_{I_m}}{|p_i|}$ represents the mean of the position coordinates of all pixels in $p_i$, $l_{I_m}$ is the position coordinate of pixel $I_m$ in $p_i$, $l_j$ represents the mean of the position coordinates of all pixels in $p_j$, $\|l_i-l_j\|^2$ is the square of the Euclidean distance between $l_i$ and $l_j$, $Z_i$ is the normalization parameter, and $\sigma_p$ is a local-global trade-off factor;
CN201510951934.8A (filed 2015-12-17, priority 2015-12-17): Conspicuousness object detection method based on sparse subspace clustering and low-rank representation. Status: Active. Granted as CN105574534B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201510951934.8A | 2015-12-17 | 2015-12-17 | Conspicuousness object detection method based on sparse subspace clustering and low-rank representation (granted as CN105574534B (en))


Publications (2)

Publication Number | Publication Date
CN105574534A | 2016-05-11
CN105574534B (en) | 2019-03-26

Family

ID=55884641

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201510951934.8A (Active, granted as CN105574534B (en)) | Conspicuousness object detection method based on sparse subspace clustering and low-rank representation | 2015-12-17 | 2015-12-17

Country Status (1)

Country | Link
CN (1) | CN105574534B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104392231A (en)* | 2014-11-07 | 2015-03-04 | Nanjing University of Aeronautics and Astronautics | Block and sparse principal feature extraction-based rapid collaborative saliency detection method
CN104392463A (en)* | 2014-12-16 | 2015-03-04 | Xidian University | Image salient region detection method based on joint sparse multi-scale fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU Tiantian: "Salient Object Detection Based on Sparse and Low-Rank Representation", Electronic Science and Technology *
ZHANG Yanbang et al.: "Object Detection Algorithm Based on Color Contrast and Local Information Entropy", Neijiang Science and Technology *
LI Xiaoping et al.: "Improved Sparse Subspace Clustering Method for Image Segmentation", Systems Engineering and Electronics *

Cited By (60)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106056070A (en)* | 2016-05-26 | 2016-10-26 | Chongqing University | SAR target identification method based on low-rank matrix recovery and sparse representation
CN106056070B (en)* | 2016-05-26 | 2019-05-10 | Chongqing University | SAR target recognition method based on low-rank matrix recovery and sparse representation
CN106407927A (en)* | 2016-09-12 | 2017-02-15 | Hohai University Changzhou Campus | Salient visual method based on polarization imaging and applicable to underwater target detection
CN106407927B (en)* | 2016-09-12 | 2019-11-05 | Hohai University Changzhou Campus | The significance visual method suitable for underwater target detection based on polarization imaging
CN109691055A (en)* | 2016-10-07 | 2019-04-26 | HRL Laboratories, LLC | System for anomaly detection on CAN bus data using sparse and low-rank decomposition of transfer entropy matrix
CN106447667A (en)* | 2016-10-31 | 2017-02-22 | Zhengzhou University of Light Industry | Visual significance detection method based on self-learning characteristics and matrix low-rank recovery
CN106529549A (en)* | 2016-10-31 | 2017-03-22 | Zhengzhou University of Light Industry | Visual saliency detection method based on adaptive features and discrete cosine transform
CN106447667B (en)* | 2016-10-31 | 2017-09-08 | Zhengzhou University of Light Industry | The vision significance detection method restored based on self study feature and matrix low-rank
CN106683074A (en)* | 2016-11-03 | 2017-05-17 | Institute of Information Engineering, Chinese Academy of Sciences | Image tampering detection method based on haze characteristic
CN106683074B (en)* | 2016-11-03 | 2019-11-05 | Institute of Information Engineering, Chinese Academy of Sciences | A kind of distorted image detection method based on haze characteristic
CN106846377A (en)* | 2017-01-09 | 2017-06-13 | 深圳市美好幸福生活安全系统有限公司 | A kind of target tracking algorism extracted based on color attribute and active features
CN107423668A (en)* | 2017-04-14 | 2017-12-01 | Shandong Jianzhu University | Eeg signal classification System and method for based on wavelet transformation and sparse expression
CN107085725B (en)* | 2017-04-21 | 2020-08-14 | Henan University of Science and Technology | Method for clustering image areas through LLC based on self-adaptive codebook
CN107085725A (en)* | 2017-04-21 | 2017-08-22 | Henan University of Science and Technology | A Method for Clustering Image Regions via Adaptive Codebook Based LLC
CN107301643A (en)* | 2017-06-06 | 2017-10-27 | Xidian University | Well-marked target detection method based on robust rarefaction representation Yu Laplce's regular terms
CN107301643B (en)* | 2017-06-06 | 2019-08-06 | Xidian University | Salient Object Detection Method Based on Robust Sparse Representation and Laplacian Regularization
CN107358590A (en)* | 2017-07-19 | 2017-11-17 | Nanjing University of Posts and Telecommunications | Three-dimensional video-frequency method for shielding error code based on super-pixel segmentation and similar group rarefaction representation
CN107977661A (en)* | 2017-10-13 | 2018-05-01 | Tianjin Polytechnic University | The region of interest area detecting method decomposed based on full convolutional neural networks and low-rank sparse
CN107977661B (en)* | 2017-10-13 | 2022-05-03 | Tianjin Polytechnic University | Region-of-interest detection method based on FCN and low-rank sparse decomposition
CN108053372A (en)* | 2017-12-01 | 2018-05-18 | Beijing Xiaomi Mobile Software Co., Ltd. | The method and apparatus for handling depth image
CN107992875A (en)* | 2017-12-25 | 2018-05-04 | Beihang University | A kind of well-marked target detection method based on image bandpass filtering
CN108109153A (en)* | 2018-01-12 | 2018-06-01 | Xidian University | SAR image segmentation method based on SAR-KAZE feature extractions
CN108109153B (en)* | 2018-01-12 | 2019-10-11 | Xidian University | SAR Image Segmentation Method Based on SAR-KAZE Feature Extraction
CN108171279A (en)* | 2018-01-28 | 2018-06-15 | Beijing University of Technology | A kind of adaptive product Grassmann manifold Subspace clustering methods of multi-angle video
CN108171279B (en)* | 2018-01-28 | 2021-11-05 | Beijing University of Technology | A Multi-view Video Adaptive Product Grassmann Manifold Subspace Clustering Method
CN108460379A (en)* | 2018-02-06 | 2018-08-28 | Xidian University | Well-marked target detection method based on refinement Space Consistency two-stage figure
CN108460379B (en)* | 2018-02-06 | 2021-05-04 | Xidian University | A Salient Object Detection Method Based on Refined Spatial Consistency Two-Stage Graph
CN108460412A (en)* | 2018-02-11 | 2018-08-28 | 北京盛安同力科技开发有限公司 | A kind of image classification method based on subspace joint sparse low-rank Structure learning
CN108460412B (en)* | 2018-02-11 | 2020-09-04 | 北京盛安同力科技开发有限公司 | Image classification method based on subspace joint sparse low-rank structure learning
CN108805136A (en)* | 2018-03-26 | 2018-11-13 | China University of Geosciences (Wuhan) | A kind of conspicuousness detection method towards waterborne contaminant monitoring
CN108491883A (en)* | 2018-03-26 | 2018-09-04 | Fuzhou University | A kind of conspicuousness inspection optimization method based on condition random field
CN108491883B (en)* | 2018-03-26 | 2022-03-22 | Fuzhou University | Saliency detection optimization method based on conditional random field
CN108734174A (en)* | 2018-04-21 | 2018-11-02 | Dalian Maritime University | A kind of complex background image conspicuousness detection method based on low-rank representation
CN109101978A (en)* | 2018-07-06 | 2018-12-28 | China University of Geosciences (Wuhan) | Conspicuousness object detection method and system based on weighting low-rank matrix Restoration model
CN109242854A (en)* | 2018-07-14 | 2019-01-18 | Northwestern Polytechnical University | A kind of image significance detection method based on FLIC super-pixel segmentation
CN109086775A (en)* | 2018-07-19 | 2018-12-25 | Nanjing University of Information Science and Technology | A kind of collaboration conspicuousness detection method of quick manifold ranking and low-rank constraint
CN109086775B (en)* | 2018-07-19 | 2020-10-27 | Nanjing University of Information Science and Technology | Rapid manifold ordering and low-rank constraint cooperative significance detection method
CN109697197B (en)* | 2018-12-25 | 2023-05-02 | 四川效率源信息安全技术股份有限公司 | Method for engraving and restoring Access database file
CN109697197A (en)* | 2018-12-25 | 2019-04-30 | 四川效率源信息安全技术股份有限公司 | A method of carving multiple Access database file
CN110097530A (en)* | 2019-04-19 | 2019-08-06 | Xidian University | Based on multi-focus image fusing method super-pixel cluster and combine low-rank representation
CN110097530B (en)* | 2019-04-19 | 2023-03-24 | Xidian University | Multi-focus image fusion method based on super-pixel clustering and combined low-rank representation
CN110321787A (en)* | 2019-05-13 | 2019-10-11 | Zhongkai University of Agriculture and Engineering | Disease identification method, system and storage medium based on joint sparse representation
CN110490206A (en)* | 2019-08-20 | 2019-11-22 | Jiangsu Vocational Institute of Architectural Technology | A kind of quick conspicuousness algorithm of target detection based on low-rank matrix dualistic analysis
CN110490206B (en)* | 2019-08-20 | 2023-12-26 | Jiangsu Vocational Institute of Architectural Technology | Rapid saliency target detection algorithm based on low-rank matrix binary decomposition
CN110675412A (en)* | 2019-09-27 | 2020-01-10 | Tencent Technology (Shenzhen) Co., Ltd. | Image segmentation method, training method, device and equipment of image segmentation model
CN110675412B (en)* | 2019-09-27 | 2023-08-01 | Tencent Technology (Shenzhen) Co., Ltd. | Image segmentation method, training method, device and equipment of image segmentation model
CN114220058A (en)* | 2021-12-17 | 2022-03-22 | Guilin University of Electronic Technology | Significance detection method based on MCP sparse representation dictionary learning model
CN114220058B (en)* | 2021-12-17 | 2024-10-22 | Guilin University of Electronic Technology | Significance detection method based on MCP sparse representation dictionary learning model
CN114240788A (en)* | 2021-12-21 | 2022-03-25 | Southwest Petroleum University | Robustness and self-adaptability background restoration method for complex scene
CN114240788B (en)* | 2021-12-21 | 2023-09-08 | Southwest Petroleum University | A robust and adaptive background restoration method for complex scenes
CN115272705B (en)* | 2022-07-29 | 2023-08-29 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Salient object detection model training method, device and equipment
CN115272705A (en)* | 2022-07-29 | 2022-11-01 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Training method, device and device for salient object detection model
CN115690418A (en)* | 2022-10-31 | 2023-02-03 | Wuhan University | Unsupervised image waypoint automatic detection method
CN115690418B (en)* | 2022-10-31 | 2024-03-12 | Wuhan University | Unsupervised automatic detection method for image waypoints
CN115984137A (en)* | 2023-01-04 | 2023-04-18 | Shanghai Artificial Intelligence Innovation Center | Method, system, device and storage medium for image restoration in dark light
CN115984137B (en)* | 2023-01-04 | 2024-05-14 | Shanghai Artificial Intelligence Innovation Center | A dark light image restoration method, system, device and storage medium
CN117636162A (en)* | 2023-11-21 | 2024-03-01 | China University of Geosciences (Wuhan) | A sparse unmixing method, device, equipment and storage medium for hyperspectral images
CN117576493A (en)* | 2024-01-16 | 2024-02-20 | 武汉明炀大数据科技有限公司 | Cloud storage compression method and system for large sample data
CN117576493B (en)* | 2024-01-16 | 2024-04-02 | 武汉明炀大数据科技有限公司 | Cloud storage compression method and system for large sample data
CN119672061A (en)* | 2024-11-29 | 2025-03-21 | Northwestern Polytechnical University | A moving target detection method based on sparse decomposition and block multi-channel background subtraction

Also Published As

Publication number | Publication date
CN105574534B (en) | 2019-03-26

Similar Documents

Publication | Title
CN105574534A (en) | Significant object detection method based on sparse subspace clustering and low-order expression
CN106203430B (en) | A kind of conspicuousness object detecting method based on foreground focused degree and background priori
CN106682598B (en) | Multi-pose face feature point detection method based on cascade regression
Yuan et al. | Factorization-based texture segmentation
CN110334762B (en) | Feature matching method based on quad tree combined with ORB and SIFT
CN107358260B (en) | A Multispectral Image Classification Method Based on Surface Wave CNN
CN105528595A (en) | Method for identifying and positioning power transmission line insulators in unmanned aerial vehicle aerial images
CN107977661B (en) | Region-of-interest detection method based on FCN and low-rank sparse decomposition
CN112200121B (en) | A hyperspectral unknown target detection method based on EVM and deep learning
CN106650744B (en) | Image object co-segmentation method guided by local shape transfer
CN107194937A (en) | Tongue image partition method under a kind of open environment
CN104778457A (en) | Video face identification algorithm on basis of multi-instance learning
CN107784288A (en) | A kind of iteration positioning formula method for detecting human face based on deep neural network
CN105184772A (en) | A Superpixel-Based Adaptive Color Image Segmentation Method
CN107977660A (en) | Region of interest area detecting method based on background priori and foreground node
CN106886754B (en) | Object identification method and system under a kind of three-dimensional scenic based on tri patch
CN108932518A (en) | A kind of feature extraction of shoes watermark image and search method of view-based access control model bag of words
CN105931241A (en) | Automatic marking method for natural scene image
Dong et al. | Feature extraction through contourlet subband clustering for texture classification
CN106250811A (en) | Unconfinement face identification method based on HOG feature rarefaction representation
CN104361096A (en) | Image retrieval method based on characteristic enrichment area set
CN113011506A (en) | Texture image classification method based on depth re-fractal spectrum network
CN104463091B (en) | A kind of facial image recognition method based on image LGBP feature subvectors
CN104778683B (en) | A Multimodal Image Segmentation Method Based on Functional Mapping
CN112668662B (en) | Target detection method in wild mountain forest environment based on improved YOLOv3 network

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
GR01 | Patent grant
