CN108491883A - Saliency detection optimization method based on conditional random field - Google Patents

Saliency detection optimization method based on conditional random field

Info

Publication number
CN108491883A
CN108491883A
Authority
CN
China
Prior art keywords
image
cluster
saliency
random field
input image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810256988.6A
Other languages
Chinese (zh)
Other versions
CN108491883B (en)
Inventor
牛玉贞
林文奇
柯逍
陈俊豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University
Priority to CN201810256988.6A
Publication of CN108491883A
Application granted
Publication of CN108491883B
Expired - Fee Related
Anticipated expiration

Abstract

Translated from Chinese

The present invention relates to a saliency detection optimization method based on a conditional random field, comprising the following steps. Step S1: extract the global deep convolutional feature of each image in an input image set. Step S2: compute the similarity between every pair of images in the input image set from the global deep convolutional features. Step S3: perform K-means clustering on the input image set according to the inter-image similarity, forming k mutually independent image clusters. Step S4: use a grid search to compute the optimal fully connected conditional random field parameters for each image cluster. Step S5: for a new input image, determine the image cluster it belongs to and optimize the saliency map of the new input image with the optimal fully connected conditional random field parameters of that cluster. The method is applicable to the optimization of a variety of saliency detection algorithms and achieves good optimization results.

Description

Translated from Chinese
Saliency detection optimization method based on conditional random field

Technical Field

The present invention relates to the fields of image and video processing and computer vision, and in particular to a saliency detection optimization method based on a conditional random field.

Background Art

Image saliency detection algorithms extract the regions of an image that most attract human attention; in practical applications this information is more important than that of the other regions. Saliency detection algorithms are widely used in practical applications including image segmentation, image classification, and image retrieval. Consequently, many researchers study image saliency detection and continue to propose new algorithms. Margolin et al. proposed a saliency detection algorithm based on Principal Component Analysis (PCA), which computes pattern distinctness to obtain the saliency map. Wei et al. proposed a saliency detection algorithm based on geodesic saliency (GS), which focuses on the background prior of the image. Kim et al. proposed a saliency detection algorithm based on the High-Dimensional Color Transform (HDCT), which maps the low-dimensional RGB color image into a high-dimensional color feature space and searches that space for an optimal linear combination of color coefficients to define the salient regions. However, compared with manually annotated ground truth, existing image saliency detection algorithms all have certain shortcomings: the PCA algorithm concentrates on extracting the edges of salient objects and can hardly detect their interiors, the GS algorithm extracts only part of each salient object, and the HDCT algorithm also detects part of the background as salient.

Summary of the Invention

The purpose of the present invention is to provide a saliency detection optimization method based on a conditional random field that is applicable to the optimization of a variety of saliency detection algorithms and achieves good optimization results.

To achieve the above purpose, the technical solution adopted by the present invention is a saliency detection optimization method based on a conditional random field, comprising the following steps:

Step S1: extract the global deep convolutional feature of each image in the input image set;

Step S2: compute the similarity between every pair of images in the input image set from the global deep convolutional features;

Step S3: perform K-means clustering on the input image set according to the similarity between images, forming k mutually independent image clusters;

Step S4: use a grid search to compute the optimal fully connected conditional random field parameters for each image cluster;

Step S5: for a new input image, determine the image cluster it belongs to, and optimize the saliency map of the new input image with the optimal fully connected conditional random field parameters of that cluster.

Further, in step S1, extracting the global deep convolutional feature of an image comprises the following steps:

Step S11: extract the local deep convolutional feature of the image, produced by the last convolutional layer of an image recognition deep network; for an input image I, output a d-dimensional t×t local deep convolutional feature f;

Step S12: compute the aggregation weight of the local deep convolutional features as follows:

α(x,y) = exp(−((x − W/2)² + (y − H/2)²) / (2σ²))

where α(x,y) denotes the aggregation weight of pixel (x,y) in image I, W and H denote the width and height of the image, and σ is taken as 1/3 of the distance between the image center and the nearest boundary;

Step S13: aggregate the local deep convolutional features by weighted summation to obtain the preliminary global feature:

ψ = Σx=1..W Σy=1..H α(x,y)·f(x,y)

where ψ denotes the preliminary global feature of image I after aggregation, with dimension d; W and H denote the width and height of the image, respectively; and f(x,y) denotes the local deep convolutional feature of pixel (x,y) in image I;

Step S14: apply L2 normalization to ψ:

ψ' = ψ / ‖ψ‖2

where ψ' denotes the normalized global feature of image I, with dimension d;

Step S15: apply PCA dimensionality reduction and whitening to ψ' to obtain the n-dimensional global deep convolutional feature ψ'', n < d:

ψ'' = diag(v1, v2, ..., vn)^(−1) · MPCA · ψ'

where MPCA is an n×d PCA matrix, vi are the associated singular values, i ∈ {1,2,...,n}, and diag(v1, v2, ..., vn) denotes a diagonal matrix;

Step S16: apply L2 normalization to ψ'' to obtain the final global deep convolutional feature φ:

φ = ψ'' / ‖ψ''‖2

Further, in step S2, the similarity between two images is computed as the dot product of their global deep convolutional features:

S(Ip, Iq) = ⟨φ(Ip), φ(Iq)⟩

where ⟨ ⟩ denotes the dot product, S(Ip, Iq) denotes the similarity between images Ip and Iq, a larger value indicating that Ip and Iq are more similar, and φ(Ip), φ(Iq) denote the global deep convolutional features of images Ip and Iq, respectively.

Further, in step S3, performing K-means clustering on the input image set according to the similarity between images comprises the following steps:

Step S31: randomly select k images as the centers of the image clusters, then compute the image cluster ci to which each remaining image Ii belongs by solving the following optimization problem:

ci = argmax over r ∈ {1,...,k} of S(Ii, Ur)

where Ur is the center of the r-th image cluster, r = 1, 2, ..., k, and S(Ii, Ur) is the similarity between image Ii and the cluster center Ur; the cluster whose center has the greatest similarity to image Ii is taken and assigned to ci, so that image Ii belongs to that image cluster ci;

Step S32: set the center of each image cluster to the mean of all images in that cluster, forming a new center, and then re-cluster by solving the optimization problem as in step S31;

Step S33: iterate step S32 until the center of every image cluster no longer changes, yielding k image clusters {Cr | r = 1,...,k}.

Further, in step S4, using a grid search to compute the optimal fully connected conditional random field parameters for each image cluster comprises the following steps:

Step S41: for each image in each image cluster, generate a saliency map with a saliency detection algorithm;

Step S42: for each image in each image cluster, use a grid search to traverse all value combinations of the parameters ω1, ω2, σα, σβ, σγ of the energy function of the fully connected conditional random field; the energy function is computed as follows:

E(L) = Σi −log P(li) + Σi<j μ(li,lj) [ ω1·exp(−‖pi − pj‖²/(2σα²) − ‖ri − rj‖²/(2σβ²)) + ω2·exp(−‖pi − pj‖²/(2σγ²)) ]

where L denotes the saliency labeling of the whole image, L ∈ {0,1}^(W×H), and {0,1}^(W×H) denotes W×H values of 0 or 1; E(L) denotes the energy corresponding to the labeling L; li and lj denote the saliency labels of pixels i and j in L, with 0 meaning non-salient and 1 meaning salient; P(li) is the probability that pixel i takes label li, with P(1) = δi and P(0) = 1 − δi, where δi is the saliency value of pixel i in the saliency map. The energy function consists of two parts: the first part is the unary potential, and the remaining part is the pairwise potential, which comprises two kernels. The first kernel is a bilateral kernel based on pixel distance and color difference; it assigns similar saliency values to pixels that are close in position and similar in color, is weighted by the parameter ω1, and controls the pixel distance and color difference with the parameters σα and σβ. The second kernel depends only on pixel distance, aims to correct small isolated points in the original saliency map, and is controlled by the parameters ω2 and σγ. μ(li, lj) is an indicator function that equals 0 when li = lj and 1 otherwise; pi and pj denote the positions of pixels i and j; ri and rj denote the color values of pixels i and j; the value ranges of the parameters ω1, ω2, σα, σβ, σγ are all sets of integers;

For each value combination of the parameters ω1, ω2, σα, σβ, σγ of each image, solve the minimization problem of the energy function of the fully connected conditional random field:

L* = argmin over L of E(L)

i.e., the saliency labeling L with the minimum energy E(L) is taken and assigned to the saliency labeling L*;

For each parameter value combination of each image, compute the posterior probability of each pixel in the image from the solved saliency labeling L*, thereby generating the optimized saliency map;

Step S43: use the area under the precision-recall curve (PR-AUC), a saliency detection evaluation metric, as the criterion for selecting the optimal parameter combination; for each image cluster, compute the average PR-AUC of the optimized saliency maps of all images in that cluster under each parameter value combination, and take the combination with the highest PR-AUC as the optimal fully connected conditional random field parameters of that cluster.

Further, in step S5, determining the image cluster to which the new input image belongs and optimizing the saliency map of the new input image with the optimal fully connected conditional random field parameters of that cluster comprises the following steps:

Step S51: generate the saliency map of the new input image Ie with the same saliency detection algorithm as in step S41;

Step S52: compute the image cluster ce to which the input image Ie belongs by solving the following optimization problem:

ce = argmax over r ∈ {1,...,k} of S(Ie, Ur)

where Ur is the center of the r-th image cluster obtained in step S3, and S(Ie, Ur) is the similarity between the input image Ie and the cluster center Ur; the cluster whose center has the greatest similarity to the input image Ie is taken and assigned to ce, so that the input image Ie belongs to image cluster ce;

Step S53: optimize the saliency map of the input image Ie with the optimal fully connected conditional random field parameters of image cluster ce according to the formula in step S42, generating the optimized saliency map.

Compared with the prior art, the beneficial effects of the present invention are as follows: a fully connected conditional random field is adopted, using a fully connected graph model together with the color and position information of the color image to adjust the saliency values of the original saliency map, and image clustering and grid search are used to find the optimal kernel weights, with the saliency maps within the same cluster optimized under the same parameter settings. The method is suitable for optimizing a variety of saliency detection algorithms, for saliency maps of both simple and complex scenes, and the optimized saliency maps are closer to the manual annotations than the original ones.

Brief Description of the Drawings

Fig. 1 is a flow chart of the method of the present invention.

Fig. 2 is an implementation flow chart of an embodiment of the present invention.

Detailed Description of the Embodiments

The present invention is further described below with reference to the accompanying drawings and specific embodiments.

The present invention provides a saliency detection optimization method based on a conditional random field, as shown in Fig. 1 and Fig. 2, comprising the following steps.

Step S1: extract the global deep convolutional feature of each image in the input image set. This specifically comprises the following steps.

Step S11: extract the local deep convolutional feature of the image, produced by the last convolutional layer of an image recognition deep network. In this embodiment, the image recognition deep network is the VGG-19 deep network that achieved top performance in the ILSVRC 2014 object recognition competition. For an input image I, a d-dimensional t×t local deep convolutional feature f is output; in this embodiment, it is a 512-dimensional 37×37 local deep convolutional feature f.
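A minimal sketch of how step S11 could be implemented with PyTorch and torchvision is given below; the 592×592 input size, the preprocessing constants, and the helper name are illustrative assumptions rather than details taken from the patent.

```python
import torch
import torchvision
from torchvision import transforms
from PIL import Image

# Pretrained VGG-19; dropping the final max-pooling layer leaves conv5_4 + ReLU as the
# last module, so a 592x592 input yields a 512 x 37 x 37 feature map (the d x t x t
# local deep convolutional feature f quoted in this embodiment).
vgg19 = torchvision.models.vgg19(weights="IMAGENET1K_V1")
conv_layers = torch.nn.Sequential(*list(vgg19.features.children())[:-1]).eval()

preprocess = transforms.Compose([
    transforms.Resize((592, 592)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def local_deep_features(image_path: str) -> torch.Tensor:
    """Return the local deep convolutional feature f of shape (512, 37, 37)."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        f = conv_layers(img)  # shape (1, 512, 37, 37)
    return f.squeeze(0)
```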

Step S12: compute the aggregation weight of the local deep convolutional features as follows:

α(x,y) = exp(−((x − W/2)² + (y − H/2)²) / (2σ²))

where α(x,y) denotes the aggregation weight of pixel (x,y) in image I, W and H denote the width and height of the image, and σ is taken as 1/3 of the distance between the image center and the nearest boundary.

Step S13: aggregate the local deep convolutional features by weighted summation to obtain the preliminary global feature:

ψ = Σx=1..W Σy=1..H α(x,y)·f(x,y)

where ψ denotes the preliminary global feature of image I after aggregation, with dimension 512; W and H denote the width and height of the image, respectively, and in this embodiment W = H = 37; f(x,y) denotes the local deep convolutional feature of pixel (x,y) in image I.

Step S14: apply L2 normalization to ψ:

ψ' = ψ / ‖ψ‖2

where ψ' is the normalized global feature of image I, with dimension 512.

Step S15: apply PCA dimensionality reduction and whitening to ψ' to obtain the n-dimensional global deep convolutional feature ψ''; in this embodiment n is 256:

ψ'' = diag(v1, v2, ..., vn)^(−1) · MPCA · ψ'

where MPCA is an n×d PCA matrix, vi are the associated singular values, i ∈ {1,2,...,n}, and diag(v1, v2, ..., vn) denotes a diagonal matrix.

Step S16: apply L2 normalization to ψ'' to obtain the final global deep convolutional feature φ:

φ = ψ'' / ‖ψ''‖2
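Steps S12 to S16 can be sketched with NumPy and scikit-learn as follows; the function names are hypothetical, and taking the center-to-boundary distance on the t×t feature grid is an assumption made for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def aggregate_global_feature(f: np.ndarray) -> np.ndarray:
    """Steps S12-S14: f has shape (d, t, t); returns the L2-normalized aggregated feature (d,)."""
    d, t, _ = f.shape
    ys, xs = np.mgrid[0:t, 0:t]
    center = (t - 1) / 2.0
    sigma = (t / 2.0) / 3.0  # 1/3 of the distance from the center to the nearest boundary
    alpha = np.exp(-((xs - center) ** 2 + (ys - center) ** 2) / (2 * sigma ** 2))  # step S12
    psi = (f * alpha[None, :, :]).sum(axis=(1, 2))                                 # step S13
    return psi / np.linalg.norm(psi)                                               # step S14

def pca_whiten_and_normalize(psi_matrix: np.ndarray, n: int = 256) -> np.ndarray:
    """Steps S15-S16: psi_matrix has shape (num_images, d); returns (num_images, n)."""
    reduced = PCA(n_components=n, whiten=True).fit_transform(psi_matrix)           # step S15
    return reduced / np.linalg.norm(reduced, axis=1, keepdims=True)                # step S16
```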

Step S2: compute the similarity between every pair of images in the input image set from the global deep convolutional features.

The similarity between two images is computed as the dot product of their global deep convolutional features:

S(Ip, Iq) = ⟨φ(Ip), φ(Iq)⟩

where ⟨ ⟩ denotes the dot product, S(Ip, Iq) denotes the similarity between images Ip and Iq, a larger value indicating that Ip and Iq are more similar, and φ(Ip), φ(Iq) denote the global deep convolutional features of images Ip and Iq, respectively.
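Because the final features are L2-normalized, the pairwise similarities of step S2 reduce to a single matrix product; assuming a features array of shape (num_images, n) stacked from the vectors above, one possible NumPy expression is:

```python
# similarity[p, q] = S(Ip, Iq), the dot product of the two global features
similarity = features @ features.T
```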

Step S3: perform K-means clustering on the input image set according to the similarity between images, forming k mutually independent image clusters. This specifically comprises the following steps.

Step S31: randomly select k images as the centers of the image clusters, then compute the image cluster ci to which each remaining image Ii belongs by solving the following optimization problem:

ci = argmax over r ∈ {1,...,k} of S(Ii, Ur)

where Ur is the center of the r-th image cluster, r = 1, 2, ..., k, and S(Ii, Ur) is the similarity between image Ii and the cluster center Ur; the cluster whose center has the greatest similarity to image Ii is taken and assigned to ci, so that image Ii belongs to that image cluster ci;

Step S32: set the center of each image cluster to the mean of all images in that cluster, forming a new center, and then re-cluster by solving the optimization problem as in step S31;

Step S33: iterate step S32 until the center of every image cluster no longer changes, yielding k image clusters {Cr | r = 1,...,k}.
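The similarity-based K-means of steps S31 to S33 could be sketched as below; recomputing each center as the mean of its members' feature vectors and the iteration cap are assumptions of this illustration.

```python
import numpy as np

def kmeans_by_similarity(features: np.ndarray, k: int, max_iter: int = 100, seed: int = 0):
    """features: (num_images, n). Returns cluster assignments and cluster centers."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)].copy()  # step S31: random init
    prev = None
    for _ in range(max_iter):
        assignments = (features @ centers.T).argmax(axis=1)  # most similar center (dot product)
        if prev is not None and np.array_equal(assignments, prev):
            break                                            # step S33: clusters are stable
        prev = assignments
        for r in range(k):                                   # step S32: new center = cluster mean
            members = features[assignments == r]
            if len(members):
                centers[r] = members.mean(axis=0)
    return assignments, centers
```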

Step S4: use a grid search to compute the optimal fully connected conditional random field parameters for each image cluster. This specifically comprises the following steps.

Step S41: for each image in each image cluster, generate a saliency map with a saliency detection algorithm.

Step S42: for each image in each image cluster, use a grid search to traverse all value combinations of the parameters ω1, ω2, σα, σβ, σγ of the energy function of the fully connected conditional random field. The energy function is computed as follows:

E(L) = Σi −log P(li) + Σi<j μ(li,lj) [ ω1·exp(−‖pi − pj‖²/(2σα²) − ‖ri − rj‖²/(2σβ²)) + ω2·exp(−‖pi − pj‖²/(2σγ²)) ]

where L denotes the saliency labeling of the whole image, L ∈ {0,1}^(W×H), and {0,1}^(W×H) denotes W×H values of 0 or 1; E(L) denotes the energy corresponding to the labeling L; li and lj denote the saliency labels of pixels i and j in L, with 0 meaning non-salient and 1 meaning salient; P(li) is the probability that pixel i takes label li, with P(1) = δi and P(0) = 1 − δi, where δi is the saliency value of pixel i in the saliency map. The energy function consists of two parts: the first part is the unary potential, and the remaining part is the pairwise potential, which comprises two kernels. The first kernel is a bilateral kernel based on pixel distance and color difference; it assigns similar saliency values to pixels that are close in position and similar in color, is weighted by the parameter ω1, and controls the pixel distance and color difference with the parameters σα and σβ. The second kernel depends only on pixel distance, aims to correct small isolated points in the original saliency map, and is controlled by the parameters ω2 and σγ. μ(li, lj) is an indicator function that equals 0 when li = lj and 1 otherwise; pi and pj denote the positions of pixels i and j; ri and rj denote the color values of pixels i and j; the value ranges of the parameters ω1, ω2, σα, σβ, σγ are all sets of integers. In this embodiment, the parameter ranges are: ω1 takes integer values from 3 to 6; ω2 takes integer values from 5 to 10; σα takes the value 3; σβ takes integer values between 20 and 70 that are divisible by 10; σγ takes integer values between 5 and 30 that are divisible by 5, plus the integer 33; the grid search traverses all parameter value combinations within these ranges.

For each value combination of the parameters ω1, ω2, σα, σβ, σγ of each image, solve the minimization problem of the energy function of the fully connected conditional random field:

L* = argmin over L of E(L)

i.e., the saliency labeling L with the minimum energy E(L) is taken and assigned to the saliency labeling L*.

For each parameter value combination of each image, compute the posterior probability of each pixel in the image from the solved saliency labeling L*, thereby generating the optimized saliency map.

Step S43: use the area under the precision-recall curve (PR-AUC), a saliency detection evaluation metric, as the criterion for selecting the optimal parameter combination; for each image cluster, compute the average PR-AUC of the optimized saliency maps of all images in that cluster under each parameter value combination, and take the combination with the highest PR-AUC as the optimal fully connected conditional random field parameters of that cluster.
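A sketch of the grid search of steps S42 and S43 is shown below, assuming the pydensecrf package for the fully connected CRF inference and scikit-learn for the PR-AUC computation; the helper names and the exact mapping of ω1, ω2, σα, σβ, σγ onto the library's arguments are illustrative assumptions. With the ranges quoted in this embodiment, param_grid could be ([3, 4, 5, 6], range(5, 11), [3], range(20, 71, 10), [5, 10, 15, 20, 25, 30, 33]).

```python
import itertools
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax
from sklearn.metrics import auc, precision_recall_curve

def crf_refine(rgb, saliency, w1, w2, s_alpha, s_beta, s_gamma, iters=5):
    """rgb: HxWx3 uint8 image; saliency: HxW map in [0, 1]. Returns the refined HxW map."""
    h, w = saliency.shape
    probs = np.stack([1.0 - saliency, saliency]).astype(np.float32)  # P(0)=1-delta_i, P(1)=delta_i
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(probs))                      # unary potential: -log P(l_i)
    d.addPairwiseBilateral(sxy=s_alpha, srgb=s_beta,                 # appearance kernel, weight w1
                           rgbim=np.ascontiguousarray(rgb), compat=w1)
    d.addPairwiseGaussian(sxy=s_gamma, compat=w2)                    # smoothness kernel, weight w2
    q = np.array(d.inference(iters))                                 # per-pixel posterior marginals
    return q[1].reshape(h, w)                                        # probability of the salient label

def best_params_for_cluster(images, saliency_maps, ground_truths, param_grid):
    """Grid-search the (w1, w2, s_alpha, s_beta, s_gamma) combination maximizing mean PR-AUC."""
    best_params, best_score = None, -1.0
    for params in itertools.product(*param_grid):
        scores = []
        for rgb, sal, gt in zip(images, saliency_maps, ground_truths):
            refined = crf_refine(rgb, sal, *params)
            precision, recall, _ = precision_recall_curve(gt.ravel().astype(int), refined.ravel())
            scores.append(auc(recall, precision))                    # area under the PR curve
        if np.mean(scores) > best_score:
            best_params, best_score = params, float(np.mean(scores))
    return best_params, best_score
```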

Step S5: for the new input image, determine the image cluster it belongs to, and optimize the saliency map of the new input image with the optimal fully connected conditional random field parameters of that cluster. This specifically comprises the following steps.

Step S51: generate the saliency map of the new input image Ie with the same saliency detection algorithm as in step S41;

Step S52: compute the image cluster ce to which the input image Ie belongs by solving the following optimization problem:

ce = argmax over r ∈ {1,...,k} of S(Ie, Ur)

where Ur is the center of the r-th image cluster obtained in step S3, and S(Ie, Ur) is the similarity between the input image Ie and the cluster center Ur; the cluster whose center has the greatest similarity to the input image Ie is taken and assigned to ce, so that the input image Ie belongs to image cluster ce;

Step S53: optimize the saliency map of the input image Ie with the optimal fully connected conditional random field parameters of image cluster ce according to the formula in step S42, generating the optimized saliency map.
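Tying the pieces together, the refinement of a new image in step S5 could look like the short sketch below, reusing the hypothetical helpers defined above (the global feature extraction and crf_refine).

```python
import numpy as np

def refine_new_image(rgb, saliency, feature, centers, cluster_params):
    """feature: global feature of the new image; cluster_params[r]: best CRF parameters of cluster r."""
    c_e = int(np.argmax(centers @ feature))                  # step S52: most similar cluster center
    return crf_refine(rgb, saliency, *cluster_params[c_e])   # step S53: optimize with that cluster's parameters
```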

The above are preferred embodiments of the present invention; all changes made according to the technical solution of the present invention fall within the protection scope of the present invention, provided that the resulting functional effects do not exceed the scope of the technical solution of the present invention.

Claims (6)

Translated from Chinese
1. A saliency detection optimization method based on a conditional random field, characterized in that it comprises the following steps:
Step S1: extract the global deep convolutional feature of each image in an input image set;
Step S2: compute the similarity between every pair of images in the input image set from the global deep convolutional features;
Step S3: perform K-means clustering on the input image set according to the similarity between images, forming k mutually independent image clusters;
Step S4: use a grid search to compute the optimal fully connected conditional random field parameters for each image cluster;
Step S5: for a new input image, determine the image cluster it belongs to, and optimize the saliency map of the new input image with the optimal fully connected conditional random field parameters of that cluster.

2. The saliency detection optimization method based on a conditional random field according to claim 1, characterized in that, in step S1, extracting the global deep convolutional feature of an image comprises the following steps:
Step S11: extract the local deep convolutional feature of the image, produced by the last convolutional layer of an image recognition deep network; for an input image I, output a d-dimensional t×t local deep convolutional feature f;
Step S12: compute the aggregation weight of the local deep convolutional features as follows:
α(x,y) = exp(−((x − W/2)² + (y − H/2)²) / (2σ²))
where α(x,y) denotes the aggregation weight of pixel (x,y) in image I, W and H denote the width and height of the image, and σ is taken as 1/3 of the distance between the image center and the nearest boundary;
Step S13: aggregate the local deep convolutional features by weighted summation to obtain the preliminary global feature:
ψ = Σx=1..W Σy=1..H α(x,y)·f(x,y)
where ψ denotes the preliminary global feature of image I after aggregation, with dimension d; W and H denote the width and height of the image, respectively; and f(x,y) denotes the local deep convolutional feature of pixel (x,y) in image I;
Step S14: apply L2 normalization to ψ:
ψ' = ψ / ‖ψ‖2
where ψ' denotes the normalized global feature of image I, with dimension d;
Step S15: apply PCA dimensionality reduction and whitening to ψ' to obtain the n-dimensional global deep convolutional feature ψ'', n < d:
ψ'' = diag(v1, v2, ..., vn)^(−1) · MPCA · ψ'
where MPCA is an n×d PCA matrix, vi are the associated singular values, i ∈ {1,2,...,n}, and diag(v1, v2, ..., vn) denotes a diagonal matrix;
Step S16: apply L2 normalization to ψ'' to obtain the final global deep convolutional feature φ:
φ = ψ'' / ‖ψ''‖2.

3. The saliency detection optimization method based on a conditional random field according to claim 2, characterized in that, in step S2, the similarity between two images is computed as the dot product of their global deep convolutional features:
S(Ip, Iq) = ⟨φ(Ip), φ(Iq)⟩
where ⟨ ⟩ denotes the dot product, S(Ip, Iq) denotes the similarity between images Ip and Iq, a larger value indicating that Ip and Iq are more similar, and φ(Ip), φ(Iq) denote the global deep convolutional features of images Ip and Iq, respectively.

4. The saliency detection optimization method based on a conditional random field according to claim 3, characterized in that, in step S3, performing K-means clustering on the input image set according to the similarity between images comprises the following steps:
Step S31: randomly select k images as the centers of the image clusters, then compute the image cluster ci to which each remaining image Ii belongs by solving the following optimization problem:
ci = argmax over r ∈ {1,...,k} of S(Ii, Ur)
where Ur is the center of the r-th image cluster, r = 1, 2, ..., k, and S(Ii, Ur) is the similarity between image Ii and the cluster center Ur; the cluster whose center has the greatest similarity to image Ii is taken and assigned to ci, so that image Ii belongs to that image cluster ci;
Step S32: set the center of each image cluster to the mean of all images in that cluster, forming a new center, and then re-cluster by solving the optimization problem as in step S31;
Step S33: iterate step S32 until the center of every image cluster no longer changes, yielding k image clusters {Cr | r = 1,...,k}.

5. The saliency detection optimization method based on a conditional random field according to claim 4, characterized in that, in step S4, using a grid search to compute the optimal fully connected conditional random field parameters for each image cluster comprises the following steps:
Step S41: for each image in each image cluster, generate a saliency map with a saliency detection algorithm;
Step S42: for each image in each image cluster, use a grid search to traverse all value combinations of the parameters ω1, ω2, σα, σβ, σγ of the energy function of the fully connected conditional random field; the energy function is computed as follows:
E(L) = Σi −log P(li) + Σi<j μ(li,lj) [ ω1·exp(−‖pi − pj‖²/(2σα²) − ‖ri − rj‖²/(2σβ²)) + ω2·exp(−‖pi − pj‖²/(2σγ²)) ]
where L denotes the saliency labeling of the whole image, L ∈ {0,1}^(W×H), and {0,1}^(W×H) denotes W×H values of 0 or 1; E(L) denotes the energy corresponding to the labeling L; li and lj denote the saliency labels of pixels i and j in L, with 0 meaning non-salient and 1 meaning salient; P(li) is the probability that pixel i takes label li, with P(1) = δi and P(0) = 1 − δi, where δi is the saliency value of pixel i in the saliency map; the energy function consists of two parts: the first part is the unary potential, and the remaining part is the pairwise potential, which comprises two kernels; the first kernel is a bilateral kernel based on pixel distance and color difference, assigning similar saliency values to pixels that are close in position and similar in color, weighted by the parameter ω1, with the pixel distance and color difference controlled by the parameters σα and σβ; the second kernel depends only on pixel distance, aims to correct small isolated points in the original saliency map, and is controlled by the parameters ω2 and σγ; μ(li, lj) is an indicator function that equals 0 when li = lj and 1 otherwise; pi and pj denote the positions of pixels i and j; ri and rj denote the color values of pixels i and j; the value ranges of the parameters ω1, ω2, σα, σβ, σγ are all sets of integers;
for each value combination of the parameters ω1, ω2, σα, σβ, σγ of each image, solve the minimization problem of the energy function of the fully connected conditional random field:
L* = argmin over L of E(L)
i.e., the saliency labeling L with the minimum energy E(L) is taken and assigned to the saliency labeling L*;
for each parameter value combination of each image, compute the posterior probability of each pixel in the image from the solved saliency labeling L*, thereby generating the optimized saliency map;
Step S43: use the area under the precision-recall curve (PR-AUC), a saliency detection evaluation metric, as the criterion for selecting the optimal parameter combination; for each image cluster, compute the average PR-AUC of the optimized saliency maps of all images in that cluster under each parameter value combination, and take the combination with the highest PR-AUC as the optimal fully connected conditional random field parameters of that cluster.

6. The saliency detection optimization method based on a conditional random field according to claim 5, characterized in that, in step S5, determining the image cluster to which the new input image belongs and optimizing the saliency map of the new input image with the optimal fully connected conditional random field parameters of that cluster comprises the following steps:
Step S51: generate the saliency map of the new input image Ie with the same saliency detection algorithm as in step S41;
Step S52: compute the image cluster ce to which the input image Ie belongs by solving the following optimization problem:
ce = argmax over r ∈ {1,...,k} of S(Ie, Ur)
where Ur is the center of the r-th image cluster obtained in step S3, and S(Ie, Ur) is the similarity between the input image Ie and the cluster center Ur; the cluster whose center has the greatest similarity to the input image Ie is taken and assigned to ce, so that the input image Ie belongs to image cluster ce;
Step S53: optimize the saliency map of the input image Ie with the optimal fully connected conditional random field parameters of image cluster ce according to the formula in step S42, generating the optimized saliency map.
CN201810256988.6A | 2018-03-26 | 2018-03-26 | Saliency detection optimization method based on conditional random field | Expired - Fee Related | CN108491883B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810256988.6A CN108491883B (en) | 2018-03-26 | 2018-03-26 | Saliency detection optimization method based on conditional random field

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810256988.6A CN108491883B (en) | 2018-03-26 | 2018-03-26 | Saliency detection optimization method based on conditional random field

Publications (2)

Publication Number | Publication Date
CN108491883A | 2018-09-04
CN108491883B (en) | 2022-03-22

Family

ID=63337515

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810256988.6A (Expired - Fee Related) CN108491883B (en) | Saliency detection optimization method based on conditional random field | 2018-03-26 | 2018-03-26

Country Status (1)

Country | Link
CN (1) | CN108491883B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104103082A (en) * | 2014-06-06 | 2014-10-15 | 华南理工大学 | Image saliency detection method based on region description and prior knowledge
CN105574534A (en) * | 2015-12-17 | 2016-05-11 | 西安电子科技大学 | Salient object detection method based on sparse subspace clustering and low-rank representation
CN107330431A (en) * | 2017-06-30 | 2017-11-07 | 福州大学 | Saliency detection optimization method based on K-means clustering fitting

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ARTEM BABENKO ET AL: "Aggregating Local Deep Features for Image Retrieval", Computer Science *
DONGJING SHAN ET AL: "Saliency Optimization via Low Rank Matrix Recovery with Multi-Prior Integration", IEEE/CAA Journal of Automatica Sinica *
YUZHEN NIU ET AL: "CF-based optimisation for saliency detection", IET Computer Vision *
YUZHEN NIU ET AL: "Fitting-based optimisation for image visual salient object detection", IET Computer Vision *
邓燕子 et al.: "Road detection combining scene structure and conditional random fields", Journal of Huazhong University of Science and Technology (Natural Science Edition) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109740646A (en) * | 2018-12-19 | 2019-05-10 | 创新奇智(北京)科技有限公司 | Image difference comparison method and system, and electronic device
CN109740646B (en) * | 2018-12-19 | 2021-01-05 | 创新奇智(北京)科技有限公司 | Image difference comparison method and system and electronic device
CN109800692A (en) * | 2019-01-07 | 2019-05-24 | 重庆邮电大学 | Visual SLAM loop closure detection method based on a pre-trained convolutional neural network
CN109800692B (en) * | 2019-01-07 | 2022-12-27 | 重庆邮电大学 | Visual SLAM loop detection method based on pre-training convolutional neural network
CN111680702A (en) * | 2020-05-28 | 2020-09-18 | 杭州电子科技大学 | Method for weakly supervised image saliency detection using detection boxes
CN111680702B (en) * | 2020-05-28 | 2022-04-01 | 杭州电子科技大学 | Method for realizing weakly supervised image saliency detection by using detection boxes

Also Published As

Publication number | Publication date
CN108491883B (en) | 2022-03-22

Similar Documents

PublicationPublication DateTitle
CN106203430B (en)A kind of conspicuousness object detecting method based on foreground focused degree and background priori
CN109684922B (en) A multi-model recognition method for finished dishes based on convolutional neural network
CN107844795B (en)Convolutional neural network feature extraction method based on principal component analysis
CN107145889B (en) Object recognition method based on dual CNN network with RoI pooling
Zhou et al.Salient object detection via fuzzy theory and object-level enhancement
CN106023257B (en)A kind of method for tracking target based on rotor wing unmanned aerial vehicle platform
Lu et al.A novel approach for video text detection and recognition based on a corner response feature map and transferred deep convolutional neural network
CN111507334B (en) An instance segmentation method based on key points
CN108108746A (en)License plate character recognition method based on Caffe deep learning frames
JP4098021B2 (en) Scene identification method, apparatus, and program
CN105678278A (en)Scene recognition method based on single-hidden-layer neural network
CN103413119A (en)Single sample face recognition method based on face sparse descriptors
CN109344856B (en)Offline signature identification method based on multilayer discriminant feature learning
CN111583279A (en) A Superpixel Image Segmentation Method Based on PCBA
CN102073841A (en)Poor video detection method and device
CN114359323B (en)Image target area detection method based on visual attention mechanism
CN117197064B (en) A contactless automatic analysis method for red eye degree
CN108491883B (en)Saliency detection optimization method based on conditional random field
Yogarajah et al.A dynamic threshold approach for skin tone detection in colour images
CN104715476B (en)A kind of well-marked target detection method based on histogram power function fitting
CN109993772A (en) Instance-level feature aggregation method based on spatiotemporal sampling
CN112101283A (en)Intelligent identification method and system for traffic signs
CN115359562A (en)Sign language letter spelling recognition method based on convolutional neural network
CN110334597A (en) Finger vein recognition method and system based on Gabor neural network
CN119888616A (en)X-ray contraband detection method based on contour sensing and optimal distribution migration

Legal Events

Date | Code | Title | Description
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2022-03-22
