
Salient Object Detection Method Based on Image Object Semantic Detection

Info

Publication number
CN107704864A
Authority
CN
China
Prior art keywords
image
detection
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610546190.6A
Other languages
Chinese (zh)
Other versions
CN107704864B (en)
Inventor
于纯妍
张维石
宋梅萍
王春阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University
Priority to CN201610546190.6A
Publication of CN107704864A
Application granted
Publication of CN107704864B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention discloses a salient object detection method based on image objectness semantic detection, comprising the following steps. S1: randomly select a number of images together with their salient-object annotation maps to form a sample database. S2: within each image, randomly and densely sample x1, x2, y1 and y2 to generate detection windows w(x1, y1, x2, y2). S3: under each detection window w, compute the objectness edge-density feature EF, the objectness convex-hull feature CF and the objectness luminance-contrast feature LF of the image. S4: using a Bayesian framework, collect the probability density of each feature value from S3 under the detection windows and compute the conditional probability of each feature. S5: fuse the three image features with a naive Bayes model to build a salient-object recognition model. S6: perform salient object detection on an input image I': compute the feature values EF, CF and LF under each detection window, fuse the features with the naive Bayes model of S5, and select the best window by non-maximum suppression to mark the detected salient object.

Description

Translated from Chinese
Salient object detection method based on image objectness semantic detection

Technical Field

The invention relates to the technical field of image detection, and in particular to a salient object detection method based on image objectness semantic detection.

Background Art

Salient object detection extracts from an image the target objects that attract the visual system. Recognizing salient objects is of significant research value for image retrieval, classification and detection, and for object tracking. At present, most visual saliency detection methods start from global image features and extract large feature differences as the saliency map of the image, making little use of image object semantics. Other algorithms compute mid-level semantic features such as SIFT or bag-of-words (BoW), which increases the computational cost of detection. Moreover, a saliency map as the detection result cannot single out the most attention-grabbing object, nor does it give the precise location of the salient object.

Summary of the Invention

In view of the problems in the prior art, the invention discloses a salient object detection method based on image objectness semantic detection. Starting from the edge saliency, luminance saliency and convex-hull object saliency of the image, the method defines and computes objectness features of the image under a detection window, and uses a Bayesian framework to detect the exact location of the salient object. The specific scheme is as follows:

S1: Randomly select a number of images together with their salient-object annotation maps to form a sample database.

S2: Within each image, randomly and densely sample x1, x2, y1 and y2 to generate detection windows w(x1, y1, x2, y2).

S3: Under each detection window w, compute the objectness edge-density feature EF of the image, the objectness convex-hull feature CF of the image, and the objectness luminance-contrast feature LF of the image.

S4: Using a Bayesian framework, collect the probability density of each feature value from S3 under the detection windows, and compute the conditional probability of each feature.

S5: Fuse the three image features with a naive Bayes model to build a salient-object recognition model.

S6: Perform salient object detection with the above: input an image I' to be detected, compute the feature values EF, CF and LF under each detection window, fuse the features with the naive Bayes model of S5, and select the best window by non-maximum suppression to mark the detected salient object.

The objectness edge-density feature EF of the image is computed as follows:

Convert the image to grayscale, obtain a binary edge image with Canny edge detection, and fill the gaps in the edge contours to obtain the image Ic; a pixel interval greater than 1 is treated as a gap. Under a detection window w(W, H), the continuous edge density of the image is defined as

EF(w) = (R_w + R_h) E_r / (2 S_r)    (1)

where E_r denotes the continuous edges inside the ring window, S_r is the area of the rectangular ring, and R_w and R_h are the width and height of the ring window, computed as

R_w = W / 4    (2)

R_h = H / 4.    (3)
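
For illustration, here is a minimal Python sketch of this edge-density computation, assuming OpenCV's Canny detector with a 3×3 morphological closing as the gap-filling step (the patent does not name a specific filling operation); the Canny thresholds and the helper name edge_density_EF are hypothetical choices:

```python
import cv2
import numpy as np

def edge_density_EF(img_bgr, x1, y1, x2, y2, canny_lo=50, canny_hi=150):
    """Objectness edge-density feature EF(w) per equations (1)-(3)."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)           # binary edge image
    # Fill contour gaps wider than 1 pixel; a 3x3 closing is one plausible choice.
    Ic = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))

    W, H = x2 - x1, y2 - y1
    Rw, Rh = W // 4, H // 4                               # equations (2) and (3)
    window = Ic[y1:y2, x1:x2].astype(bool)
    # Rectangular ring: the window minus its interior shrunk by (Rw, Rh).
    inner = np.zeros_like(window)
    inner[Rh:H - Rh, Rw:W - Rw] = window[Rh:H - Rh, Rw:W - Rw]
    Er = window.sum() - inner.sum()   # edge pixels in the ring, approximating E_r
    Sr = W * H - (W - 2 * Rw) * (H - 2 * Rh)              # ring area S_r
    return (Rw + Rh) * Er / (2.0 * Sr)                    # equation (1)
```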

The objectness convex-hull feature CF of the image is computed as follows:

Obtain the image Im by clustering and compute its neighborhood image In; then extract the CSS (curvature-scale-space) corner set c1 of In with a corner detector using an adaptive threshold and angle. The neighborhood image In is computed as

I_n = g_k * I_m − I_m    (4)

where g_k is the image averaging convolution operator.

Set an image-edge threshold and remove the edge corners c2, so that the set of salient object points is finally defined as C = {c1 − c2}. Then compute the convex hull of C with the Graham scan and its area S_c. Under the window w, the convex-hull objectness feature CF is then defined from S_c (equation (5)).
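
The following Python sketch approximates this pipeline under stated substitutions: mean-shift filtering for the clustered image Im, the averaging difference of equation (4), and Shi-Tomasi corners standing in for the CSS detector, which does not ship with OpenCV. It stops at the hull area S_c, from which equation (5) derives CF; all parameter values are hypothetical:

```python
import cv2
import numpy as np

def convex_hull_points_and_area(img_bgr, edge_margin=10, max_corners=200):
    """Corner set C and convex-hull area Sc used by the CF feature."""
    Im = cv2.pyrMeanShiftFiltering(img_bgr, sp=10, sr=10)   # clustered image Im
    gray = cv2.cvtColor(Im, cv2.COLOR_BGR2GRAY).astype(np.float32)
    In = cv2.blur(gray, (5, 5)) - gray                      # equation (4), gk = averaging
    In = cv2.normalize(In, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Shi-Tomasi corners as a stand-in for the CSS corner set c1.
    pts = cv2.goodFeaturesToTrack(In, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=5)
    if pts is None:
        return None, 0.0
    pts = pts.reshape(-1, 2)
    # Drop corners near the image border (the edge-corner set c2).
    h, w = In.shape
    keep = ((pts[:, 0] > edge_margin) & (pts[:, 0] < w - edge_margin) &
            (pts[:, 1] > edge_margin) & (pts[:, 1] < h - edge_margin))
    C = pts[keep]
    if len(C) < 3:
        return C, 0.0
    hull = cv2.convexHull(C.astype(np.float32))             # Graham-style hull
    Sc = cv2.contourArea(hull)                              # hull area Sc
    return hull, Sc
```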

The objectness luminance-contrast feature LF of the image is computed as follows:

Given the detection window w(W, H) and its surrounding window ring w', make the area of the rectangular ring equal to the area of the window w by setting the ring's width and height to

R_w = W × (√2 − 1) / 2    (6)

R_h = H × (√2 − 1) / 2    (7)

Use integral images to accumulate the luminance histograms of the image inside w and w', and normalize them to obtain H(w) and H(w'). The center-contrast luminance feature under the window is then defined as

LF(w, θ_YC) = Σ_N H(w) log ( H(w) / H(w') )    (8)

where N is the number of bins in the histogram.
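
A minimal Python sketch of this feature, assuming the L channel of a Lab image and plain NumPy histograms in place of the integral-image bookkeeping; the bin count and the epsilon guard are hypothetical choices:

```python
import numpy as np

def luminance_contrast_LF(L, x1, y1, x2, y2, n_bins=16, eps=1e-9):
    """Objectness luminance-contrast feature LF per equations (6)-(8)."""
    W, H = x2 - x1, y2 - y1
    Rw = int(W * (np.sqrt(2) - 1) / 2)      # equation (6)
    Rh = int(H * (np.sqrt(2) - 1) / 2)      # equation (7)
    # With (6)-(7) the outer rectangle has twice the window area,
    # so the ring area equals the window area.
    ox1, oy1 = max(x1 - Rw, 0), max(y1 - Rh, 0)
    ox2, oy2 = min(x2 + Rw, L.shape[1]), min(y2 + Rh, L.shape[0])
    outer = L[oy1:oy2, ox1:ox2]
    inner = L[y1:y2, x1:x2]

    hw, _ = np.histogram(inner, bins=n_bins, range=(0, 255))
    ho, _ = np.histogram(outer, bins=n_bins, range=(0, 255))
    ho = ho - hw                                   # keep only the ring pixels
    Hw = hw / max(hw.sum(), 1)                     # normalized H(w)
    Hr = np.clip(ho, 0, None) / max(ho.sum(), 1)   # normalized H(w')
    # Equation (8): KL-style center contrast between window and ring.
    return float(np.sum(Hw * np.log((Hw + eps) / (Hr + eps))))
```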

S4: Feature training under the Bayesian framework proceeds as follows:

For each image in the sample database, generate detection windows w and count whether the foreground and background images fall inside each window, obtaining N_o and N_b, where N_o is the number of counted foreground images and N_b the number of counted background images. The prior probabilities of the foreground and background images are

p(o) = N_o / (N_o + N_b)    (9)

p(b) = 1 − p(o)    (10)

Compute the three objectness features as in the steps above, collect the probability density of each feature value under the detection windows, and then compute the conditional probability of each feature as

p(F | o) = Π_pos H_pos(F(w)) / N_o    (11)

p(F | b) = Π_neg H_neg(F(w)) / N_b    (12)

where F = {EF, CF, LF} denotes the objectness features and H(F(w)) is the number of pixels of each feature value in the statistical distribution.
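
A sketch of this training step in Python, assuming the feature values have already been computed per window; the bin count and the Laplace smoothing (which the patent does not mention) are added assumptions that keep the likelihoods nonzero:

```python
import numpy as np

def train_feature_likelihoods(feat_values, labels, n_bins=32):
    """Priors and per-feature likelihoods per equations (9)-(12).

    feat_values: dict mapping 'EF'/'CF'/'LF' to an array of feature values,
    one per sampled window; labels: +1 foreground, -1 background.
    """
    labels = np.asarray(labels)
    No = int((labels == 1).sum())
    Nb = int((labels == -1).sum())
    p_o = No / (No + Nb)                          # equation (9)
    p_b = 1.0 - p_o                               # equation (10)

    likelihoods = {}
    for name, v in feat_values.items():
        v = np.asarray(v, dtype=float)
        edges = np.histogram_bin_edges(v, bins=n_bins)
        h_pos, _ = np.histogram(v[labels == 1], bins=edges)
        h_neg, _ = np.histogram(v[labels == -1], bins=edges)
        likelihoods[name] = {
            'edges': edges,
            'p_f_o': (h_pos + 1) / (No + n_bins),   # equation (11), smoothed
            'p_f_b': (h_neg + 1) / (Nb + n_bins),   # equation (12), smoothed
        }
    return p_o, p_b, likelihoods
```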

In S5, the naive Bayes model fuses the three image features into the salient-object recognition model

p(o | EF, CF, LF) = p(o) p(EF, CF, LF | o) / p(EF, CF, LF)
= p(o) p(EF|o) p(CF|o) p(LF|o) / [ p(EF|o) p(CF|o) p(LF|o) p(o) + p(EF|b) p(CF|b) p(LF|b) p(b) ].    (13)

With the above technical solution, the invention starts from the perspective of image saliency and assumes that a salient object has continuous edges, consistent interior elements and consistent luminance, so that the objectness semantic features of an image satisfy three conditions: (1) a salient object has an edge-density semantic feature; (2) the interest points enclosed by the convex hull of the image concentrate most of the visual fixation area and carry a clear object semantic feature; (3) a salient object has a luminance-consistency semantic feature. Starting from basic visual features, the invention defines and computes three objectness semantic features of salient objects under a detection window, learns the image objectness features under a Bayesian framework, and builds a detection model that can quickly and accurately locate the exact position of a salient object and generalizes to salient object detection for common object categories.

Description of the Drawings

To explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; those of ordinary skill in the art can derive other drawings from them without creative effort.

Fig. 1 is a schematic diagram of the technical scheme of the detection method of the invention;

Fig. 2 is a flow chart of convex-hull object extraction in the embodiment;

Figs. 3(a)-3(d) are the source images to be judged in the embodiment;

Figs. 4(a)-4(b) are schematic diagrams of the generated detection windows;

Figs. 5(a)-5(b) are binary continuous-edge images in the embodiment;

Figs. 6(a)-6(b) are convex-hull object feature maps in the embodiment;

Figs. 7(a)-7(b) show the salient-object results extracted in the embodiment;

Figs. 8(a)-8(b) show the salient objects detected in the embodiment.

Detailed Description of the Embodiments

To make the technical solution and advantages of the invention clearer, the technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings:

The salient object detection method based on image objectness semantic detection shown in Fig. 1 proceeds through the following concrete steps:

A. Sample data source:

Randomly select 200 images, each containing a single kind of target, from the Pascal VOC and Caltech101 image databases; they cover 20 categories, including everyday objects, airplanes and so on.

B. Generate detection windows:

Before training starts, detection windows are generated at random for each image: let the height and width of the image be h and w, and assume a salient object is no smaller than 10×10 pixels; randomly sample x1, x2, y1 and y2 within the image so that (x2 − x1) > 10 and (y2 − y1) > 10, and generate 1000 sliding windows (x1, y1, x2, y2). Fig. 3 illustrates the generated windows; from right to left the numbers of randomly generated windows are 5, 50, 100 and 1000.
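
A minimal Python sketch of this window-sampling step; the function name and the rejection-sampling loop are illustrative:

```python
import numpy as np

def sample_windows(w, h, n=1000, min_size=10, rng=None):
    """Randomly sample n detection windows (x1, y1, x2, y2) inside a
    w-by-h image, with (x2 - x1) > min_size and (y2 - y1) > min_size."""
    rng = rng or np.random.default_rng()
    wins = []
    while len(wins) < n:
        x1, x2 = np.sort(rng.integers(0, w, size=2))
        y1, y2 = np.sort(rng.integers(0, h, size=2))
        if x2 - x1 > min_size and y2 - y1 > min_size:
            wins.append((int(x1), int(y1), int(x2), int(y2)))
    return wins
```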

C. Extract the edge-density feature of the object under the detection window w:

First convert the image to grayscale, then obtain a binary edge image with Canny edge detection and fill the gaps in the edge contours (an interval greater than 1 pixel is treated as a gap) to obtain the image Ic. Under the detection window w(W, H), compute the continuous edge density EF of the image according to equations (1), (2) and (3).

D. Extract the objectness convex-hull feature under the detection window w, following the method shown in Fig. 2:

First apply median filtering to remove part of the image noise, then obtain the image Im with mean-shift clustering. Concretely, in the Luv color space, define the kernel function K and compute the mean-shift vector

m_h(x) = Σ_{i=1..n} x_i g(‖(x − x_i)/h‖²) / Σ_{i=1..n} g(‖(x − x_i)/h‖²) − x

where g(x) = −K′(x) and h is the bandwidth of the kernel function, set to 10.

Compute the neighborhood image In according to equation (4), then extract the CSS corner set c1 of In with the curvature-scale-space corner detector using an adaptive threshold and angle. Set an image-edge threshold and remove the edge corners c2, giving the set of salient object points C = {c1 − c2}. Then compute the convex hull of C with the Graham scan and its area S_c, and compute the objectness convex-hull feature according to equation (5). During the computation, integral images are used to evaluate the various areas and speed up the run.

E. Extract the objectness luminance-contrast feature of the image under the rectangular ring:

First convert the image from RGB to the Lab color space. With the detection window w(W, H) known, set the width and height of the outer window ring according to equations (6) and (7), then use integral images to accumulate the luminance histograms of the image inside w and w', normalize them, and compute the center-contrast luminance feature under the detection window with equation (8).

F. Feature training under the Bayesian framework:

For each image in the sample database, generate detection windows w and compare the image inside each window w with the positive (target) sample marked by a box in the sample image: with the image area inside the window w being s_w and the image area inside the manually annotated box being s_b, compute the overlap ratio p between them; when p > 0.5 the window is treated as a foreground image and labeled 1, otherwise it is a background image and labeled −1. Counting in this way whether the foreground and background images fall inside the windows yields N_o and N_b, where N_o is the number of counted foreground images and N_b the number of counted background images. Compute the prior probabilities of the foreground and background images according to equations (9) and (10), collect the probability density of each feature value under the detection windows, and then compute the conditional probabilities of the objectness features p(EF|o), p(EF|b), p(CF|o), p(CF|b), p(LF|o) and p(LF|b) according to equations (11) and (12).
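
A Python sketch of this window-labeling step; the patent's exact formula for the overlap score p is not reproduced in the text, so intersection over union is used here as a stand-in:

```python
def label_window(win, box, thresh=0.5):
    """Label a sampled window against the annotated box: +1 foreground if
    the overlap score p exceeds thresh, else -1 background. IoU is a
    stand-in for the patent's (unreproduced) overlap formula."""
    ax1, ay1, ax2, ay2 = win
    bx1, by1, bx2, by2 = box
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    p = inter / union if union > 0 else 0.0
    return 1 if p > thresh else -1
```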

G. Build the object detection model:

Fuse the three image features according to equation (13) and build the salient-object recognition model BM(EF, CF, LF).
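
A minimal Python sketch of the fusion of equation (13), assuming the per-window likelihoods have already been looked up from the trained histograms:

```python
def posterior_salient(p_o, p_b, lik_o, lik_b):
    """Naive-Bayes fusion of the three feature likelihoods, equation (13).

    lik_o / lik_b: dicts mapping 'EF'/'CF'/'LF' to p(F|o) and p(F|b)
    for the current window."""
    num = p_o        # p(o) * product of p(F|o)
    den_b = p_b      # p(b) * product of p(F|b)
    for f in ('EF', 'CF', 'LF'):
        num *= lik_o[f]
        den_b *= lik_b[f]
    return num / (num + den_b)
```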

H. Salient object detection on example images:

Input the image I to be detected and, as shown in Fig. 4, generate 1000 detection windows according to step B; then compute the feature values EF, CF and LF under each detection window according to steps C, D and E respectively. Fig. 5 shows the binary edge image of image I, Fig. 6 the clustered image Im of image I, and Fig. 7 the detected objectness convex-hull region of image I. Next, fuse the three objectness features with BM(EF, CF, LF), and select the best window by non-maximum suppression; the detected salient objects are shown in Fig. 8.
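
A sketch of this final non-maximum-suppression step in Python; the greedy scheme and the IoU threshold are common choices rather than details given in the patent:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(windows, scores, iou_thresh=0.5):
    """Greedy NMS over scored windows; returns kept windows, best first."""
    order = sorted(range(len(windows)), key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(windows[i], windows[j]) < iou_thresh for j in kept):
            kept.append(i)
    return [windows[i] for i in kept]
```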

The above is only a preferred embodiment of the invention, but the scope of protection of the invention is not limited to it; any equivalent substitution or change that a person skilled in the art can make within the technical scope disclosed by the invention, according to the technical solution of the invention and its inventive concept, shall fall within the scope of protection of the invention.

Claims (6)

Translated from Chinese
1. A salient object detection method based on image objectness semantic detection, characterized by comprising the following steps:
S1: randomly select a number of images together with their salient-object annotation maps to form a sample database;
S2: within each image, randomly and densely sample x1, x2, y1 and y2 to generate detection windows w(x1, y1, x2, y2);
S3: under each detection window w, compute the objectness edge-density feature EF of the image, the objectness convex-hull feature CF of the image, and the objectness luminance-contrast feature LF of the image;
S4: using a Bayesian framework, collect the probability density of each feature value from S3 under the detection windows, and compute the conditional probability of each feature;
S5: fuse the three image features with a naive Bayes model to build a salient-object recognition model;
S6: perform salient object detection with the above: input an image I' to be detected, compute the feature values EF, CF and LF under each detection window, fuse the features with the naive Bayes model of S5, and select the best window by non-maximum suppression to mark the detected salient object.

2. The salient object detection method based on image objectness semantic detection according to claim 1, further characterized in that the objectness edge-density feature EF of the image is computed as follows: convert the image to grayscale, obtain a binary edge image with Canny edge detection, and fill the gaps in the edge contours to obtain the image Ic, a pixel interval greater than 1 being treated as a gap; under a detection window w(W, H), the continuous edge density of the image is defined as

EF(w) = (R_w + R_h) E_r / (2 S_r)    (1)

where E_r denotes the continuous edges inside the ring window, S_r is the area of the rectangular ring, and R_w and R_h are the width and height of the ring window:

R_w = W / 4    (2)

R_h = H / 4.    (3)

3. The salient object detection method based on image objectness semantic detection according to claim 1, further characterized in that the objectness convex-hull feature CF of the image is computed as follows: obtain the image Im by clustering and compute its neighborhood image In; extract the CSS corner set c1 of In with a curvature-scale-space corner detector using an adaptive threshold and angle, the neighborhood image In being computed as

I_n = g_k * I_m − I_m    (4)

where g_k is the image averaging convolution operator; set an image-edge threshold and remove the edge corners c2, so that the set of salient object points is finally defined as C = {c1 − c2}; then compute the convex hull of C with the Graham scan and its area S_c; under the window w, the convex-hull objectness feature CF is then defined from S_c (equation (5)).

4. The salient object detection method based on image objectness semantic detection according to claim 1, further characterized in that the objectness luminance-contrast feature LF of the image is computed as follows: given the detection window w(W, H) and its surrounding window ring w', make the area of the rectangular ring equal to the area of the window w by setting the ring's width and height to

R_w = W × (√2 − 1) / 2    (6)

R_h = H × (√2 − 1) / 2    (7)

use integral images to accumulate the luminance histograms of the image inside w and w' and normalize them to obtain H(w) and H(w'); the center-contrast luminance feature under the window is defined as

LF(w, θ_YC) = Σ_N H(w) log ( H(w) / H(w') )    (8)

where N is the number of bins in the histogram.

5. The salient object detection method based on image objectness semantic detection according to claim 1, further characterized in that in S4 the feature training under the Bayesian framework proceeds as follows: for each image in the sample database, generate detection windows w and count whether the foreground and background images fall inside each window, obtaining N_o and N_b, where N_o is the number of counted foreground images and N_b the number of counted background images; the prior probabilities of the foreground and background images are

p(o) = N_o / (N_o + N_b)    (9)

p(b) = 1 − p(o)    (10)

compute the three objectness features as in the steps above, collect the probability density of each feature value under the detection windows, and then compute the conditional probability of each feature as

p(F | o) = Π_pos H_pos(F(w)) / N_o    (11)

p(F | b) = Π_neg H_neg(F(w)) / N_b    (12)

where F = {EF, CF, LF} denotes the objectness features and H(F(w)) is the number of pixels of each feature value in the statistical distribution.

6. The salient object detection method based on image objectness semantic detection according to claim 1, further characterized in that in S5 the naive Bayes model fuses the three image features into the salient-object recognition model

p(o | EF, CF, LF) = p(o) p(EF, CF, LF | o) / p(EF, CF, LF)
= p(o) p(EF|o) p(CF|o) p(LF|o) / [ p(EF|o) p(CF|o) p(LF|o) p(o) + p(EF|b) p(CF|b) p(LF|b) p(b) ].    (13)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201610546190.6A | 2016-07-11 | 2016-07-11 | Salient object detection method based on image object semantic detection (granted as CN107704864B)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201610546190.6A | 2016-07-11 | 2016-07-11 | Salient object detection method based on image object semantic detection (granted as CN107704864B)

Publications (2)

Publication Number | Publication Date
CN107704864A (en) | 2018-02-16
CN107704864B (en) | 2020-10-27

Family

ID=61168695

Family Applications (1)

Application Number | Status | Granted Publication
CN201610546190.6A | Active | CN107704864B (en)

Country Status (1)

Country | Link
CN (1) | CN107704864B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
US20130124514A1 (en)* | 2011-03-10 | 2013-05-16 | International Business Machines Corporation | Hierarchical ranking of facial attributes
CN103049751A (en)* | 2013-01-24 | 2013-04-17 | Soochow University | Improved weighting region matching high-altitude video pedestrian recognizing method
US10353948B2 (en)* | 2013-09-04 | 2019-07-16 | Shazura, Inc. | Content based image retrieval
CN104103082A (en)* | 2014-06-06 | 2014-10-15 | South China University of Technology | Image saliency detection method based on region description and priori knowledge
CN104050460A (en)* | 2014-06-30 | 2014-09-17 | Nanjing University of Science and Technology | Pedestrian detection method with multi-feature fusion

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Chuan Yang et al., "Graph-Regularized Saliency Detection With Convex-Hull-Based Center Prior", IEEE Signal Processing Letters *
Fanjie Meng et al., "Image fusion with saliency map and interest points", Neurocomputing *
Itti et al., "A model of saliency-based visual attention for rapid scene analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence *
Xu Wei et al., "Salient object detection using hierarchical prior estimation" (利用层次先验估计的显著性目标检测), Acta Automatica Sinica (《自动化学报》) *
Jing Huiyun, "Research on key technologies of visual saliency detection" (视觉显著性检测关键技术研究), Wanfang Data (《万方数据知识服务平台》) *
Li Junhao et al., "Object detection based on visual saliency map and objectness" (基于视觉显著性图与似物性的对象检测), Journal of Computer Applications (《计算机应用》) *
Shen Huan et al., "Road vehicle detection method fusing multiple features" (融合多种特征的路面车辆检测方法), Journal of Optoelectronics·Laser (《光电子·激光》) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN108960042A (en)* | 2018-05-17 | 2018-12-07 | First Affiliated Hospital of Xinjiang Medical University | Echinococcus protoscolex survival rate detection method using visual saliency and SIFT features
CN108960042B (en)* | 2018-05-17 | 2021-06-08 | First Affiliated Hospital of Xinjiang Medical University | Visual saliency and SIFT features for detecting the survival rate of Echinococcus protoscoleces
CN110598776A (en)* | 2019-09-03 | 2019-12-20 | Chengdu University of Information Technology | Image classification method based on intra-class visual mode sharing
CN111639672A (en)* | 2020-04-23 | 2020-09-08 | Aerospace Information Research Institute, Chinese Academy of Sciences | Deep learning city functional area classification method based on majority voting
CN111639672B (en)* | 2020-04-23 | 2023-12-19 | Aerospace Information Research Institute, Chinese Academy of Sciences | A deep learning urban functional area classification method based on majority voting

Also Published As

Publication Number | Publication Date
CN107704864B (en) | 2020-10-27

Similar Documents

Publication | Title
Kim et al. | An Efficient Color Space for Deep-Learning Based Traffic Light Recognition
US8620078B1 | Determining a class associated with an image
US9911033B1 | Semi-supervised price tag detection
CN102414720B (en) | Feature quantity calculation device, feature quantity calculation method
CN102968637B (en) | Complicated background image and character division method
CN105184763B (en) | Image processing method and device
US9418440B2 (en) | Image segmenting apparatus and method
CN106651872A (en) | Prewitt operator-based pavement crack recognition method and system
CN109977899B (en) | A method and system for training, reasoning and adding new categories of item recognition
CN108629286B (en) | Remote sensing airport target detection method based on subjective perception significance model
CN104156693A (en) | Motion recognition method based on multi-model sequence fusion
CN111340831B (en) | Point cloud edge detection method and device
Abedin et al. | Traffic sign recognition using SURF: speeded-up robust feature descriptor and artificial neural network classifier
CN111695373A (en) | Zebra crossing positioning method, system, medium and device
CN114581709A (en) | Model training, method, apparatus and medium for recognizing objects in medical images
CN103824090A (en) | Adaptive face low-level feature selection method and face attribute recognition method
CN113591850A (en) | Two-stage trademark detection method based on computer vision robustness target detection
CN103810716A (en) | Image segmentation method based on grey scale flitting and Renyi entropy
CN113591719A (en) | Method and device for detecting text with any shape in natural scene and training method
CN103927759A (en) | Automatic cloud detection method of aerial images
CN108664968A (en) | A kind of unsupervised text positioning method based on text selection model
CN111241911A (en) | An adaptive lane line detection method
CN107704864B (en) | Salient object detection method based on image object semantic detection
CN104866850A (en) | Optimized binarization method for document images
CN105023269B (en) | A kind of vehicle mounted infrared image colorization method

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
