CN105261037A - Moving object detection method capable of automatically adapting to complex scenes - Google Patents

Moving object detection method capable of automatically adapting to complex scenes

Info

Publication number
CN105261037A
Authority
CN
China
Prior art keywords
image
background
gaussian
model
sigma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510645189.4A
Other languages
Chinese (zh)
Other versions
CN105261037B (en)
Inventor
闫河
杨德红
刘婕
王朴
陈伟栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Technology
Original Assignee
Chongqing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Technology
Priority to CN201510645189.4A
Publication of CN105261037A
Application granted
Publication of CN105261037B
Expired - Fee Related
Anticipated expiration


Abstract

The invention discloses a moving target detection method that adapts to complex scenes: 1) perform illumination compensation on the video images; 2) use a Gaussian mixture background modeling method to obtain the background image of each video frame; 3) use the background subtraction principle to obtain the absolute difference image of each frame; 4) use the maximum entropy segmentation principle to obtain the optimal segmentation threshold of the gray-level probability model of each absolute difference image; 5) binarize each absolute difference image with its optimal segmentation threshold to obtain the foreground image; 6) apply morphological processing with structuring elements of different shapes; 7) label regions of the foreground image with a connected-component labeling algorithm and lock each labeled moving target with a rectangular bounding box. The method achieves good accuracy and robustness of adaptive moving target detection under different complex scenes such as drastic global illumination changes, background interference, and relative motion, and improves the performance of target detection.

Description

An Adaptive Moving Object Detection Method in Complex Scenes

Technical Field

The present invention relates to intelligent video surveillance technology, and in particular to a moving target detection method that adapts to complex scenes; it belongs to the technical field of image processing.

Background Art

Moving target detection is one of the key technologies in the field of intelligent video surveillance and is the basis for subsequent research such as target identification, tracking, and behavior analysis. Commonly used moving target detection techniques include the optical flow method, the inter-frame difference method, and the background subtraction method. The optical flow method estimates the motion of image pixels between consecutive frames; because it considers only individual pixels and does not associate them with moving targets, it has difficulty accurately locating targets with irregular contours, and its computation is complex. The inter-frame difference method adapts well to scene changes, especially scenes with changing illumination, but it is sensitive to environmental noise. The extracted target region is the union ("or") of the target's positions in two consecutive frames and is therefore larger than the actual target region; if there is no significant motion in the scene, the overlapping part of the target between the two frames cannot be detected, or the detected region contains large holes, so the moving target cannot be extracted completely. The key to the background subtraction method lies in background modeling and threshold selection; its basic principle is to subtract the background image from the current frame and apply a threshold to obtain the moving target region. Traditional Gaussian background modeling, mean background modeling, and median background modeling are easily affected by weather changes, sudden illumination changes, background disturbance, and relative motion between the camera and the target. In addition, a fixed threshold is not adaptive: if it is too low, image noise is not suppressed; if it is too high, useful changes in the image are ignored, and for large moving targets of uniform color, holes may appear inside the target so that it cannot be extracted completely. Although background subtraction detects targets well when the background is static and the scene is ideal, real scenes are complex, and factors such as weather changes, sudden global illumination changes, background disturbance, and camera-target relative motion easily make moving target detection inaccurate.

Summary of the Invention

In view of the above deficiencies of the prior art, the object of the present invention is to provide a moving target detection method that adapts to complex scenes. The method achieves good accuracy and robustness of adaptive moving target detection under different complex scenes such as drastic global illumination changes, background interference, and relative motion; it improves detection performance in complex scenes and provides a more robust basis for subsequent processing.

To achieve the object of the present invention, the following technical scheme is adopted.

A moving target detection method that adapts to complex scenes, with the following steps:

1) Acquire video images and perform illumination compensation on them to overcome the influence of sudden global illumination changes.

2) Use a Gaussian mixture background modeling method to obtain the background image corresponding to each video frame.

3) Based on the extracted background image, use the background subtraction principle to obtain the absolute difference image of each frame, and apply median filtering to weaken the influence of noise.

4) Use the maximum entropy segmentation principle to obtain the optimal segmentation threshold corresponding to the gray-level probability model of each filtered absolute difference image.

5) Binarize each filtered absolute difference image with its corresponding optimal segmentation threshold to obtain the foreground image.

6) On the basis of the foreground image obtained in step 5), apply morphological processing with structuring elements of different shapes to remove small noise and fill holes in parts of the moving target regions: first perform one erosion with a 3×3 cross-shaped structuring element to remove small noise, then perform two dilations with a 5×3 element, followed by one more erosion.

7) Use a connected-component labeling algorithm to label regions of the foreground image after the morphological processing of step 6), and lock each labeled moving target with a rectangular bounding box.

The illumination compensation of step 1) is performed as follows.

Let I(t) denote the input video frame and δ the maximum global illumination change allowed between two frames. First compute the average pixel value $\bar{V}(t)$ of each frame of the video sequence, then apply compensation according to the following rule:

$$|\Delta V| = \left|\bar{V}(t) - \bar{V}(t-1)\right| > \delta$$

$$\bar{I}(t) = I(t) - \operatorname{sgn}(\Delta V)\left(|\Delta V| - \delta\right)$$

where sgn(·) denotes the sign function and $\bar{I}(t)$ denotes the compensated frame.

The optimal segmentation threshold of step 4) is obtained as follows.

Let I(x,y) be an image of size M×N, where I(x,y) denotes the pixel gray value at coordinate (x,y) and gray values range over 0 to (L-1). Let DF(x,y) be the filtered absolute difference image of step 3), and let $n_i$ be the number of pixels of DF(x,y) with gray value i, so that the total number of pixels is $N = \sum_{i=0}^{L-1} n_i$. Let $p_i$ denote the probability of gray value i; then:

$$p_i = n_i / N, \quad i = 0, 1, 2, \ldots, L-1;$$

A candidate segmentation threshold T then divides the pixel gray levels into two classes C0 and C1, where C0 represents the target object and C1 the background object, i.e., C0 = {0, 1, ..., T} and C1 = {T+1, T+2, ..., L-1}. The gray-value probability distributions corresponding to C0 and C1 are:

$$C_0:\ \frac{p_0}{P_D},\ \frac{p_1}{P_D},\ \frac{p_2}{P_D},\ \ldots,\ \frac{p_T}{P_D};$$

$$C_1:\ \frac{p_{T+1}}{1-P_D},\ \frac{p_{T+2}}{1-P_D},\ \ldots,\ \frac{p_{L-1}}{1-P_D};$$

where L is the number of gray levels and $P_D = \sum_{i=0}^{T} p_i$. The entropies of C0 and C1 are then given by:

$$C_0:\ H_0 = -\sum_{i=0}^{T} \frac{p_i}{P_D}\,\log\frac{p_i}{P_D};$$

$$C_1:\ H_1 = -\sum_{i=T+1}^{L-1} \frac{p_i}{1-P_D}\,\log\frac{p_i}{1-P_D};$$

On the basis of the entropies of C0 and C1, the total posterior entropy H is:

$$H = H_0 + H_1;$$

The gray level at which the entropy criterion attains its maximum is the optimal segmentation threshold THR of the maximum entropy algorithm:

$$THR = \underset{0<t<L}{\arg\max}\ H;$$

The filtered absolute difference image DF(x,y) is binarized with the obtained optimal segmentation threshold THR to produce the foreground image FI(x,y):

$$FI(x,y) = \begin{cases} 255, & DF(x,y) \ge THR \\ 0, & \text{otherwise.} \end{cases}$$

The specific method of step 2), extracting the background image by Gaussian mixture background modeling, is as follows.

A Gaussian mixture model of a pixel X is constructed from K single Gaussian probability models, as shown in formula (3):

$$p(X_t) = \sum_{i=1}^{K} w_{i,t}\cdot\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) \qquad (3)$$

where $p(X_t)$ is the probability of pixel value $X_t$ at time t; $w_{i,t}$ is the weight of the i-th Gaussian model at time t, the weights summing to 1; K is the total number of Gaussian models, taken as 3-5; $\eta(X_t, \mu_{i,t}, \Sigma_{i,t})$ is the i-th Gaussian model at time t, with mean $\mu_{i,t}$ and covariance matrix $\Sigma_{i,t}$; and n is the dimensionality, see formula (4):

$$\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) = \frac{1}{(2\pi)^{n/2}\,|\Sigma_{i,t}|^{1/2}}\, e^{-\frac{1}{2}(X_t-\mu_{i,t})^{T}\Sigma_{i,t}^{-1}(X_t-\mu_{i,t})} \qquad (4)$$

The Gaussian mixture background model is matched and updated as follows.

Model matching compares the pixel value X of the current video frame with the existing K Gaussian models; if the i-th Gaussian model satisfies formula (5), the current pixel value matches it, otherwise it does not:

$$|X_t - \mu_{i,t-1}| < 2.5\cdot\sigma_{i,t-1} \qquad (5)$$

If no match succeeds, a new Gaussian distribution model is created using the mean of the current video frame and a large initial variance.

According to the matching result, the model is updated by formula (6):

$$\begin{cases} \mu_t = (1-\alpha)\cdot\mu_{t-1} + \alpha\cdot X_t \\ \sigma_t^2 = (1-\alpha)\cdot\sigma_{t-1}^2 + \alpha\cdot(\mu_t - X_t)^2 \\ w_{i,t} = (1-\alpha)\cdot w_{i,t-1} + \alpha\cdot M_{i,t} \end{cases} \qquad (6)$$

where α, called the learning rate, is the rate at which the current frame is embedded into the background model; $M_{i,t} = 1$ if the model matches and 0 otherwise, in which case μ and σ² remain unchanged.

Since Gaussian distributions with small $\Sigma_{i,t}$ and large weights are more likely to approximate the background pixel distribution, the K Gaussian models of each pixel are sorted in decreasing order of w/σ, and the first B distributions are taken as the background to form the background image BI, see formula (7):

$$B = \underset{B}{\arg\min}\left(\sum_{k=1}^{B} w_k > T\right) \qquad (7)$$

where T is the threshold set for the background model, with T in the range [0.7, 0.8].

Compared with existing methods, the present invention has the following beneficial effects.

1) No preset video background image is required.

2) Establishing the background model with illumination compensation and a Gaussian mixture model effectively overcomes the influence of sudden illumination changes, relative camera motion during imaging, and background disturbance, yielding a more robust background image.

3) The invention introduces a maximum entropy segmentation threshold computed separately for each absolute difference image, so the thresholds may differ across images, whereas existing methods use one fixed threshold for all absolute difference images. This solves the problem that a fixed threshold is not adaptive to the video images of the different complex scenes encountered in practice.

4) The method achieves good accuracy and robustness in different complex scenes such as sudden illumination changes, background interference, and relative motion.

Brief Description of the Drawings

Figure 1 - Overall framework of the moving target detection method for complex scenes of the present invention.

Figure 2 - Flowchart of the Gaussian mixture background modeling of the present invention.

Figure 3 - Schematic diagram of Step 2 of the present invention.

Detailed Description

The general idea of the present invention is as follows. First, considering the degree of illumination variation, an illumination compensation method is introduced to reduce the influence of illumination changes on subsequent target detection. Second, since the key to background subtraction is background modeling and threshold selection, Gaussian mixture background modeling is used to extract the background image, overcoming the influence of a dynamic background on subsequent detection; on this basis, the absolute difference image is obtained by the background subtraction principle and first median-filtered to weaken noise, and, to remedy the fixed threshold of the original background subtraction method, a maximum entropy segmentation method is introduced to extract thresholds adaptively for video images of different complex scenes. Third, because the obtained foreground image contains small noise and some regions of the same target are disconnected, morphological processing with structuring elements of different shapes is applied to remove small noise and fill holes in parts of the moving target regions. Finally, a connected-component labeling algorithm marks the foreground objects, and moving targets are locked according to the size of the connected components.

The specific technical scheme of the present invention is as follows; its principle is shown in Figure 1.

Step 1: Acquire the detection video and establish the background model using illumination compensation and a Gaussian mixture model to obtain a more robust background image. The specific process of obtaining the background image is as follows.

(1) Acquire the video sequence images and first perform illumination compensation on them to overcome the interference caused by sudden global illumination changes.

Let I(t) denote the input video frame and δ the maximum global illumination change allowed between two frames. First compute the average pixel value $\bar{V}(t)$ of each frame of the video sequence, then apply compensation according to the following rule:

$$|\Delta V| = \left|\bar{V}(t) - \bar{V}(t-1)\right| > \delta \qquad (1)$$

$$\bar{I}(t) = I(t) - \operatorname{sgn}(\Delta V)\left(|\Delta V| - \delta\right) \qquad (2)$$

where sgn(·) denotes the sign function and $\bar{I}(t)$ denotes the compensated frame.
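The following Python sketch illustrates this compensation rule; the function name, the use of NumPy, and the default value of δ are illustrative assumptions, not part of the patent.

```python
import numpy as np

def illumination_compensate(frame, prev_mean, delta=10.0):
    """A minimal sketch of Eqs. (1)-(2); `delta` (the patent's δ) is an assumed value."""
    img = frame.astype(np.float64)
    cur_mean = img.mean()                          # average pixel value V(t)
    dV = cur_mean - prev_mean                      # ΔV = V(t) - V(t-1)
    if abs(dV) > delta:                            # Eq. (1): change exceeds δ
        img -= np.sign(dV) * (abs(dV) - delta)     # Eq. (2): pull frame back toward δ
        img = np.clip(img, 0, 255)
    return img.astype(np.uint8), cur_mean          # compensated frame and V(t)
```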

(2) On this basis, the background image is extracted by the Gaussian mixture background modeling method, which is applicable to dynamic scenes such as relative camera motion during imaging, background disturbance, and weather changes. The flow of the Gaussian mixture background modeling is shown in Figure 2.

The Gaussian mixture background model is an extension of the single Gaussian model and can approximate a probability distribution of any shape. In this model, the variation of a pixel value across the video image sequence is treated as a random process satisfying a Gaussian distribution, and a Gaussian mixture model of a pixel X is constructed from K single Gaussian probability models, as shown in formula (3):

$$p(X_t) = \sum_{i=1}^{K} w_{i,t}\cdot\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) \qquad (3)$$

where $p(X_t)$ is the probability of pixel value $X_t$ at time t; $w_{i,t}$ is the weight of the i-th Gaussian model at time t, the weights summing to 1; K is the total number of Gaussian models, generally taken as 3-5; $\eta(X_t, \mu_{i,t}, \Sigma_{i,t})$ is the i-th Gaussian model at time t, with mean $\mu_{i,t}$ and covariance matrix $\Sigma_{i,t}$; and n is the dimensionality, see formula (4):

$$\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) = \frac{1}{(2\pi)^{n/2}\,|\Sigma_{i,t}|^{1/2}}\, e^{-\frac{1}{2}(X_t-\mu_{i,t})^{T}\Sigma_{i,t}^{-1}(X_t-\mu_{i,t})} \qquad (4)$$

The Gaussian mixture background model mainly involves matching and updating, carried out as follows.

Model matching compares the pixel value X of the current video frame with the existing K Gaussian models; if the i-th Gaussian model satisfies formula (5), the current pixel value matches it, otherwise it does not:

$$|X_t - \mu_{i,t-1}| < 2.5\cdot\sigma_{i,t-1} \qquad (5)$$

If no match succeeds, a new Gaussian distribution model is created using the mean of the current video frame and a large initial variance.

According to the matching result, the model is updated by the following formula (6):

$$\begin{cases} \mu_t = (1-\alpha)\cdot\mu_{t-1} + \alpha\cdot X_t \\ \sigma_t^2 = (1-\alpha)\cdot\sigma_{t-1}^2 + \alpha\cdot(\mu_t - X_t)^2 \\ w_{i,t} = (1-\alpha)\cdot w_{i,t-1} + \alpha\cdot M_{i,t} \end{cases} \qquad (6)$$

where α, called the learning rate, is the rate at which the current frame is embedded into the background model; $M_{i,t} = 1$ if the model matches and 0 otherwise, in which case μ and σ² remain unchanged.

Since Gaussian distributions with small $\Sigma_{i,t}$ and large weights are more likely to approximate the background pixel distribution, the K Gaussian models of each pixel are sorted in decreasing order of w/σ, and the first B distributions are taken as the background to form the background image BI, see formula (7):

$$B = \underset{B}{\arg\min}\left(\sum_{k=1}^{B} w_k > T\right) \qquad (7)$$

where T is the threshold set for the background model. If T is small, the model degenerates into a single Gaussian probability distribution model; if T is large, it can represent a more complex background model. Extensive experiments show that the best range for T is [0.7, 0.8].
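As a hedged illustration, OpenCV's MOG2 background subtractor follows the same Stauffer-Grimson framework as formulas (3)-(7) (per-pixel Gaussian mixtures, 2.5σ matching, learning rate α) and can stand in for this step; the parameter values and the input file name below are assumptions mapped loosely from the text, not the patent's own implementation.

```python
import cv2

mog = cv2.createBackgroundSubtractorMOG2(history=200,
                                         varThreshold=2.5 ** 2,  # Eq. (5): 2.5σ match
                                         detectShadows=False)
mog.setNMixtures(5)            # K Gaussians per pixel (patent: 3-5)
mog.setBackgroundRatio(0.75)   # threshold T of Eq. (7) (patent: [0.7, 0.8])

cap = cv2.VideoCapture("input.avi")          # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mog.apply(frame, learningRate=0.01)      # α, the learning rate (assumed value)
    bg = mog.getBackgroundImage()            # background image BI for this frame
cap.release()
```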

Step 2: Obtain the absolute difference image D(x,y) using the background subtraction principle, and apply median filtering to it. The basic principle of background subtraction is to take the absolute difference between the current frame and the background image, as shown in formula (8):

$$D(x,y) = |I(x,y) - BI(x,y)| \qquad (8)$$

where I(x,y) is the current video frame and BI(x,y) is the background image obtained in Step 1. The schematic diagram of Step 2 is shown in Figure 3.
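A minimal sketch of this step, assuming single-channel uint8 inputs; the function name and the 3×3 median kernel size are assumptions, since the patent does not fix the kernel.

```python
import cv2

def absolute_difference(frame_gray, bg_gray):
    """Eq. (8) followed by median filtering: returns DF(x, y)."""
    diff = cv2.absdiff(frame_gray, bg_gray)  # D(x,y) = |I(x,y) - BI(x,y)|
    return cv2.medianBlur(diff, 3)           # weaken noise (kernel size assumed)
```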

Step 3: Adaptively obtain the optimal threshold of each absolute difference image by the maximum entropy segmentation threshold method, and binarize to obtain the foreground image. The specific process of obtaining the foreground image is as follows.

Let I(x,y) be an image of size M×N, where I(x,y) denotes the pixel gray value at coordinate (x,y) and gray values range over 0 to (L-1). Let DF(x,y) be the filtered absolute difference image obtained through Step 2, and let $n_i$ be the number of pixels of DF(x,y) with gray value i, so that the total number of pixels is $N = \sum_{i=0}^{L-1} n_i$. Let $p_i$ denote the probability of gray value i; then:

$$p_i = n_i / N, \quad i = 0, 1, 2, \ldots, L-1 \qquad (9)$$

A candidate segmentation threshold T then divides the pixel gray levels into two classes C0 and C1, where C0 represents the target object and C1 the background object, i.e., C0 = {0, 1, ..., T} and C1 = {T+1, T+2, ..., L-1}. The gray-value probability distributions corresponding to C0 and C1 are:

$$C_0:\ \frac{p_0}{P_D},\ \frac{p_1}{P_D},\ \frac{p_2}{P_D},\ \ldots,\ \frac{p_T}{P_D} \qquad (10)$$

$$C_1:\ \frac{p_{T+1}}{1-P_D},\ \frac{p_{T+2}}{1-P_D},\ \ldots,\ \frac{p_{L-1}}{1-P_D} \qquad (11)$$

where L is the number of gray levels and $P_D = \sum_{i=0}^{T} p_i$. The entropies of C0 and C1 are given by formulas (12) and (13), respectively:

$$C_0:\ H_0 = -\sum_{i=0}^{T} \frac{p_i}{P_D}\,\log\frac{p_i}{P_D} \qquad (12)$$

$$C_1:\ H_1 = -\sum_{i=T+1}^{L-1} \frac{p_i}{1-P_D}\,\log\frac{p_i}{1-P_D} \qquad (13)$$

On the basis of the entropies of C0 and C1, the total posterior entropy H is:

$$H = H_0 + H_1 \qquad (14)$$

The gray level at which the entropy criterion attains its maximum is the optimal threshold THR of the maximum entropy algorithm, as shown in formula (15):

$$THR = \underset{0<t<L}{\arg\max}\ H \qquad (15)$$

The difference image DF(x,y) obtained in Step 2 is binarized with the obtained optimal threshold THR to produce the foreground image FI(x,y), as shown in formula (16):

$$FI(x,y) = \begin{cases} 255, & DF(x,y) \ge THR \\ 0, & \text{otherwise} \end{cases} \qquad (16)$$
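A sketch of the maximum entropy threshold and the binarization of Eq. (16); the function names are illustrative, and L = 256 gray levels is assumed for uint8 images.

```python
import numpy as np

def max_entropy_threshold(df):
    """Return the gray level THR maximizing H = H0 + H1 (Eqs. (9)-(15))."""
    hist = np.bincount(df.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                  # p_i = n_i / N, Eq. (9)
    P = np.cumsum(p)                       # cumulative probability P_D(t)
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):                # candidate thresholds 0 < t < L
        pd = P[t]
        if pd <= 0.0 or pd >= 1.0:         # skip degenerate splits
            continue
        c0 = p[:t + 1] / pd                # class C0 distribution, Eq. (10)
        c1 = p[t + 1:] / (1.0 - pd)        # class C1 distribution, Eq. (11)
        h0 = -np.sum(c0[c0 > 0] * np.log(c0[c0 > 0]))   # Eq. (12)
        h1 = -np.sum(c1[c1 > 0] * np.log(c1[c1 > 0]))   # Eq. (13)
        if h0 + h1 > best_h:               # maximize H, Eqs. (14)-(15)
            best_h, best_t = h0 + h1, t
    return best_t

def binarize(df):
    """Eq. (16): foreground image FI(x, y) from DF(x, y)."""
    thr = max_entropy_threshold(df)
    return np.where(df >= thr, 255, 0).astype(np.uint8)
```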

Step 4: Apply morphological operations to the foreground image obtained in Step 3 to remove the influence of small noise and fill holes in parts of the moving target regions.

Morphological processing removes the influence of small noise and fills holes in parts of the moving target regions. First, one erosion with a 3×3 cross-shaped structuring element removes some small noise; then two dilations with a 5×3 element are performed, followed by one more erosion. A larger kernel is used in the vertical direction to compensate for the common disconnection between the head and torso of pedestrians, the typical moving targets.
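A sketch of this morphological sequence; note that OpenCV kernel sizes are given as (width, height), so the 5×3 kernel of the text becomes (3, 5). The rectangular shape of the 5×3 element and its reuse for the final erosion are assumptions, since the patent states only the kernel sizes.

```python
import cv2

# `foreground` is the binary image FI(x, y) from Step 3.
cross3 = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
rect53 = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 5))  # taller than wide

cleaned = cv2.erode(foreground, cross3, iterations=1)  # remove small noise
cleaned = cv2.dilate(cleaned, rect53, iterations=2)    # bridge gaps, fill holes
cleaned = cv2.erode(cleaned, rect53, iterations=1)     # restore target extent
```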

Step 5: The connected-component labeling algorithm labels regions of the foreground image, and each labeled moving target is locked with a rectangular bounding box.
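A sketch of the labeling step; the minimum-area filter `min_area` is an assumption used here to discard residual noise blobs when sizing the connected components.

```python
import cv2

# `cleaned` is the morphologically processed foreground; `frame` is the color frame.
num, labels, stats, _ = cv2.connectedComponentsWithStats(cleaned, connectivity=8)
min_area = 100                               # assumed size filter
for i in range(1, num):                      # label 0 is the background
    x, y, w, h, area = stats[i]
    if area >= min_area:                     # lock the target with a rectangle
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```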

From the above description it can be seen that the present invention mainly solves the following problems.

1. The background modeling problem

The invention establishes the background model with illumination compensation and a Gaussian mixture model, effectively overcoming the influence of sudden global illumination changes, relative camera motion during imaging, and background disturbance.

2. The fixed threshold problem

The invention introduces the maximum entropy segmentation threshold, computing a corresponding threshold for each absolute difference image. This solves the problem that a fixed threshold is not adaptive to the video images of the different complex scenes encountered in practice. For example, if the threshold is too low, image noise is not suppressed; if it is too high, useful changes in the image are ignored, and for large moving targets of uniform color, holes may appear inside the target so that it cannot be extracted completely.

3. Good accuracy and robustness of moving target detection

The invention first establishes the background model with illumination compensation and a Gaussian mixture model, then obtains the absolute difference image by the background subtraction principle, adaptively obtains the optimal threshold of the absolute difference image by the maximum entropy segmentation threshold method, binarizes to obtain the foreground image, applies morphological operations to the foreground image to remove small noise and fill holes in parts of the moving target regions, and finally labels regions of the foreground image with the connected-component labeling algorithm and locks each labeled moving target with a rectangular bounding box. The method achieves good accuracy and robustness of moving target detection in different complex scenes such as global illumination changes, background interference, and relative motion.

Finally, it should be noted that the above embodiments are intended only to illustrate, not limit, the technical scheme of the present invention. Although the invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that modifications or equivalent replacements may be made to the technical scheme of the invention without departing from its spirit and scope, and all such modifications shall fall within the scope of the claims of the invention.

Claims (4)

1. A moving target detection method that adapts to complex scenes, characterized in that the steps are as follows:

1) acquire video images and perform illumination compensation on them to overcome the influence of sudden global illumination changes;

2) use a Gaussian mixture background modeling method to obtain the background image corresponding to each video frame;

3) based on the extracted background image, use the background subtraction principle to obtain the absolute difference image of each frame, and apply median filtering to weaken the influence of noise;

4) use the maximum entropy segmentation principle to obtain the optimal segmentation threshold corresponding to the gray-level probability model of each filtered absolute difference image;

5) binarize each filtered absolute difference image with its corresponding optimal segmentation threshold to obtain the foreground image;

6) on the basis of the foreground image obtained in step 5), apply morphological processing with structuring elements of different shapes to remove small noise and fill holes in parts of the moving target regions: first perform one erosion with a 3×3 cross-shaped structuring element to remove small noise, then perform two dilations with a 5×3 element, followed by one more erosion;

7) use a connected-component labeling algorithm to label regions of the foreground image after the morphological processing of step 6), and lock each labeled moving target with a rectangular bounding box.

2. The moving target detection method for complex scenes according to claim 1, characterized in that the illumination compensation of step 1) is performed as follows: let I(t) denote the input video frame and δ the maximum global illumination change allowed between two frames; first compute the average pixel value $\bar{V}(t)$ of each frame, then apply compensation according to the following rule:

$$|\Delta V| = \left|\bar{V}(t) - \bar{V}(t-1)\right| > \delta$$

$$\bar{I}(t) = I(t) - \operatorname{sgn}(\Delta V)\left(|\Delta V| - \delta\right)$$

where sgn(·) denotes the sign function and $\bar{I}(t)$ denotes the compensated frame.

3. The moving target detection method for complex scenes according to claim 1, characterized in that the optimal segmentation threshold of step 4) is obtained as follows: let I(x,y) be an image of size M×N, where I(x,y) denotes the pixel gray value at coordinate (x,y) and gray values range over 0 to (L-1); let DF(x,y) be the filtered absolute difference image of step 3), and let $n_i$ be the number of pixels of DF(x,y) with gray value i, so that the total number of pixels is $N = \sum_{i=0}^{L-1} n_i$; let $p_i$ denote the probability of gray value i, then:

$$p_i = n_i / N, \quad i = 0, 1, 2, \ldots, L-1;$$

a candidate segmentation threshold T then divides the pixel gray levels into two classes C0 and C1, where C0 represents the target object and C1 the background object, i.e., C0 = {0, 1, ..., T} and C1 = {T+1, T+2, ..., L-1}; the gray-value probability distributions corresponding to C0 and C1 are:

$$C_0:\ \frac{p_0}{P_D},\ \frac{p_1}{P_D},\ \frac{p_2}{P_D},\ \ldots,\ \frac{p_T}{P_D};$$

$$C_1:\ \frac{p_{T+1}}{1-P_D},\ \frac{p_{T+2}}{1-P_D},\ \ldots,\ \frac{p_{L-1}}{1-P_D};$$

where L is the number of gray levels and $P_D = \sum_{i=0}^{T} p_i$; the entropies of C0 and C1 are then:

$$C_0:\ H_0 = -\sum_{i=0}^{T} \frac{p_i}{P_D}\,\log\frac{p_i}{P_D};$$

$$C_1:\ H_1 = -\sum_{i=T+1}^{L-1} \frac{p_i}{1-P_D}\,\log\frac{p_i}{1-P_D};$$

on the basis of the entropies of C0 and C1, the total posterior entropy H is:

$$H = H_0 + H_1;$$

the gray level at which the entropy criterion attains its maximum is the optimal segmentation threshold THR of the maximum entropy algorithm:

$$THR = \underset{0<t<L}{\arg\max}\ H;$$

the filtered absolute difference image DF(x,y) is binarized with the obtained optimal segmentation threshold THR to produce the foreground image FI(x,y):

$$FI(x,y) = \begin{cases} 255, & DF(x,y) \ge THR \\ 0, & \text{otherwise.} \end{cases}$$

4. The moving target detection method for complex scenes according to claim 1, characterized in that the specific method of step 2), extracting the background image by Gaussian mixture background modeling, is as follows: a Gaussian mixture model of a pixel X is constructed from K single Gaussian probability models, as shown in formula (3):

$$p(X_t) = \sum_{i=1}^{K} w_{i,t}\cdot\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) \qquad (3)$$

where $p(X_t)$ is the probability of pixel value $X_t$ at time t; $w_{i,t}$ is the weight of the i-th Gaussian model at time t, the weights summing to 1; K is the total number of Gaussian models, taken as 3-5; $\eta(X_t, \mu_{i,t}, \Sigma_{i,t})$ is the i-th Gaussian model at time t, with mean $\mu_{i,t}$ and covariance matrix $\Sigma_{i,t}$; and n is the dimensionality, see formula (4):

$$\eta(X_t, \mu_{i,t}, \Sigma_{i,t}) = \frac{1}{(2\pi)^{n/2}\,|\Sigma_{i,t}|^{1/2}}\, e^{-\frac{1}{2}(X_t-\mu_{i,t})^{T}\Sigma_{i,t}^{-1}(X_t-\mu_{i,t})} \qquad (4)$$

the Gaussian mixture background model is matched and updated as follows: model matching compares the pixel value X of the current video frame with the existing K Gaussian models; if the i-th Gaussian model satisfies formula (5), the current pixel value matches it, otherwise it does not:

$$|X_t - \mu_{i,t-1}| < 2.5\cdot\sigma_{i,t-1} \qquad (5)$$

if no match succeeds, a new Gaussian distribution model is created using the mean of the current video frame and a large initial variance; according to the matching result, the model is updated by formula (6):

$$\begin{cases} \mu_t = (1-\alpha)\cdot\mu_{t-1} + \alpha\cdot X_t \\ \sigma_t^2 = (1-\alpha)\cdot\sigma_{t-1}^2 + \alpha\cdot(\mu_t - X_t)^2 \\ w_{i,t} = (1-\alpha)\cdot w_{i,t-1} + \alpha\cdot M_{i,t} \end{cases} \qquad (6)$$

where α, called the learning rate, is the rate at which the current frame is embedded into the background model; $M_{i,t} = 1$ if the model matches and 0 otherwise, in which case μ and σ² remain unchanged; since Gaussian distributions with small $\Sigma_{i,t}$ and large weights are more likely to approximate the background pixel distribution, the K Gaussian models of each pixel are sorted in decreasing order of w/σ, and the first B distributions are taken as the background to form the background image BI, see formula (7):

$$B = \underset{B}{\arg\min}\left(\sum_{k=1}^{B} w_k > T\right) \qquad (7)$$

where T is the threshold set for the background model, with T in the range [0.7, 0.8].
CN201510645189.4A | 2015-10-08 | 2015-10-08 | A kind of moving target detecting method of adaptive complex scene | Expired - Fee Related | CN105261037B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201510645189.4A (CN105261037B) | 2015-10-08 | 2015-10-08 | A kind of moving target detecting method of adaptive complex scene

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201510645189.4A (CN105261037B) | 2015-10-08 | 2015-10-08 | A kind of moving target detecting method of adaptive complex scene

Publications (2)

Publication Number | Publication Date
CN105261037A | 2016-01-20
CN105261037B (en) | 2018-11-02

Family

ID=55100708

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201510645189.4A (CN105261037B, Expired - Fee Related) | A kind of moving target detecting method of adaptive complex scene | 2015-10-08 | 2015-10-08

Country Status (1)

Country | Link
CN (1) | CN105261037B (en)

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Meng Jiecheng: "Research on Moving Target Detection and Its DSP Implementation", China Master's Theses Full-text Database, Information Science and Technology (Monthly) *
Yang Fan et al. (eds.): "Digital Image Processing and Analysis", Beijing University of Aeronautics and Astronautics Press, Beijing, 31 October 2007 *
Yang Lin: "A Moving Target Detection Method Adapting to Sudden Illumination Changes", Technology Innovation and Application *
Yan He et al.: "A New Particle Filter Target Tracking Method Based on Feature Fusion", Journal of Optoelectronics·Laser *
Chen Peng: "A New Maximum Entropy Threshold Image Segmentation Method", Computer Science *
Gao Kailiang et al.: "A Pixel-Classification Moving Target Detection Method under a Gaussian Mixture Background Model", Journal of Nanjing University (Natural Science) *

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106205217B (en) * | 2016-06-24 | 2018-07-13 | 华中科技大学 | Unmanned plane automatic testing method based on machine vision and unmanned plane method of control
CN106205217A (en) * | 2016-06-24 | 2016-12-07 | 华中科技大学 | Unmanned plane automatic testing method based on machine vision and unmanned plane method of control
CN106157303A (en) * | 2016-06-24 | 2016-11-23 | 浙江工商大学 | A kind of method based on machine vision to Surface testing
CN106651923A (en) * | 2016-12-13 | 2017-05-10 | 中山大学 | Method and system for video image target detection and segmentation
CN107563272A (en) * | 2017-06-14 | 2018-01-09 | 南京理工大学 | Target matching method in a kind of non-overlapping visual field monitoring system
CN107563272B (en) * | 2017-06-14 | 2023-06-20 | 南京理工大学 | Target matching method in non-overlapping vision monitoring system
CN107507221A (en) * | 2017-07-28 | 2017-12-22 | 天津大学 | With reference to frame difference method and the moving object detection and tracking method of mixed Gauss model
CN107909608A (en) * | 2017-10-30 | 2018-04-13 | 北京航天福道高技术股份有限公司 | The moving target localization method and device suppressed based on mutual information and local spectrum
CN108376406A (en) * | 2018-01-09 | 2018-08-07 | 公安部上海消防研究所 | A kind of Dynamic Recurrent modeling and fusion tracking method for channel blockage differentiation
CN108550163A (en) * | 2018-04-19 | 2018-09-18 | 湖南理工学院 | Moving target detecting method in a kind of complex background scene
CN108960253A (en) * | 2018-06-27 | 2018-12-07 | 魏巧萍 | A kind of object detection system
CN109166137A (en) * | 2018-08-01 | 2019-01-08 | 上海电力学院 | For shake Moving Object in Video Sequences detection algorithm
CN109145805A (en) * | 2018-08-15 | 2019-01-04 | 深圳市豪恩汽车电子装备股份有限公司 | Moving target detection method and system under vehicle-mounted environment
CN109345472B (en) * | 2018-09-11 | 2021-07-06 | 重庆大学 | An infrared moving small target detection method for complex scenes
CN109345472A (en) * | 2018-09-11 | 2019-02-15 | 重庆大学 | An infrared moving small target detection method for complex scenes
CN109784164A (en) * | 2018-12-12 | 2019-05-21 | 北京达佳互联信息技术有限公司 | Prospect recognition methods, device, electronic equipment and storage medium
CN109784164B (en) * | 2018-12-12 | 2020-11-06 | 北京达佳互联信息技术有限公司 | Foreground identification method and device, electronic equipment and storage medium
CN109727266A (en) * | 2019-01-08 | 2019-05-07 | 青岛一舍科技有限公司 | A method of the target person photo based on the pure view background of video acquisition
CN109978917A (en) * | 2019-03-12 | 2019-07-05 | 黑河学院 | A kind of Dynamic Object Monitoring System monitoring device and its monitoring method
CN110348305A (en) * | 2019-06-06 | 2019-10-18 | 西北大学 | A kind of Extracting of Moving Object based on monitor video
CN110348305B (en) * | 2019-06-06 | 2021-06-25 | 西北大学 | A moving target extraction method based on surveillance video
CN110472569A (en) * | 2019-08-14 | 2019-11-19 | 旭辉卓越健康信息科技有限公司 | A kind of method for parallel processing of personnel detection and identification based on video flowing
CN111723634B (en) * | 2019-12-17 | 2024-04-16 | 中国科学院上海微系统与信息技术研究所 | Image detection method and device, electronic equipment and storage medium
CN111723634A (en) * | 2019-12-17 | 2020-09-29 | 中国科学院上海微系统与信息技术研究所 | An image detection method, device, electronic device and storage medium
CN111311644A (en) * | 2020-01-15 | 2020-06-19 | 电子科技大学 | Moving target detection method based on video SAR
CN111311644B (en) * | 2020-01-15 | 2021-03-30 | 电子科技大学 | Moving target detection method based on video SAR
CN111583279A (en) * | 2020-05-12 | 2020-08-25 | 重庆理工大学 | A Superpixel Image Segmentation Method Based on PCBA
CN111652910A (en) * | 2020-05-22 | 2020-09-11 | 重庆理工大学 | A Target Tracking Algorithm Based on Object Spatial Relationship
CN112446889A (en) * | 2020-07-01 | 2021-03-05 | 龚循安 | Medical video reading method based on ultrasound
CN113052940A (en) * | 2021-03-14 | 2021-06-29 | 西北工业大学 | Space-time associated map real-time construction method based on sonar
CN113052940B (en) * | 2021-03-14 | 2024-03-15 | 西北工业大学 | Space-time correlation map real-time construction method based on sonar
CN113673362A (en) * | 2021-07-28 | 2021-11-19 | 浙江大华技术股份有限公司 | Method and device for determining motion state of object, computer equipment and storage medium
CN114463389A (en) * | 2022-04-14 | 2022-05-10 | 广州联客信息科技有限公司 | Moving target detection method and detection system
CN114463389B (en) * | 2022-04-14 | 2022-07-22 | 广州联客信息科技有限公司 | Moving target detection method and detection system
CN116182871A (en) * | 2023-04-26 | 2023-05-30 | 河海大学 | Sea cable detection robot attitude estimation method based on second-order hybrid filtering
CN117952935A (en) * | 2023-12-27 | 2024-04-30 | 中国长江电力股份有限公司 | Photovoltaic panel shadow-induced hot spot recognition method based on visible light image threshold segmentation
CN120088456A (en) * | 2025-02-14 | 2025-06-03 | 世纪中科(北京)科技有限公司 | An intruder detection method and device for perimeter security system

Also Published As

Publication number | Publication date
CN105261037B (en) | 2018-11-02

Similar Documents

Publication | Publication Date | Title
CN105261037B (en)A kind of moving target detecting method of adaptive complex scene
CN103971386B (en)A kind of foreground detection method under dynamic background scene
CN107169985A (en)A kind of moving target detecting method based on symmetrical inter-frame difference and context update
CN105488811B (en)A kind of method for tracking target and system based on concentration gradient
CN105404847B (en)A kind of residue real-time detection method
CN103530893B (en)Based on the foreground detection method of background subtraction and movable information under camera shake scene
CN103617632B (en)A kind of moving target detecting method of combination neighbor frame difference method and mixed Gauss model
CN107220949A (en)The self adaptive elimination method of moving vehicle shade in highway monitoring video
WO2018095082A1 (en)Rapid detection method for moving target in video monitoring
CN107507221A (en)With reference to frame difference method and the moving object detection and tracking method of mixed Gauss model
CN103679677B (en) A dual-mode image decision-level fusion tracking method based on model mutual update
CN109255326B (en)Traffic scene smoke intelligent detection method based on multi-dimensional information feature fusion
CN104318266B (en)A kind of image intelligent analyzes and processes method for early warning
CN102629385A (en)Object matching and tracking system based on multiple camera information fusion and method thereof
CN105590325B (en)High-resolution remote sensing image dividing method based on blurring Gauss member function
CN103544502A (en)High-resolution remote-sensing image ship extraction method based on SVM
CN104537688A (en)Moving object detecting method based on background subtraction and HOG features
CN108256462A (en)A kind of demographic method in market monitor video
CN107123130A (en)Kernel correlation filtering target tracking method based on superpixel and hybrid hash
CN104036526A (en)Gray target tracking method based on self-adaptive window
CN106570885A (en)Background modeling method based on brightness and texture fusion threshold value
CN104318589A (en)ViSAR-based anomalous change detection and tracking method
CN107871315B (en) A video image motion detection method and device
CN106650824B (en)Moving object classification method based on support vector machines
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
CF01 | Termination of patent right due to non-payment of annual fee

Granted publication date: 2018-11-02

Termination date: 2019-10-08

