CN102509414B - Smog detection method based on computer vision - Google Patents

Smog detection method based on computer vision

Info

Publication number
CN102509414B
CN102509414B · CN201110365784A · CN102509414A
Authority
CN
China
Prior art keywords
motion
frame
area
image
ith
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110365784
Other languages
Chinese (zh)
Other versions
CN102509414A (en)
Inventor
桑农
顾舒航
王岳环
宋萌萌
袁志伟
李驰
杜俭
郭敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201110365784A (CN102509414B)
Publication of CN102509414A
Application granted
Publication of CN102509414B
Expired - Fee Related
Anticipated expiration

Abstract

The invention discloses a smoke detection method based on computer vision. It first detects the motion regions in the scene, then obtains an initial attribute score for each motion region from a weighted sum of the region's features, and finally uses inter-frame motion region association to determine whether the motion regions in the video sequence belong to the same target, so that the target can be analyzed as a whole and judged to be smoke or not. The invention uses an inter-frame target association method to determine the relationship between motion regions in successive frames and makes a comprehensive judgment of the moving target's attributes. The method has low complexity and a low false alarm rate, and can detect smoke appearing in the monitored scene promptly and accurately.

Description

A Smoke Detection Method Based on Computer Vision

Technical Field

The invention belongs to the field of computer vision methods and in particular relates to a computer-vision-based smoke detection method, which can be applied to fire alarm monitoring.

Background Art

Traditional fire alarm systems based on smoke detectors have been widely used in fire prevention and control thanks to their high sensitivity to smoke and low cost. However, because of their working principle, namely that the detector must come into contact with smoke of a certain concentration before it can raise an alarm, they cannot be applied to large spaces or open-air environments. In addition, the time it takes for the smoke to spread to the detector lengthens the time before the smoke is discovered, which is unfavorable for early detection of a fire.

Computer vision mainly studies methods for obtaining information from image data. In a fire alarm system based on video surveillance, the content of the video images can be analyzed with computer vision methods to gain a preliminary understanding of the monitored scene without any chemical reaction produced by contact with the smoke, so large spaces and open-air areas can be monitored. At the same time, a video-surveillance-based fire alarm system obtains rich on-site image data and can promptly provide a preliminary judgment of the fire location and size, delivering fire information at the earliest moment and reducing fire losses.

Smoke detection is a problem of detecting and recognizing a specific target within the field of computer vision, and researchers have proposed detection algorithms based on different characteristics of smoke. The smoke detection algorithms currently in practical use mainly include the following:

1) Smoke detection based on color information

Color information is important information in an image. By searching a color image for regions of a specific color, potential target regions can be found and smoke detection can be achieved. However, using color information for smoke detection also has some obvious shortcomings, such as interference from targets of similar color; in addition, whether a suitable color model can be built for smoke of different colors is another important factor limiting the application of color information to smoke detection.

2) Smoke detection based on motion information

The movement of smoke follows specific rules (smoke diffuses upwards). By computing the optical flow in the scene and finding the optical-flow motion characteristics of a target, smoke can be distinguished from targets that do not have these motion characteristics. However, the accuracy of the optical flow computation and the imaging conditions of the monitored area both have a large influence on the accuracy of the smoke detection result.

3) Smoke detection based on wavelet analysis

As an important tool in signal processing, and especially in image processing, wavelet analysis has important applications in many image processing problems. By applying a wavelet transform to the scene image, the wavelet-domain information of the image is obtained and the image can be analyzed in the frequency and spatial domains at the same time. Researchers have studied the differences between smoke regions and non-smoke regions of an image in the wavelet domain and investigated a series of wavelet-based smoke detection methods, such as the relationship between wavelet-domain energy loss and retained energy and the statistics of wavelet coefficients, achieving fairly good results. However, wavelet analysis methods are often aimed only at smoke of a specific form and have difficulty meeting the application requirements of some specific scenarios.

Although researchers have proposed different smoke detection algorithms, the shapes of smoke vary widely, the density and gray level of the smoke produced by different burning materials differ greatly, and the detection backgrounds also differ, so it is currently difficult to find features that describe the smoke in an image well.

Summary of the Invention

The purpose of the present invention is to provide a smoke detection method based on computer vision: the category attribute of each region is first analyzed initially by computing the features of the motion regions in every video frame, and the attributes of the moving targets are then judged comprehensively according to the relationships between motion regions across frames. The method can perform real-time smoke detection over large indoor and outdoor spaces and provides technical support for fire prevention and control in places such as large warehouses.

A smoke detection method based on computer vision, specifically:

detect the motion regions of the t-th frame image, with the index of a motion region denoted i;

extract one or more features of the i-th motion region of the t-th frame;

compute the weighted sum of the features of the i-th motion region of the t-th frame to obtain the attribute score of that motion region;

compute the distances between the i-th motion region of the t-th frame and all motion regions of the (t-1)-th frame image;

determine the minimum of the distances between the i-th motion region of the t-th frame and all motion regions of the (t-1)-th frame image;

if the minimum of the distances between the i-th motion region of the t-th frame and all motion regions of the (t-1)-th frame image is smaller than the minimum distance threshold, update the attribute score of the i-th motion region of the t-th frame according to the attribute score of the motion region of the (t-1)-th frame image corresponding to that minimum;

if the minimum of the distances between the i-th motion region of the t-th frame and all motion regions of the (t-1)-th frame image is greater than or equal to the distance threshold, compute the distances between the i-th motion region of the t-th frame and all motion regions of the (t-2)-th frame image;

determine the minimum of the distances between the i-th motion region of the t-th frame and all motion regions of the (t-2)-th frame image;

if the minimum of the distances between the i-th motion region of the t-th frame and all motion regions of the (t-2)-th frame image is smaller than the minimum distance threshold, update the attribute score of the i-th motion region of the t-th frame according to the attribute score of the motion region of the (t-2)-th frame image corresponding to that minimum;

if the attribute score of the i-th motion region of the t-th frame exceeds the alarm threshold, smoke is deemed to be present.

Further, if the minimum of the distances between the i-th motion region of the t-th frame and all motion regions of the (t-2)-th frame image is greater than or equal to the minimum distance threshold, the attribute score of the i-th motion region of the t-th frame remains unchanged.

Further, the step of updating the attribute score of the i-th motion region of the t-th frame according to the attribute score of the motion region of the (t-1)-th frame image corresponding to that minimum specifically is: the attribute score of the i-th motion region of the t-th frame is $\tilde{p}_i^t = p_i^t + a \times \tilde{p}_{J_i^{\min}}^{t-1}$, where $\tilde{p}_{J_i^{\min}}^{t-1}$ is the attribute score of the motion region of the (t-1)-th frame image corresponding to that minimum and $0.8 \le a \le 0.95$.

Further, the step of updating the attribute score of the i-th motion region of the t-th frame according to the attribute score of the motion region of the (t-2)-th frame image corresponding to that minimum specifically is: the attribute score of the i-th motion region of the t-th frame is $\tilde{p}_i^t = p_i^t + b \times \tilde{p}_{j^{\ast}}^{t-2}$, where $\tilde{p}_{j^{\ast}}^{t-2}$ is the attribute score of the motion region of the (t-2)-th frame image corresponding to that minimum and $0.75 \le b \le 0.9$.

Further, the features of a motion region include: the mean gray level; the average number of mean-crossings within the region over the preceding history frames; the ratio of the region means of the maximum gray-level increase image and the maximum gray-level decrease image over the preceding history frames; the mean and variance, within the region, of the maximum gray-level change image over the preceding history frames; and the ratio of the number of large-gradient pixels in the region to the region area.

Further, the step of computing the distances between the i-th motion region of the t-th frame and all motion regions of the (t-1)-th frame image specifically is:

select $L_i^t$ blocks in the i-th motion region and $L_j^{t-1}$ blocks in the j-th motion region of the (t-1)-th frame image, respectively;

compute the distance between the i-th motion region and the j-th motion region of the (t-1)-th frame image as

$$D_{i,j}^{t,t-1} = \frac{\sum^{L_i^t/2} \mathrm{Rank}_m^{\max}\left(\sum^{L_j^{t-1}/2} \mathrm{Rank}_n^{\max}\left(d_{m,n}^{i,j}\right)\right)}{L_j^{t-1} \times L_i^t / 4},$$

where

$$d_{m,n}^{i,j} = \frac{\left|\mu_m^{t,i} - \mu_n^{t-1,j}\right|}{\lambda_{mean}} + \frac{\left|\sigma_m^{t,i} - \sigma_n^{t-1,j}\right|}{\lambda_{variance}} + \frac{\sqrt{\left(\theta x_m^{t,i} - \theta x_n^{t-1,j}\right)^2 + \left(\theta y_m^{t,i} - \theta y_n^{t-1,j}\right)^2}}{\lambda_{location}},$$

m denotes the m-th block of the i-th motion region of the t-th frame, n denotes the n-th block of the j-th motion region of the (t-1)-th frame image, $\lambda_{mean}$, $\lambda_{variance}$ and $\lambda_{location}$ are weight parameters, $\mu_l^{t,i}$ and $\sigma_l^{t,i}$ are respectively the mean and variance of the l-th block in the i-th motion region of the t-th frame, $(\theta x_l^{t,i}, \theta y_l^{t,i})$ is the position of the l-th block in the i-th motion region of the t-th frame, $\mu_l^{t-1,j}$ and $\sigma_l^{t-1,j}$ are respectively the mean and variance of the l-th block in the j-th motion region of the (t-1)-th frame, $(\theta x_l^{t-1,j}, \theta y_l^{t-1,j})$ is the position of the l-th block in the j-th motion region of the (t-1)-th frame, $\mathrm{Rank}_n^{\max}$ denotes taking, over the variable n, the sum of the largest $L_j^{t-1}/2$ values of $d_{m,n}^{i,j}$, and $\mathrm{Rank}_m^{\max}$ denotes taking, over the variable m, the sum of the largest $L_i^t/2$ values of $\varphi(m)$.

Further, the step of detecting the motion regions of the t-th frame image includes:

a step of generating the moving foreground image: for every pixel $F_t(x,y)$ of the t-th frame image $F_t$, subtract the corresponding pixels $F_{t-\Delta t_1}(x,y)$, $F_{t-\Delta t_2}(x,y)$, $F_{t-\Delta t_3}(x,y)$ of the $(t-\Delta t_1)$-th, $(t-\Delta t_2)$-th and $(t-\Delta t_3)$-th frame images and take absolute values to obtain the differences $diff_{t-\Delta t_1}(x,y)$, $diff_{t-\Delta t_2}(x,y)$, $diff_{t-\Delta t_3}(x,y)$; compare the maximum of the differences with the motion detection threshold $\Delta F$; if it is greater than the threshold $\Delta F$, motion is considered to exist at that point and the pixel $\hat{F}_t(x,y)$ is set to 255, otherwise it is set to 0, thus obtaining the moving foreground image;

a step of filtering the moving foreground image;

a step of labeling connected components.

Further, ΔF takes a value between 10 and 30.

The technical effect of the present invention is as follows: the invention uses an inter-frame target association method to determine the relationship between motion regions in successive frames and judges the attributes of the moving targets comprehensively. The method has low complexity and a low false alarm rate, and can detect smoke appearing in the monitored scene promptly and accurately.

Description of the Drawings

Figure 1 is the overall flowchart of the method of the present invention;

Figure 2 shows screenshots of two video scenes containing smoke;

Figure 3 shows the results of motion region detection for the scenes in Figure 2;

Figure 4 shows the smoke detection results.

Detailed Description

The present invention is described in detail below with reference to a specific example.

Suppose it is necessary to detect whether a smoke region exists in the scene monitored by a video sequence F. Referring to Figure 1, the invention operates as follows:

(1) Moving target detection

Save the first ζ frame images of the video sequence into the image sequence Image_list. Starting from the (ζ+1)-th frame image of the video sequence, the moving targets in the current image $F_t$ can be detected. Moving target detection comprises the following steps:

(1.1) Generating the moving foreground image

Each pixel $F_t(x,y)$ of the current image $F_t$ is subtracted from the corresponding pixels $F_{t-\Delta t_1}(x,y)$, $F_{t-\Delta t_2}(x,y)$, $F_{t-\Delta t_3}(x,y)$ of the $(t-\Delta t_1)$-th, $(t-\Delta t_2)$-th and $(t-\Delta t_3)$-th frame images, and the absolute values are taken to obtain the differences $diff_{t-\Delta t_1}(x,y)$, $diff_{t-\Delta t_2}(x,y)$, $diff_{t-\Delta t_3}(x,y)$. The maximum of the differences is compared with the motion detection threshold $\Delta F$; if it is greater than the threshold, motion is considered to exist at that point and the pixel $\hat{F}_t(x,y)$ is set to 255, otherwise it is set to 0, yielding the moving foreground image, i.e.:

$$\hat{F}_t(x,y) = \begin{cases} 255, & \text{if } \max\left(diff_{t-\Delta t_1}(x,y),\, diff_{t-\Delta t_2}(x,y),\, diff_{t-\Delta t_3}(x,y)\right) > \Delta F \\ 0, & \text{otherwise} \end{cases}$$

where $\Delta F$ is the preset motion detection threshold, set manually according to the quality of the video images, the smoke density that needs to be detected, and so on; its value generally lies between 10 and 30.
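For illustration, a minimal NumPy sketch of this three-frame differencing step; the frame offsets Δt1..Δt3 and the threshold value are example choices, not values fixed by the text:

```python
import numpy as np

def foreground_mask(frames, t, dt=(1, 3, 5), delta_f=20):
    """Step (1.1): a pixel is foreground (255) when the largest absolute
    difference against frames t-dt1, t-dt2, t-dt3 exceeds the motion
    threshold delta_f (typically 10..30). `frames` is a sequence of
    grayscale images; the offsets in `dt` are illustrative choices."""
    ft = frames[t].astype(np.int16)
    diffs = [np.abs(ft - frames[t - d].astype(np.int16)) for d in dt]
    return (np.maximum.reduce(diffs) > delta_f).astype(np.uint8) * 255
```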

(1.2) Filtering the moving foreground image

To eliminate the isolated noise points and disconnected target regions present in the foreground image $\hat{F}_t$ obtained above, a median filter is used in this example to filter $\hat{F}_t$.

Median filtering is a nonlinear signal processing technique based on rank-order statistics that can effectively suppress noise. Its basic principle is to replace the value of a pixel by the median of the sorted values of the pixels in a neighborhood of that pixel, making the values of surrounding pixels closer to the true values and thereby eliminating isolated noise points. In this embodiment, the neighborhood used when median-filtering $\hat{F}_t$ is the 8-neighborhood of the pixel, i.e. the median of the gray values of all pixels in the 8-neighborhood is taken as the filtered result for that pixel. The neighborhood of a pixel $(x,y)$ is defined as follows: the pixel has 4 horizontal and vertical neighbors with coordinates $(x+1,y)$, $(x-1,y)$, $(x,y+1)$, $(x,y-1)$, and these four points are called the 4-neighborhood of $(x,y)$; the 4 diagonal neighbors of $(x,y)$ have the coordinates $(x+1,y+1)$, $(x+1,y-1)$, $(x-1,y+1)$, $(x-1,y-1)$. Together these 8 points are called the 8-neighborhood of $(x,y)$; if $(x,y)$ lies on the image boundary, some points of its 8-neighborhood fall outside the image.
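A small sketch of this filtering step, using SciPy's median filter with a footprint covering the 8 neighbors described above (a plain 3x3 filter that also includes the center pixel behaves almost identically):

```python
import numpy as np
from scipy.ndimage import median_filter

# Footprint covering only the 8 neighbors (center excluded), matching the
# neighborhood described above.
EIGHT_NEIGHBORS = np.array([[1, 1, 1],
                            [1, 0, 1],
                            [1, 1, 1]], dtype=bool)

def denoise_foreground(mask):
    """Median-filter the binary foreground image to suppress isolated noise
    points and reconnect broken target regions."""
    return median_filter(mask, footprint=EIGHT_NEIGHBORS)
```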

(1.3) Connected component labeling:

After the binary image $\hat{F}_t$ has been filtered, pixels whose value is 255 and which lie within each other's 8-neighborhoods are marked with the same value; all pixels carrying the same value in the labeled image then belong to the same connected component. All $N_t$ connected components obtained in this way are saved in the target queue Objetc_list;

if $N_t = 0$, there is no moving target in the current scene; jump to step (4);

if $N_t \neq 0$, moving targets exist in the scene; continue with step (2);
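A sketch of the labeling step using SciPy's 8-connected component labeling; the per-region bounding slices are a convenience for the later per-region processing:

```python
import numpy as np
from scipy.ndimage import label, find_objects

def label_regions(mask):
    """Step (1.3): label 8-connected foreground components. Returns the label
    image, the number of regions N_t, and one bounding slice per region."""
    structure = np.ones((3, 3), dtype=int)      # 8-connectivity
    labels, n_regions = label(mask > 0, structure=structure)
    return labels, n_regions, find_objects(labels)
```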

(2) For the $N_t$ regions obtained in step (1), compute the features within each region to obtain the initial score $p_i^t$ of each region

(2.1) Compute the mean gray level $Mean_i^t$ of the image within the region

$Mean_i^t$ is the brightness of the image within the region and reflects how bright or dark the image is. It is defined as:

$$Mean_i^t = \sum_{(x,y)\in I_i^t} F_t(x,y) \,/\, Area_i^t,$$

where $(x,y)\in I_i^t$ means that the point $(x,y)$ lies within motion region i of the current frame-t image, and $Area_i^t$ is the area of motion region i in the frame-t image;

(2.2) Compute the average mean-crossing count $MCR_i^{t-\zeta,t}$ over the past ζ frame images within the region

$MCR_i^{t-\zeta,t}$ is the average number of mean-crossings within the region and reflects the overall frequency information of the motion in the region. It is defined as:

$$MCR_i^{t-\zeta,t} = \sum_{(x,y)\in I_i^t} MCR^{t-\zeta,t}(x,y) \,/\, Area_i^t,$$

where $MCR^{t-\zeta,t}(x,y)$ is the number of mean-crossings of the point $(x,y)$ within the time range $[t-\zeta, t]$, i.e. the number of times, over the past ζ frame images, that the gray values of two adjacent frames cross the mean gray value $M^{t-\zeta,t}(x,y)$ of that point over all ζ frame images. $MCR^{t-\zeta,t}(x,y)$ is computed as follows: set $MCR^{t-\zeta,t}(x,y) = 0$; then for $\omega = t-\zeta, \ldots, t-1$:

if $\left(F_\omega(x,y) - M^{t-\zeta,t}(x,y)\right) \times \left(M^{t-\zeta,t}(x,y) - F_{\omega+1}(x,y)\right) < 0$, then $MCR^{t-\zeta,t}(x,y) = MCR^{t-\zeta,t}(x,y) + 1$;
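A sketch of the mean-crossing computation for one region, reading a crossing as two consecutive frames lying on opposite sides of the temporal mean of that pixel (the window stack and region mask are assumed inputs):

```python
import numpy as np

def mean_crossing_rate(window, region_mask):
    """Step (2.2): average number of mean-crossings inside motion region i.
    `window` is a (zeta+1, H, W) stack of frames F_{t-zeta}..F_t and
    `region_mask` is a boolean mask of the region in frame t."""
    w = window.astype(np.float32)
    mean_img = w.mean(axis=0)                       # M^{t-zeta,t}(x, y)
    # Count transitions where consecutive frames sit on opposite sides of the mean.
    crossings = ((w[:-1] - mean_img) * (w[1:] - mean_img) < 0).sum(axis=0)
    return crossings[region_mask].sum() / region_mask.sum()
```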

(2.3) Compute, over the past ζ frame images, the maximum gray-level increase image $TCincrease^{t-\zeta,t}$, the maximum gray-level decrease image $TCdecrease^{t-\zeta,t}$ and the maximum gray-level change image $TCchange^{t-\zeta,t}$, and from them compute the statistics $TCquotient_i^{t-\zeta,t}$, $TCmean_i^{t-\zeta,t}$ and $TCvariance_i^{t-\zeta,t}$ of each region;

the maximum gray-level increase (decrease/change) image is the image formed, for each pixel, by the largest gray-level increase (decrease/change) between two adjacent frames over the past ζ frame images;

$$TCincrease^{t-\zeta,t}(x,y) = \max_{q\in[t-\zeta,\,t-1]}\left(F_{q+1}(x,y) - F_q(x,y)\right);$$

$$TCdecrease^{t-\zeta,t}(x,y) = \max_{q\in[t-\zeta,\,t-1]}\left(F_q(x,y) - F_{q+1}(x,y)\right);$$

$$TCchange^{t-\zeta,t}(x,y) = \max_{q\in[t-\zeta,\,t-1]}\left(\left|F_q(x,y) - F_{q+1}(x,y)\right|\right);$$

if $TCincrease^{t-\zeta,t}(x,y)$ (or $TCdecrease^{t-\zeta,t}(x,y)$) is less than 0, i.e. the gray value at the point $(x,y)$ decreases (or increases) continuously over the past ζ frame images, it is set to 0.

$TCquotient_i^{t-\zeta,t}$ is the ratio of the region means of the maximum increase image and the maximum decrease image within the motion region, and reflects how the brightness of the region changes over the ζ frames;

$$TCquotient_i^{t-\zeta,t} = \sum_{(x,y)\in I_i^t} TCincrease^{t-\zeta,t}(x,y) \Big/ \sum_{(x,y)\in I_i^t} TCdecrease^{t-\zeta,t}(x,y)$$

$TCmean_i^{t-\zeta,t}$ and $TCvariance_i^{t-\zeta,t}$ are the mean and variance of the maximum change image $TCchange^{t-\zeta,t}$ within region i, and reflect how unevenly the gray values within the region change over the ζ frames;

(2.4) Compute the ratio $GRADquotient_i^t$ of large-gradient pixels within the region to the region area

$GRADquotient_i^t$ is the proportion of pixels within region i whose gradient exceeds the threshold $\Delta Grad$, relative to the area of the whole region:

$$GRADquotient_i^t = \#\left\{(x,y)\in I_i^t : F_t^{grad}(x,y) > \Delta Grad\right\} \,/\, Area_i^t,$$

where $F_t^{grad}(x,y)$ is the gradient of the image $F_t$ at the point $(x,y)$ and $\#\{(x,y)\in I_i^t : F_t^{grad}(x,y) > \Delta Grad\}$ is the number of pixels within region i satisfying the condition $F_t^{grad}(x,y) > \Delta Grad$;

here $\Delta Grad$ is the gradient threshold. It can be preset according to how much edge information the scene contains, or an adaptive threshold can be set based on certain features of the region. In this example, the average brightness $Mean_i^t$ of the region is used as the gradient threshold, i.e. brighter places are allowed to contain fairly distinct edges, while darker places should not show obvious edge information.

In this example the gradient is computed with the Sobel operator, one of the operators most commonly used in practice for computing digital gradients. Using the standard horizontal and vertical Sobel templates

$$S_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad S_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix},$$

the image $F_t$ is convolved with each template to obtain the two convolution results, which are then combined (for example as the gradient magnitude, the square root of the sum of their squares) to give the gradient image $F_t^{grad}$.
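A sketch of the large-gradient ratio; the Sobel responses are combined here as a Euclidean magnitude, which is one common reading of the gradient image, and the default threshold follows the example's use of the region's mean brightness:

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_quotient(frame, region_mask, grad_threshold=None):
    """Step (2.4): fraction of pixels in region i whose gradient exceeds a
    threshold; by default the threshold is the region's mean brightness."""
    f = frame.astype(np.float32)
    gx = sobel(f, axis=1)                 # horizontal Sobel response
    gy = sobel(f, axis=0)                 # vertical Sobel response
    grad = np.hypot(gx, gy)               # assumed magnitude combination
    if grad_threshold is None:
        grad_threshold = f[region_mask].mean()
    return float((grad[region_mask] > grad_threshold).mean())
```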

(2.5) From the region features computed in steps (2.1)-(2.4), compute the initial attribute score $p_i^t$ of each region

$p_i^t$ is a number reflecting the attributes of target region i, computed from the features $Mean_i^t$, $MCR_i^{t-\zeta,t}$, $TCquotient_i^{t-\zeta,t}$, $TCmean_i^{t-\zeta,t}$, $TCvariance_i^{t-\zeta,t}$ and $GRADquotient_i^t$. The larger $p_i^t$ is, the more likely moving target region i is considered to be smoke. However, $p_i^t$ is not strictly the probability that region i is smoke, since it does not satisfy the non-negativity of a probability function and has not been normalized; therefore $p_i^t$ is called the score of region i. In this example $p_i^t$ is the weighted sum of $Mean_i^t$, $MCR_i^{t-\zeta,t}$, $TCquotient_i^{t-\zeta,t}$, $TCmean_i^{t-\zeta,t}$, $TCvariance_i^{t-\zeta,t}$ and $GRADquotient_i^t$, i.e.:

$$p_i^t = W^T \times Feature,$$

where the T at the upper right of W denotes matrix transposition, Feature is the vector formed from the region's features, $(1, Mean_i^t, MCR_i^{t-\zeta,t}, TCquotient_i^{t-\zeta,t}, TCmean_i^{t-\zeta,t}, TCvariance_i^{t-\zeta,t}, GRADquotient_i^t)^T$, and W is the weight vector of the features, $(\alpha_0, \alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5, \alpha_6)^T$; W can be obtained by training on samples with machine learning methods;
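A sketch of the initial score as a weighted sum of the six region features plus a bias term; the weight values themselves would come from the sample training mentioned above:

```python
import numpy as np

def initial_score(features, weights):
    """Step (2.5): p_i^t = W^T * Feature. `features` is the 6-tuple
    (Mean, MCR, TCquotient, TCmean, TCvariance, GRADquotient) and `weights`
    the 7-element vector (alpha_0 .. alpha_6); the leading 1 is the bias."""
    feature_vec = np.concatenate(([1.0], np.asarray(features, dtype=float)))
    return float(np.dot(np.asarray(weights, dtype=float), feature_vec))
```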

(3) Inter-frame motion region association: determine the relationships between motion regions along the time sequence and obtain the final score of each region

(3.1) For all $N_t$ regions in the current frame, randomly select $L_i^t$ blocks of size 5×5 within each region (in this example the number of blocks is governed by a parameter β with 0 < β < 1), and compute the statistics and position of each of the $L_i^t$ blocks, where the mean, variance and position of the l-th block are denoted $\mu_l^{t,i}$, $\sigma_l^{t,i}$ and $(\theta x_l^{t,i}, \theta y_l^{t,i})$ respectively. The statistics and positions of all the small blocks sampled in target region i are stored in the data structure corresponding to target i in Objetc_list;
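A sketch of the block sampling; the exact rule for how many blocks are drawn per region is not spelled out in the extracted text, so the count used here (proportional to the region area via β) is an assumption:

```python
import numpy as np

def sample_blocks(frame, region_mask, beta=0.05, block=5, rng=None):
    """Step (3.1): randomly sample 5x5 blocks inside a motion region and record
    each block's mean, variance and center position. The block count,
    beta * area / block_area, is an assumed reading of the parameter beta."""
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(region_mask)
    area = len(xs)
    n_blocks = max(1, int(beta * area / (block * block)))
    h, w = frame.shape
    stats = []
    for _ in range(n_blocks):
        k = int(rng.integers(area))                # region pixel used as block center
        y0 = int(np.clip(ys[k] - block // 2, 0, h - block))
        x0 = int(np.clip(xs[k] - block // 2, 0, w - block))
        patch = frame[y0:y0 + block, x0:x0 + block].astype(np.float32)
        stats.append((patch.mean(), patch.var(), (xs[k], ys[k])))
    return stats
```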

(3.2) For all $N_t$ target regions in the current frame, use the $L_i^t$ blocks of target region i to compute the distance (i.e. the feature difference) between the target region and every motion region of the (t-1)-th frame image, forming the inter-frame distance matrix $DMAT^{t,t-1}$ ($N_t$ rows, $N_{t-1}$ columns) between all target regions of the frame-t image and the frame-(t-1) image. The element in row i, column j of $DMAT^{t,t-1}$ is the distance $D_{i,j}^{t,t-1}$ between target region i of the frame-t image and target region j of the frame-(t-1) image.

The inter-frame distance matrix $DMAT^{t,t-1}$ of the motion regions of the two frames has the form:

$$DMAT^{t,t-1} = \begin{bmatrix} D_{1,1}^{t,t-1} & D_{1,2}^{t,t-1} & \cdots & D_{1,N_{t-1}-1}^{t,t-1} & D_{1,N_{t-1}}^{t,t-1} \\ D_{2,1}^{t,t-1} & D_{2,2}^{t,t-1} & \cdots & D_{2,N_{t-1}-1}^{t,t-1} & D_{2,N_{t-1}}^{t,t-1} \\ \vdots & & D_{i,j}^{t,t-1} & & \vdots \\ D_{N_t-1,1}^{t,t-1} & D_{N_t-1,2}^{t,t-1} & \cdots & D_{N_t-1,N_{t-1}-1}^{t,t-1} & D_{N_t-1,N_{t-1}}^{t,t-1} \\ D_{N_t,1}^{t,t-1} & D_{N_t,2}^{t,t-1} & \cdots & D_{N_t,N_{t-1}-1}^{t,t-1} & D_{N_t,N_{t-1}}^{t,t-1} \end{bmatrix}$$

The element in row i, column j of $DMAT^{t,t-1}$, the distance $D_{i,j}^{t,t-1}$ between target region i of the frame-t image and target region j of the frame-(t-1) image, can be obtained by computing the distances between the small blocks randomly sampled in the two target regions:

First define the distance between any two small blocks, taking the distance $d_{m,n}^{i,j}$ between sampled block m in region i of the frame-t image and sampled block n in region j of the frame-(t-1) image as an example:

$$d_{m,n}^{i,j} = \frac{\left|\mu_m^{t,i} - \mu_n^{t-1,j}\right|}{\lambda_{mean}} + \frac{\left|\sigma_m^{t,i} - \sigma_n^{t-1,j}\right|}{\lambda_{variance}} + \frac{\sqrt{\left(\theta x_m^{t,i} - \theta x_n^{t-1,j}\right)^2 + \left(\theta y_m^{t,i} - \theta y_n^{t-1,j}\right)^2}}{\lambda_{location}},$$

where $\lambda_{mean}$, $\lambda_{variance}$ and $\lambda_{location}$ are parameters used to adjust the weights of the mean, variance and position terms. For moving smoke the gray level changes little, the variance of the different parts of the smoke region may be large, and the motion is slow; based on these characteristics of smoke, $\lambda_{mean}$, $\lambda_{variance}$ and $\lambda_{location}$ are chosen in this example as 40, 600 and 4 respectively, and in practical applications the three weights can be adjusted as needed. For the $L_i^t$ and $L_j^{t-1}$ small blocks randomly sampled in regions i and j, computing all pairwise distances gives the inter-region distance matrix $dmat_{i,j}^{t,t-1}$:

$$dmat_{i,j}^{t,t-1} = \begin{bmatrix} d_{1,1}^{i,j} & d_{1,2}^{i,j} & \cdots & d_{1,L_j^{t-1}-1}^{i,j} & d_{1,L_j^{t-1}}^{i,j} \\ d_{2,1}^{i,j} & d_{2,2}^{i,j} & \cdots & d_{2,L_j^{t-1}-1}^{i,j} & d_{2,L_j^{t-1}}^{i,j} \\ \vdots & & d_{m,n}^{i,j} & & \vdots \\ d_{L_i^t-1,1}^{i,j} & d_{L_i^t-1,2}^{i,j} & \cdots & d_{L_i^t-1,L_j^{t-1}-1}^{i,j} & d_{L_i^t-1,L_j^{t-1}}^{i,j} \\ d_{L_i^t,1}^{i,j} & d_{L_i^t,2}^{i,j} & \cdots & d_{L_i^t,L_j^{t-1}-1}^{i,j} & d_{L_i^t,L_j^{t-1}}^{i,j} \end{bmatrix}$$

Through the inter-region distance matrix, the distance between the two motion regions can be obtained:

$$D_{i,j}^{t,t-1} = \frac{\sum^{L_i^t/2} \mathrm{Rank}_m^{\max}\left(\sum^{L_j^{t-1}/2} \mathrm{Rank}_n^{\max}\left(d_{m,n}^{i,j}\right)\right)}{L_j^{t-1} \times L_i^t / 4},$$

where the symbol $\sum^{L_j^{t-1}/2}\mathrm{Rank}_n^{\max}(\cdot)$ denotes summing, over the variable n, the largest $L_j^{t-1}/2$ values of $d_{m,n}^{i,j}$, i.e. for each row of the inter-region distance matrix $dmat_{i,j}^{t,t-1}$ the sum of the largest $L_j^{t-1}/2$ elements of that row is taken, giving a vector $\varphi$ with $L_i^t$ entries; $\sum^{L_i^t/2}\mathrm{Rank}_m^{\max}(\cdot)$ denotes summing, over the variable m, the largest $L_i^t/2$ values of $\varphi(m)$, i.e. the sum of the largest $L_i^t/2$ elements of the vector $\varphi$ is taken and then normalized to give the distance $D_{i,j}^{t,t-1}$ between the two motion regions.
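A sketch of the block and region distances; the position term is read here as a Euclidean distance, and the λ defaults follow the example values 40, 600 and 4:

```python
import numpy as np

def block_distance(b1, b2, lam_mean=40.0, lam_var=600.0, lam_loc=4.0):
    """d_{m,n}^{i,j}: distance between two sampled blocks given as
    (mean, variance, (x, y) position) tuples."""
    (m1, v1, (x1, y1)), (m2, v2, (x2, y2)) = b1, b2
    return (abs(m1 - m2) / lam_mean + abs(v1 - v2) / lam_var
            + np.hypot(x1 - x2, y1 - y2) / lam_loc)

def region_distance(blocks_i, blocks_j, **lam):
    """D_{i,j}^{t,t-1}: distance between region i (frame t) and region j
    (frame t-1) via the rank-max sums of step (3.2)."""
    li, lj = len(blocks_i), len(blocks_j)
    dmat = np.array([[block_distance(bm, bn, **lam) for bn in blocks_j]
                     for bm in blocks_i])
    # phi(m): sum of the largest floor(lj/2) entries of row m
    phi = np.sort(dmat, axis=1)[:, -max(lj // 2, 1):].sum(axis=1)
    # sum of the largest floor(li/2) values of phi, normalized by lj*li/4
    return np.sort(phi)[-max(li // 2, 1):].sum() / (lj * li / 4.0)
```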

(3.3) From $DMAT^{t,t-1}$, find the minimum distance $D_i^{\min,t-1}$ from the current target region i to all target regions of the (t-1)-th frame image, and let $J_i^{\min}$ be the index of the motion region of the (t-1)-th frame image that is closest to motion region i of the current frame t.

If $D_i^{\min,t-1} < \varepsilon$, the two target regions are considered to match, i.e. they correspond to the same moving target, and the score is updated with the update coefficient a: $\tilde{p}_i^t = p_i^t + a \times \tilde{p}_{J_i^{\min}}^{t-1}$;

here ε is the minimum distance threshold, which is related to the choice of $\lambda_{mean}$, $\lambda_{variance}$ and $\lambda_{location}$; with the values 40, 600 and 4 used in this example, ε is taken as 8. a is the update coefficient and expresses how strongly the target is influenced by its previous score; it generally lies between 0.8 and 0.95. After the score has been updated, jump to step (3.6);

otherwise, it is considered that the (t-1)-th frame image contains no match for the current motion region, a matching target must be sought in the (t-2)-th frame image, and step (3.4) is executed.

$D_i^{\min,t-1}$ and $J_i^{\min}$ are computed as follows: computing $D_{i,j}^{t,t-1}$ gives the inter-frame distance matrix $DMAT^{t,t-1}$ between the frame-t image and the frame-(t-1) image; for each row i of $DMAT^{t,t-1}$, taking the minimum of all $N_{t-1}$ elements of that row gives the distance $D_i^{\min,t-1}$ between region i and the closest motion region of the (t-1)-th frame image, and the column index at which the minimum occurs is the index $J_i^{\min}$ of the corresponding closest motion region.

(3.4) If motion region i cannot be associated with any motion region of the (t-1)-th frame image, use the $L_i^t$ blocks of motion region i to compute the distances between the motion region and all motion regions of the (t-2)-th frame image, i.e. compute the values of all elements in row i of the inter-frame distance matrix $DMAT^{t,t-2}$ between the frame-t image and the frame-(t-2) image (for regions that have already been associated with a region of the (t-1)-th frame image there is no need to compute their distances to the target regions of the (t-2)-th frame image). The element in row i, column k of $DMAT^{t,t-2}$ is the distance $D_{i,k}^{t,t-2}$ between target region i of the frame-t image and target region k of the frame-(t-2) image;

(3.5) Find the minimum distance $D_i^{\min,t-2}$ from the current target region to all target regions of the (t-2)-th frame image, and let $j^{\ast}$ be the index of the motion region of the (t-2)-th frame image that is closest to motion region i of the current frame t.

If $D_i^{\min,t-2} < \varepsilon$, the two target regions are considered to match, i.e. they correspond to the same moving target, and the score is updated with the update coefficient b: $\tilde{p}_i^t = p_i^t + b \times \tilde{p}_{j^{\ast}}^{t-2}$, where b < a is the update coefficient expressing how strongly the target is influenced by its previous score; it generally lies between 0.75 and 0.9;

otherwise, it is considered that the (t-2)-th frame image contains no match for the current motion region, the current motion region is a newly appearing moving target, and its score keeps its initial value, $\tilde{p}_i^t = p_i^t$;

(3.6) Judge whether $\tilde{p}_i^t$ is greater than the alarm threshold η; if $\tilde{p}_i^t > \eta$, the target region is considered to be smoke and an alarm is raised;

the choice of η is related to the update coefficients a and b and to the required alarm sensitivity; in this example, with a and b equal to 0.9 and 0.8 respectively, a value of η = 3.5 gave a fairly balanced detection result;
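A sketch of the association and score update of steps (3.3)-(3.6), using the example parameter values (ε = 8, a = 0.9, b = 0.8, η = 3.5) and the `region_distance` sketch above; the bookkeeping of which regions were already matched is simplified:

```python
def update_scores(initial_scores, regions_t, prev1, prev2,
                  eps=8.0, a=0.9, b=0.8, eta=3.5):
    """Steps (3.3)-(3.6): associate each current region with the nearest region
    of frame t-1 (or t-2 as a fallback), update its score, and flag alarms.
    `regions_t` holds each current region's sampled blocks; `prev1`/`prev2`
    are lists of (blocks, updated_score) for frames t-1 and t-2."""
    final_scores, alarms = [], []
    for i, blocks in enumerate(regions_t):
        score = initial_scores[i]
        d1 = [region_distance(blocks, bl) for bl, _ in prev1] or [float("inf")]
        j = min(range(len(d1)), key=d1.__getitem__)
        if d1[j] < eps:                       # matched in frame t-1
            score += a * prev1[j][1]
        else:                                 # fall back to frame t-2
            d2 = [region_distance(blocks, bl) for bl, _ in prev2] or [float("inf")]
            k = min(range(len(d2)), key=d2.__getitem__)
            if d2[k] < eps:
                score += b * prev2[k][1]
            # otherwise: newly appeared target, score stays at its initial value
        final_scores.append(score)
        if score > eta:                       # step (3.6): alarm threshold
            alarms.append(i)
    return final_scores, alarms
```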

(4) Perform the corresponding memory operations

(4.1) In the saved image sequence Image_list, release the saved image information of the (t-ζ)-th frame and save the image information of the current t-th frame;

(4.2) release the information on the motion regions of the (t-2)-th frame image saved in the target region list Objetc_list;

(4.3) set t = t + 1 and continue with step (1);

Figure 2 shows screenshots of two video scenes containing smoke; Figure 3 shows the results of motion region detection for the scenes in Figure 2; Figure 4 shows the smoke detection results, in which the black lines are the motion trajectories of the associated regions; the white regions are regions whose final score $\tilde{p}_i^t$ is negative; the light gray regions are regions identified as smoke in a single frame but whose overall score over the time sequence has not yet reached the smoke criterion (including false alarms detected in single frames and smoke that has only just appeared), i.e. regions whose score is positive but still below the alarm threshold; and the dark gray parts are the alarm regions.

Claims (6)

1. A smoke detection method based on computer vision specifically comprises the following steps:
detecting a motion area of the t frame image, and recording the serial number of the motion area as i;
extracting more than one characteristic of the ith motion area of the t frame;
calculating the characteristic weighted sum of the ith motion area of the t frame to obtain the attribute score of the motion area;
calculating the distance between the ith motion area of the t frame and all motion areas of the t-1 frame image;
determining the minimum value of the distances between the ith motion area of the t frame and all the motion areas of the t-1 frame image;
if the minimum value of the distances between the ith motion area of the t-th frame and all the motion areas of the t-1-th frame image is smaller than the minimum distance threshold value, updating the attribute score of the ith motion area of the t-th frame according to the attribute score of the motion area corresponding to the minimum value of the t-1-th frame image;
if the minimum value of the distances between the ith motion area of the t-th frame and all the motion areas of the t-1-th frame image is larger than or equal to the distance threshold value, calculating the distances between the ith motion area of the t-th frame and all the motion areas of the t-2-th frame image;
determining the minimum value of the distances between the ith motion area of the t frame and all motion areas of the t-2 frame image;
if the minimum value of the distances between the ith motion area of the t-th frame and all the motion areas of the t-2-th frame image is smaller than the minimum distance threshold value, updating the attribute score of the ith motion area of the t-th frame according to the attribute score of the motion area corresponding to the minimum value of the t-2-th frame image; if the minimum value of the distances between the ith motion area of the tth frame and all the motion areas of the image of the t-2 frame is greater than or equal to the minimum distance threshold value, keeping the attribute score of the ith motion area of the tth frame unchanged;
if the attribute score of the ith motion area of the t frame exceeds an alarm threshold, determining that smoke exists;
the step of calculating the distance between the ith motion area of the t-th frame and all the motion areas of the t-1 frame image specifically comprises the following steps:
respectively selecting $L_i^t$ blocks in the ith motion area and $L_j^{t-1}$ blocks in the jth motion area of the t-1 frame image;
calculating the distance between the ith motion region and the jth motion region of the t-1 frame image as
$$D_{i,j}^{t,t-1} = \frac{\sum^{L_i^t/2} \mathrm{Rank}_m^{\max}\left(\sum^{L_j^{t-1}/2} \mathrm{Rank}_n^{\max}\left(d_{m,n}^{i,j}\right)\right)}{L_j^{t-1} \times L_i^t / 4},$$
wherein
$$d_{m,n}^{i,j} = \frac{\left|\mu_m^{t,i} - \mu_n^{t-1,j}\right|}{\lambda_{mean}} + \frac{\left|\sigma_m^{t,i} - \sigma_n^{t-1,j}\right|}{\lambda_{variance}} + \frac{\sqrt{\left(\theta x_m^{t,i} - \theta x_n^{t-1,j}\right)^2 + \left(\theta y_m^{t,i} - \theta y_n^{t-1,j}\right)^2}}{\lambda_{location}},$$
m represents the m-th block in the ith motion area of the t-th frame, n represents the n-th block in the jth motion area of the t-1 frame image, $\lambda_{mean}$, $\lambda_{variance}$, $\lambda_{location}$ are respectively the weight parameters, $\mu_l^{t,i}$ and $\sigma_l^{t,i}$ are respectively the mean and variance of the l-th block in the ith motion region of the t-th frame, $(\theta x_l^{t,i}, \theta y_l^{t,i})$ is the location of the l-th block in the ith motion region of the t-th frame, $\mu_l^{t-1,j}$ and $\sigma_l^{t-1,j}$ are respectively the mean and variance of the l-th block in the jth motion region of the t-1 frame, $(\theta x_l^{t-1,j}, \theta y_l^{t-1,j})$ is the location of the l-th block in the jth motion region of the t-1 frame, $\mathrm{Rank}_n^{\max}$ indicates taking, with n as the variable, the sum of the largest $L_j^{t-1}/2$ values of $d_{m,n}^{i,j}$, and $\mathrm{Rank}_m^{\max}$ indicates taking, with m as the variable, the sum of the largest $L_i^t/2$ values of $\varphi(m)$.
2. The smoke detection method according to claim 1, wherein the step of updating the attribute score of the ith motion region of the t-th frame according to the attribute score of the motion region corresponding to the minimum value of the image of the t-1 frame specifically comprises: the attribute score of the ith motion region of the t-th frame is $\tilde{p}_i^t = p_i^t + a \times \tilde{p}_{J_i^{\min}}^{t-1}$, where $\tilde{p}_{J_i^{\min}}^{t-1}$ is the attribute score of the motion area corresponding to the minimum value of the t-1 frame image, and 0.8 ≤ a ≤ 0.95.
3. The smoke detection method according to claim 1, wherein the step of updating the attribute score of the ith motion region of the t-th frame according to the attribute score of the motion region corresponding to the minimum value of the image of the t-2 frame specifically comprises: the attribute score of the ith motion region of the t-th frame is $\tilde{p}_i^t = p_i^t + b \times \tilde{p}_{j^{\ast}}^{t-2}$, where $\tilde{p}_{j^{\ast}}^{t-2}$ is the attribute score of the motion area corresponding to the minimum value of the t-2 frame image, and 0.75 ≤ b ≤ 0.9.
4. The smoke detection method of claim 1, wherein the characteristics of the motion region comprise: the mean gray level; the average number of mean-crossings within the region over the preceding history frames; the ratio of the region means of the maximum gray-level increase image and the maximum gray-level decrease image over the preceding history frames; the mean and variance, within the region, of the maximum gray-level change image over the preceding history frames; and the ratio of the number of large-gradient pixels in the region to the region area.
5. The smoke detection method of claim 1, wherein the detecting a motion region of the t-th frame image comprises:
generating a moving foreground image: for each pixel $F_t(x,y)$ of the t-th frame image $F_t$, subtracting the corresponding pixels $F_{t-\Delta t_1}(x,y)$, $F_{t-\Delta t_2}(x,y)$, $F_{t-\Delta t_3}(x,y)$ of the $t-\Delta t_1$, $t-\Delta t_2$, $t-\Delta t_3$ frame images and taking the absolute values to obtain the differences $diff_{t-\Delta t_1}(x,y)$, $diff_{t-\Delta t_2}(x,y)$, $diff_{t-\Delta t_3}(x,y)$; comparing the maximum value of the differences with a motion detection threshold ΔF; if the maximum value of the differences is larger than the motion detection threshold ΔF, determining that motion exists at the point and setting the pixel $\hat{F}_t(x,y)$ to 255, otherwise setting it to 0, thereby obtaining the moving foreground image;
filtering the moving foreground image;
and a connected domain marking step.
6. A smoke detection method according to claim 5, wherein Δ F is between 10 and 30.
CN201110365784 (priority date 2011-11-17, filing date 2011-11-17) · Smog detection method based on computer vision · Expired - Fee Related · CN102509414B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201110365784 (CN102509414B) | 2011-11-17 | 2011-11-17 | Smog detection method based on computer vision

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201110365784 (CN102509414B) | 2011-11-17 | 2011-11-17 | Smog detection method based on computer vision

Publications (2)

Publication Number | Publication Date
CN102509414A (en) | 2012-06-20
CN102509414B | 2013-09-18

Family

ID=46221490

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201110365784 (CN102509414B, Expired - Fee Related) | Smog detection method based on computer vision | 2011-11-17 | 2011-11-17

Country Status (1)

Country | Link
CN (1) | CN102509414B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102982313B (en)* | 2012-10-31 | 2015-08-05 | 深圳辉锐天眼科技有限公司 | The method of Smoke Detection
JP2018005642A (en)* | 2016-07-05 | 2018-01-11 | 株式会社日立製作所 | Fluid substance analyzer
CN106778488B (en)* | 2016-11-22 | 2019-07-16 | 中国民航大学 | Low illumination smoke video detection method based on image correlation
CN107977638B (en)* | 2017-12-11 | 2020-05-26 | 智美达(江苏)数字技术有限公司 | Video monitoring alarm method, device, computer equipment and storage medium
CN109142176B (en)* | 2018-09-29 | 2024-01-12 | 佛山市云米电器科技有限公司 | Smoke subarea space rechecking method based on space association
CN109612573B (en)* | 2018-12-06 | 2021-01-12 | 南京林业大学 | A Canopy Fire and Ground Fire Detection Method Based on Noise Spectrum Analysis

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US7505604B2 (en)* | 2002-05-20 | 2009-03-17 | Simmonds Precision Prodcuts, Inc. | Method for detection and recognition of fog presence within an aircraft compartment using video images
CN101441712A (en)* | 2008-12-25 | 2009-05-27 | 北京中星微电子有限公司 | Flame video recognition method and fire hazard monitoring method and system
CN102013008A (en)* | 2010-09-16 | 2011-04-13 | 北京智安邦科技有限公司 | Smoke detection method based on support vector machine and device
CN102163280A (en)* | 2011-04-12 | 2011-08-24 | 华中科技大学 | Method for identifying, tracking and converting target based on confidence degree and multi-frame judgement

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR100987786B1 (en)* | 2008-07-23 | 2010-10-13 | (주)에이치엠씨 | Fire Detection System Using Smoke Detection

Also Published As

Publication number | Publication date
CN102509414A (en) | 2012-06-20

Similar Documents

Publication | Title
CN109460753B (en) | Method for detecting floating object on water
CN102982313B (en) | The method of Smoke Detection
CN110378288B (en) | Deep learning-based multi-stage space-time moving target detection method
CN107833221B (en) | A water leak detection method based on multi-channel feature fusion and machine learning
US20240005759A1 (en) | Lightweight fire smoke detection method, terminal device, and storage medium
CN105023008B (en) | The pedestrian of view-based access control model conspicuousness and multiple features recognition methods again
Wang et al. | Fire smoke detection based on texture features and optical flow vector of contour
CN102509414B (en) | Smog detection method based on computer vision
CN110874592A (en) | Forest fire smoke image detection method based on total bounded variation
CN107169985A (en) | A kind of moving target detecting method based on symmetrical inter-frame difference and context update
CN106650600A (en) | Forest smoke and fire detection method based on video image analysis
CN109961042B (en) | A Smoke Detection Method Combining Deep Convolutional Neural Networks and Visual Change Graphs
KR101414670B1 (en) | Object tracking method in thermal image using online random forest and particle filter
CN102682303A (en) | Crowd exceptional event detection method based on LBP (Local Binary Pattern) weighted social force model
CN106203334A (en) | A method for flame detection in indoor scenes
CN103475800B (en) | Method and device for detecting foreground in image sequence
CN102201146A (en) | Active infrared video based fire smoke detection method in zero-illumination environment
CN103530893A (en) | Foreground detection method in camera shake scene based on background subtraction and motion information
CN108074234A (en) | A kind of large space flame detecting method based on target following and multiple features fusion
CN111598928A (en) | Abrupt change moving target tracking method based on semantic evaluation and region suggestion
CN103955949A (en) | Moving target detection method based on Mean-shift algorithm
CN102393902A (en) | Vehicle color detection method based on H_S two-dimensional histogram and regional color matching
CN116228772A (en) | Quick detection method and system for fresh food spoilage area
CN116665015B (en) | A method for detecting weak and small targets in infrared sequence images based on YOLOv5
CN106815576A (en) | Target tracking method based on consecutive hours sky confidence map and semi-supervised extreme learning machine

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
CF01 | Termination of patent right due to non-payment of annual fee (granted publication date: 20130918; termination date: 20191117)

