


TECHNICAL FIELD

The present invention relates to the technical field of video surveillance, and in particular to a method and device for detecting people gathering.
BACKGROUND

In prior-art video surveillance, when people gather in a monitored scene, both the management risk and the control difficulty of the monitored area increase, and the scene must be managed with measures different from those used in the normal state.

Because modern video surveillance systems are deployed at enormous scale with large numbers of cameras, spotting crowd formation across all monitored scenes requires substantial manpower watching every camera for long periods, which is costly in labor and prone to missed detections. Automatically detecting people gathering in a monitored scene by means of video analysis has therefore become a requirement for intelligent surveillance systems.

An existing implementation proposes a video-based people-gathering detection method: learn the monitored area from consecutive video frames to obtain its current background image; threshold-segment the foreground image to obtain a segmented image; perform pixel statistics on the connected regions of the target image; and judge whether a people-gathering region exists from the area of each connected region in the target image and a preset area threshold.

Because this method obtains gathering regions by threshold segmentation of a static-region image, its gathering criterion is simple and it does not fully exploit the information in multiple video frames; its applicability across scenes is poor, merged foreground blobs produce erroneous detections under more complex scene conditions, and its accuracy is low.
SUMMARY OF THE INVENTION

In view of this, the present application provides a people-gathering detection method and device that can improve the accuracy of people-gathering detection.

To solve the above technical problem, a first aspect of the present application provides a people-gathering detection method, the method comprising:

extracting targets to be detected from a captured video stream, and tracking the targets to obtain a tracking list;

determining, from the tracking list, the moving speed of each target to be detected;

judging whether the moving speed is less than a first preset threshold, and determining a static region of a frame of image according to the multiple regions in that frame corresponding to the multiple targets whose moving speed is less than the first preset threshold;

taking images of the video stream that contain a static region as candidate gathering images; when it is determined that the number N of candidate gathering images has reached a second preset threshold, judging with a preset people-gathering prediction model whether each candidate gathering image is a people-gathering image, and counting the number M of images judged to be people-gathering images;

when it is determined that M is greater than a third preset threshold, determining that a people-gathering event has occurred.
A second aspect of the present application provides a people-gathering detection device, the device comprising: an acquisition unit, a first determination unit, a second determination unit, a third determination unit, a fourth determination unit, a statistics unit, and a fifth determination unit.

The acquisition unit is configured to extract targets to be detected from a captured video stream and track them to obtain a tracking list.

The first determination unit is configured to determine, from the tracking list obtained by the acquisition unit, the moving speed of each target to be detected.

The second determination unit is configured to judge whether the moving speed determined by the first determination unit is less than the first preset threshold, and to determine the static region of a frame according to the regions in that frame corresponding to the multiple targets whose moving speed is less than the first preset threshold.

The third determination unit is configured to take images of the video stream that contain the static region determined by the second determination unit as candidate gathering images, and to determine whether the number N of candidate gathering images reaches the second preset threshold.

The fourth determination unit is configured to, when the third determination unit determines that the number N of candidate gathering images has reached the second preset threshold, judge with the preset people-gathering prediction model whether each candidate gathering image is a people-gathering image.

The statistics unit is configured to count the number M of images that the fourth determination unit judges to be people-gathering images.

The fifth determination unit is configured to determine that a people-gathering event has occurred when the number M counted by the statistics unit is determined to be greater than the third preset threshold.
A third aspect of the present application provides a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps of the people-gathering detection method described above.

An electronic device is also provided, comprising the above non-transitory computer-readable storage medium and a processor that can access the non-transitory computer-readable storage medium.

In the present application, static-region recognition is combined with a preset people-gathering prediction model obtained by deep learning to perform a secondary recognition on multiple frames of the video, which makes full use of the process information of a people-gathering event and can improve the accuracy of people-gathering detection.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic flowchart of people-gathering detection in an embodiment of the present application;

FIG. 2 is a schematic diagram of the location region where a gathering event occurs in an embodiment of the present application;

FIG. 3 is a schematic structural diagram of a device implementing the above technique in an embodiment of the present application.
DETAILED DESCRIPTION

To make the objectives, technical solutions, and advantages of the present invention clearer, the technical solutions of the present invention are described in detail below with reference to the accompanying drawings and embodiments.

An embodiment of the present application provides a people-gathering detection method that combines static-region recognition with a preset people-gathering prediction model obtained by deep learning to perform secondary recognition on multiple frames of the video, making full use of the process information of a people-gathering event and improving the accuracy of people-gathering detection.

The present application applies to the detection of people-gathering events in public places and important areas. The process of people-gathering detection in the embodiments of the present application is described in detail below with reference to the drawings.

For convenience of description, the device that performs people-gathering detection is hereinafter referred to simply as the detection device.

Referring to FIG. 1, FIG. 1 is a schematic flowchart of people-gathering detection in an embodiment of the present application. The specific steps are as follows.
Step 101: the detection device extracts targets to be detected from the captured video stream and tracks them, obtaining a tracking list.

To capture the video stream, video surveillance equipment such as a camera acquires video images of the monitored scene in real time and transmits them to the detection device; the detection device receives and stores the video stream sent by the surveillance equipment, thereby obtaining it in real time.

In this step, extracting the targets to be detected from the captured video stream and tracking them to obtain a tracking list includes the following two steps.

First step: obtain the foreground image of each frame of the video stream, and obtain the targets to be detected in the foreground image.

This step can be implemented in, but is not limited to, the following two ways.

First way:

a foreground model for foreground detection can extract foreground objects from the video stream, and these foreground objects then serve as the detection targets for the target persons. Usable background modeling methods include the Gaussian mixture model (GMM) and the ViBe (visual background extractor) algorithm, among others.

Second way:

a trained convolutional neural network (CNN) can extract feature targets from the video stream, and these feature targets then serve as the detection targets for the target persons. The CNN must first be trained on person features so that it can identify the feature targets appearing in a frame of the video. As one embodiment, the CNN can be trained on human body parts, so that the trained network can extract human body targets from the video stream and thereby obtain the targets to be detected.
Second step: track each target to be detected in the foreground image of each frame, obtaining the tracking list.

After the targets to be detected are obtained, they can be tracked and the tracking results recorded in the tracking table.

In a specific implementation, the detection targets can be tracked by means such as Kalman filtering, particle filtering, or multi-target tracking techniques.

In a specific implementation, a single tracking list can hold one tracking entry per detection target, or a separate tracking list can be generated for each detection target.

The tracking list in the embodiments of the present application includes at least a mapping among: the identifier of each target to be detected, the identifiers of the video frames in which the target appears, and the historical coordinates of the target.

The historical coordinates of a target to be detected are the coordinates of the center point of the bounding rectangle of the target's contour.
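As an illustrative sketch (not the patent's own implementation), the tracking-list mapping described above — target identifier to a history of (video-frame identifier, timestamp, center coordinate) entries, with the coordinate taken as the center of the target's bounding rectangle — could look as follows; the class and method names are hypothetical:

```python
from collections import defaultdict

class TrackingList:
    """Maps each target ID to its history of (frame_id, timestamp, center)."""

    def __init__(self):
        self.entries = defaultdict(list)

    def record(self, target_id, frame_id, timestamp, bbox):
        # Historical coordinate = center point of the target's bounding rectangle.
        x1, y1, x2, y2 = bbox
        center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
        self.entries[target_id].append((frame_id, timestamp, center))

    def history(self, target_id):
        return self.entries[target_id]

# Target 1 appearing in video frames 2, 3, and 8 (cf. Table 1 below);
# bounding boxes and timestamps are made up for illustration.
track = TrackingList()
track.record(1, frame_id=2, timestamp=1.0, bbox=(2, 4, 4, 6))    # center (3, 5)
track.record(1, frame_id=3, timestamp=3.0, bbox=(5, 8, 7, 10))   # center (6, 9)
track.record(1, frame_id=8, timestamp=5.0, bbox=(8, 9, 10, 11))  # center (9, 10)
```

A real system would populate `record` from the tracker's per-frame output (e.g. Kalman-filter state updates).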
In a concrete implementation, the tracking list can take the form of a table. Referring to Table 1, Table 1 shows the content contained in the tracking list in an embodiment of the present application.

Table 1

Table 1 gives the historical coordinates of target 1 as it appears in video frames 2, 3, and 8, i.e., the two-dimensional coordinate corresponding to each time instant; for example, the coordinate at time t1 may be (3, 5).
Step 102: the detection device determines, from the tracking list, the moving speed of each target to be detected.

In this step, determining the moving speed of each target to be detected from the tracking list includes:

calculating, from the historical coordinates of each target in the tracking table, the moving speed of each target over a preset time period.
In a specific implementation, the moving speed v[t] of a target to be detected can be calculated according to the following formula:

v[t] = √((x[t] − x[t−T])² + (y[t] − y[t−T])²) / T

where (x[t], y[t]) are the historical coordinates of the target in the tracking table at time t, (x[t−T], y[t−T]) are its historical coordinates at time t−T, and T is the preset time period.

Taking Table 1 as an example, and assuming the interval between t1 and t2 is the preset time period T, the moving speed v[t2] of target 1 at time t2 is v[t2] = √((x[t2] − x[t1])² + (y[t2] − y[t1])²) / T.

Assuming target 1 is at coordinate (6, 9) at time t2 (13:45:38) and at (3, 5) at time t1 (13:45:36), its moving speed at time t2 is √((6 − 3)² + (9 − 5)²) / 2 s = 5 cm / 2 s = 2.5 cm/s. Displacement here is measured in centimeters; in practice the unit can be chosen as needed.
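The speed formula above can be sketched directly; this snippet reproduces the worked example, with the 2-second interval and centimeter coordinates taken from the text:

```python
import math

def moving_speed(p_now, p_prev, T):
    """v[t] = Euclidean distance between (x[t], y[t]) and (x[t-T], y[t-T]), divided by T."""
    dx = p_now[0] - p_prev[0]
    dy = p_now[1] - p_prev[1]
    return math.sqrt(dx * dx + dy * dy) / T

# Target 1: (3, 5) at t1 = 13:45:36 and (6, 9) at t2 = 13:45:38, so T = 2 s.
v = moving_speed((6, 9), (3, 5), T=2.0)
print(v)  # 2.5 (cm/s)
```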
In the embodiments of the present application, the moving speed of a detection target begins to be calculated at time T; the moving speed before time T can be ignored. T can be configured as needed; for a more precise speed calculation it can be set smaller.
Step 103: the detection device judges whether the moving speed is less than the first preset threshold, and determines the static region of a frame according to the multiple regions in that frame corresponding to the multiple targets whose moving speed is less than the first preset threshold.

In this step, determining the static region of a frame from the regions corresponding to the multiple targets whose moving speed is less than the first preset threshold includes:

when a frame contains K targets to be detected whose moving speed is less than the first preset threshold, taking as the static region of that frame either the bounding rectangle enclosing the union of the K targets' regions in the frame, or directly the union of the K targets' regions itself. Here K is greater than a fourth preset value, which is set as needed in practice, e.g. 5 or 8.

Referring to FIG. 2, FIG. 2 is a schematic diagram of a static region in an embodiment of the present application. In FIG. 2, the bounding rectangle of the union of the regions of seven detection targets serves as the static region; each detection target's own region is also marked with a rectangle, and the regions of detection targets 6 and 7 overlap.

In the embodiments of the present application, only when multiple targets have a moving speed below the first preset value does the region corresponding to the union of their regions become a static region, and only images containing a static region become candidate gathering images subjected to secondary recognition; this prevents the secondary recognition of people-gathering images from being performed too frequently.
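The static-region rule above — at least K slow targets, their regions enclosed in one rectangle — might be sketched like this; the function name and the default fourth preset value of 5 are illustrative assumptions:

```python
def static_region(boxes, speeds, speed_thresh, k_min=5):
    """Return the bounding rectangle enclosing all slow targets' boxes,
    or None if not enough targets are below the speed threshold.
    Boxes are (x1, y1, x2, y2) rectangles."""
    slow = [b for b, v in zip(boxes, speeds) if v < speed_thresh]
    if len(slow) <= k_min:  # K must exceed the fourth preset value
        return None
    x1 = min(b[0] for b in slow)
    y1 = min(b[1] for b in slow)
    x2 = max(b[2] for b in slow)
    y2 = max(b[3] for b in slow)
    return (x1, y1, x2, y2)

# Seven near-stationary targets (cf. FIG. 2) yield one static region.
boxes = [(i * 3, 10, i * 3 + 4, 16) for i in range(7)]
region = static_region(boxes, speeds=[0.1] * 7, speed_thresh=0.5)
print(region)  # (0, 10, 22, 16)
```

The variant that keeps the raw union of regions instead of its bounding rectangle would return the list `slow` itself rather than one enclosing box.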
Step 104: the detection device takes images of the video stream that contain a static region as candidate gathering images; when it determines that the number N of candidate gathering images has reached the second preset threshold, it judges with the preset people-gathering prediction model whether each candidate gathering image is a people-gathering image, and counts the number M of images judged to be people-gathering images.

The people-gathering prediction model in the embodiments of the present application is trained with multiple images of gathering events and of non-gathering events as sample images. The training process is as follows.

The people-gathering prediction model consists of a convolutional neural network model and a regression learning model. Given an input image, the CNN model outputs confidence scores for gathering and non-gathering. A gathering-confidence threshold is set in the regression learning model; the regression learning model takes as input the CNN model's output, i.e., the gathering and non-gathering confidences, and outputs a label indicating whether the image is a people-gathering image: when the gathering confidence is greater than the gathering-confidence threshold, the output is the people-gathering label; otherwise, the output is the non-gathering label.

The gathering-confidence threshold can be set as needed and is not limited in the embodiments of the present application.

The convolutional neural network model is built as follows:

A images of gathering events and B images of non-gathering events are used as sample data for learning in a convolutional neural network, which thereby acquires the ability to distinguish gathering from non-gathering images, building the CNN model. The CNN learns from the input data and category labels; deep learning networks such as GoogLeNet, ResNet, VGG, or AlexNet can be used.

When the detection device determines that the number N of candidate gathering images has not yet reached the second preset threshold, it continues to acquire candidate gathering images.

M and N are set according to the actual application environment and are not specifically limited; M is an integer not greater than N, and N is an integer greater than 0.
Once trained, the people-gathering prediction model takes an image as input and outputs a label indicating whether the image is a people-gathering image.

Step 105: when it determines that M is greater than the third preset threshold, the detection device determines that a people-gathering event has occurred.

When M is not greater than the third preset value, the current candidate gathering images are cleared, and candidate gathering images are acquired again from the video stream captured in real time.
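Steps 104 and 105 amount to a two-threshold vote over the candidate images. The sketch below assumes the per-image classifier output is available as a gathering-confidence score; all threshold values are illustrative, not prescribed by the text:

```python
def gathering_event(confidences, n_required, conf_thresh, m_thresh):
    """Decide whether a people-gathering event has occurred.

    confidences: gathering confidence of each collected candidate image.
    n_required:  second preset threshold (N candidates must be collected first).
    conf_thresh: gathering-confidence threshold of the prediction model.
    m_thresh:    third preset threshold on M, the count of gathering images.
    """
    if len(confidences) < n_required:
        return None   # keep collecting candidate images
    m = sum(1 for c in confidences if c > conf_thresh)
    if m > m_thresh:
        return True   # people-gathering event occurred
    return False      # clear candidates and start over

scores = [0.9, 0.8, 0.95, 0.3, 0.85]
print(gathering_event(scores, n_required=5, conf_thresh=0.5, m_thresh=3))  # True
```

Here M = 4 of the N = 5 candidates exceed the confidence threshold, and 4 > 3, so an event is declared.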
In the embodiments of the present application, when the detection device determines that a people-gathering event has occurred, the method further includes:

the detection device issuing alarm information, the alarm information including a gathering-event flag and the location region where the gathering event occurred.

The gathering-event flag indicates that a gathering event is currently occurring; in a specific implementation it can be rendered with text, with symbols, or with red or yellow patterns, among other ways, and is not limited to these implementations.

The location region where the gathering event occurred can be given by marking the people-gathering region in an image; the present application offers, but is not limited to, the following representation:

the image with the highest gathering confidence is selected as the image to display, and the static region within it is shown.

When an alarm is issued, the administrator can take corresponding precautionary measures according to the actual situation, such as calling the police, issuing warnings, or guiding the crowd.

A handling policy can also be preconfigured on the device for the alarm information, i.e., a correspondence between alarm information and handling policies can be configured.

When an alarm occurs, the alarm information is matched against the configured alarm information; if the match succeeds, the handling policy corresponding to that alarm information is applied, e.g., calling the police or issuing warnings and guidance by broadcast.
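The alarm-to-policy matching described above is essentially a lookup in a preconfigured correspondence table; a minimal sketch, with all alarm types, field names, and policy actions hypothetical:

```python
# Preconfigured correspondence between alarm information and handling policies.
ALARM_POLICIES = {
    "people_gathering": ["broadcast_warning", "dispatch_guidance"],
}

def handle_alarm(alarm):
    """Match the alarm against the configured table; return its policy, or None."""
    return ALARM_POLICIES.get(alarm.get("event"))

alarm = {"event": "people_gathering", "region": (120, 80, 360, 240)}
print(handle_alarm(alarm))  # ['broadcast_warning', 'dispatch_guidance']
```

An unmatched alarm falls through to `None`, which a deployment would route to a default policy such as manual handling by the administrator.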
Such timely alarms allow the gathered crowd to be notified quickly, helping to ensure people's personal safety.
Based on the same inventive concept, the present application further proposes a people-gathering detection device. Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a device implementing the above technique in an embodiment of the present application. The device includes: an acquisition unit 301, a first determination unit 302, a second determination unit 303, a third determination unit 304, a fourth determination unit 305, a statistics unit 306, and a fifth determination unit 307.

The acquisition unit 301 is configured to extract targets to be detected from the captured video stream and track them to obtain a tracking list.

The first determination unit 302 is configured to determine, from the tracking list obtained by the acquisition unit 301, the moving speed of each target to be detected.

The second determination unit 303 is configured to judge whether the moving speed determined by the first determination unit 302 is less than the first preset threshold, and to determine the static region of a frame according to the regions in that frame corresponding to the multiple targets whose moving speed is less than the first preset threshold.

The third determination unit 304 is configured to take images of the video stream that contain the static region determined by the second determination unit 303 as candidate gathering images, and to determine whether the number N of candidate gathering images reaches the second preset threshold.

The fourth determination unit 305 is configured to, when the third determination unit 304 determines that the number N of candidate gathering images has reached the second preset threshold, judge with the preset people-gathering prediction model whether each candidate gathering image is a people-gathering image.

The statistics unit 306 is configured to count the number M of images that the fourth determination unit 305 judges to be people-gathering images.

The fifth determination unit 307 is configured to determine that a people-gathering event has occurred when the number M counted by the statistics unit 306 is determined to be greater than the third preset threshold.
Preferably, the acquisition unit 301 is specifically configured, when extracting the targets to be detected from the captured video stream and tracking them to obtain the tracking list, to: obtain the foreground image of each frame of the video stream; and track each target to be detected in the foreground image of each frame, obtaining the tracking list.

Preferably, the first determination unit 302 is specifically configured, when determining from the tracking list the moving speed of each target to be detected, to: calculate, from the historical coordinates of each target in the tracking table, the moving speed of each target over the preset time period; where the tracking list includes a mapping among the identifier of each target, the identifiers of the video frames in which the target appears, and the historical coordinates of the target.

Preferably, the first determination unit 302 is specifically configured, when calculating from the historical coordinates of each target in the tracking table the moving speed of each target over the preset time period, to calculate the moving speed v[t] of a target according to the formula v[t] = √((x[t] − x[t−T])² + (y[t] − y[t−T])²) / T, where (x[t], y[t]) are the historical coordinates of the target in the tracking table at time t, (x[t−T], y[t−T]) are its historical coordinates at time t−T, and T is the preset time period.

Preferably, the fifth determination unit 307 is further configured to, when it determines that M is not greater than the third preset value, clear the current candidate gathering images and trigger the acquisition unit 301 to acquire candidate gathering images again from the video stream captured in real time.

Preferably, the people-gathering prediction model is trained with multiple images of gathering events and of non-gathering events as sample images.
The units of the above embodiments may be integrated into one body or deployed separately; they may be combined into one unit or further split into multiple subunits.

In addition, an embodiment of the present application further provides a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps of the people-gathering detection method.

An electronic device is also provided, comprising the non-transitory computer-readable storage medium and a processor that can access the non-transitory computer-readable storage medium.

In summary, the present application combines static-region recognition with deep learning to perform secondary recognition on multiple frames of the video, making full use of the process information of a people-gathering event, and can thereby improve the accuracy of people-gathering detection.

The embodiments of the present application adopt offline deep learning with multi-frame secondary recognition, fully learning the process information of the event so that the event judgment is more accurate.

The deep learning method, combined with the auxiliary judgment of static-region recognition, differs from judgment using only a basic-information method or only a deep learning method, and can effectively reduce false alarms. It can also output the location of the gathering region, producing richer output that facilitates post-processing of the alarm.

The above are merely preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811523516.9A (granted as CN111325048B) | 2018-12-13 | 2018-12-13 | Personnel gathering detection method and device |
| Publication Number | Publication Date |
|---|---|
| CN111325048A | 2020-06-23 |
| CN111325048B | 2023-05-26 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811523516.9AActiveCN111325048B (en) | 2018-12-13 | 2018-12-13 | Personnel gathering detection method and device |
| Country | Link |
|---|---|
| CN (1) | CN111325048B (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111985385A (en)* | 2020-08-14 | 2020-11-24 | 杭州海康威视数字技术股份有限公司 | Behavior detection method, device and equipment |
| CN112270671A (en)* | 2020-11-10 | 2021-01-26 | 杭州海康威视数字技术股份有限公司 | Image detection method, image detection device, electronic equipment and storage medium |
| CN113536932A (en)* | 2021-06-16 | 2021-10-22 | 中科曙光国际信息产业有限公司 | Crowd gathering prediction method and device, computer equipment and storage medium |
| CN113837034A (en)* | 2021-09-08 | 2021-12-24 | 云从科技集团股份有限公司 | Aggregated population monitoring method, device and computer storage medium |
| CN114494350A (en)* | 2022-01-28 | 2022-05-13 | 北京中电兴发科技有限公司 | Personnel gathering detection method and device |
| CN115761636A (en)* | 2022-11-21 | 2023-03-07 | 苏州浪潮智能科技有限公司 | Method, system, equipment and storage medium for detecting people gathering |
| CN116844100A (en)* | 2023-04-10 | 2023-10-03 | 海信集团控股股份有限公司 | An event detection method and electronic device |
| CN117079192A (en)* | 2023-10-12 | 2023-11-17 | 东莞先知大数据有限公司 | Method, device, equipment and medium for estimating number of rope skipping when personnel are shielded |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101325690A (en)* | 2007-06-12 | 2008-12-17 | 上海正电科技发展有限公司 | Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow |
| CN102164270A (en)* | 2011-01-24 | 2011-08-24 | 浙江工业大学 | Intelligent video monitoring method and system capable of exploring abnormal events |
| CN103473791A (en)* | 2013-09-10 | 2013-12-25 | 惠州学院 | Method for automatically recognizing abnormal velocity event in surveillance video |
| CN103839065A (en)* | 2014-02-14 | 2014-06-04 | 南京航空航天大学 | Extraction method for dynamic crowd gathering characteristics |
| US20140152836A1 (en)* | 2012-11-30 | 2014-06-05 | Stephen Jeffrey Morris | Tracking people and objects using multiple live and recorded surveillance camera video feeds |
| US20140241574A1 (en)* | 2011-04-11 | 2014-08-28 | Tao Wang | Tracking and recognition of faces using selected region classification |
| US20140348382A1 (en)* | 2013-05-22 | 2014-11-27 | Hitachi, Ltd. | People counting device and people trajectory analysis device |
| WO2016014724A1 (en)* | 2014-07-23 | 2016-01-28 | Gopro, Inc. | Scene and activity identification in video summary generation |
| CN105447458A (en)* | 2015-11-17 | 2016-03-30 | 深圳市商汤科技有限公司 | Large scale crowd video analysis system and method thereof |
| US20170345181A1 (en)* | 2016-05-27 | 2017-11-30 | Beijing Kuangshi Technology Co., Ltd. | Video monitoring method and video monitoring system |
| WO2018133666A1 (en)* | 2017-01-17 | 2018-07-26 | 腾讯科技(深圳)有限公司 | Method and apparatus for tracking video target |
| CN108810616A (en)* | 2018-05-31 | 2018-11-13 | 广州虎牙信息科技有限公司 | Object localization method, image display method, device, equipment and storage medium |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101325690A (en)* | 2007-06-12 | 2008-12-17 | 上海正电科技发展有限公司 | Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow |
| CN102164270A (en)* | 2011-01-24 | 2011-08-24 | 浙江工业大学 | Intelligent video monitoring method and system capable of exploring abnormal events |
| US20140241574A1 (en)* | 2011-04-11 | 2014-08-28 | Tao Wang | Tracking and recognition of faces using selected region classification |
| US20140152836A1 (en)* | 2012-11-30 | 2014-06-05 | Stephen Jeffrey Morris | Tracking people and objects using multiple live and recorded surveillance camera video feeds |
| US20140348382A1 (en)* | 2013-05-22 | 2014-11-27 | Hitachi, Ltd. | People counting device and people trajectory analysis device |
| CN103473791A (en)* | 2013-09-10 | 2013-12-25 | 惠州学院 | Method for automatically recognizing abnormal velocity event in surveillance video |
| CN103839065A (en)* | 2014-02-14 | 2014-06-04 | 南京航空航天大学 | Extraction method for dynamic crowd gathering characteristics |
| WO2016014724A1 (en)* | 2014-07-23 | 2016-01-28 | Gopro, Inc. | Scene and activity identification in video summary generation |
| CN105447458A (en)* | 2015-11-17 | 2016-03-30 | 深圳市商汤科技有限公司 | Large scale crowd video analysis system and method thereof |
| US20170345181A1 (en)* | 2016-05-27 | 2017-11-30 | Beijing Kuangshi Technology Co., Ltd. | Video monitoring method and video monitoring system |
| WO2018133666A1 (en)* | 2017-01-17 | 2018-07-26 | 腾讯科技(深圳)有限公司 | Method and apparatus for tracking video target |
| CN108810616A (en)* | 2018-05-31 | 2018-11-13 | 广州虎牙信息科技有限公司 | Object localization method, image display method, device, equipment and storage medium |
Cited By

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111985385A (en)* | 2020-08-14 | 2020-11-24 | 杭州海康威视数字技术股份有限公司 | Behavior detection method, device and equipment |
| CN111985385B (en)* | 2020-08-14 | 2023-08-29 | 杭州海康威视数字技术股份有限公司 | Behavior detection method, device and equipment |
| CN112270671A (en)* | 2020-11-10 | 2021-01-26 | 杭州海康威视数字技术股份有限公司 | Image detection method, image detection device, electronic equipment and storage medium |
| CN112270671B (en)* | 2020-11-10 | 2023-06-02 | 杭州海康威视数字技术股份有限公司 | Image detection method, device, electronic equipment and storage medium |
| CN113536932A (en)* | 2021-06-16 | 2021-10-22 | 中科曙光国际信息产业有限公司 | Crowd gathering prediction method and device, computer equipment and storage medium |
| CN113837034A (en)* | 2021-09-08 | 2021-12-24 | 云从科技集团股份有限公司 | Aggregated population monitoring method, device and computer storage medium |
| CN114494350A (en)* | 2022-01-28 | 2022-05-13 | 北京中电兴发科技有限公司 | Personnel gathering detection method and device |
| CN114494350B (en)* | 2022-01-28 | 2022-10-14 | 北京中电兴发科技有限公司 | Personnel gathering detection method and device |
| CN115761636A (en)* | 2022-11-21 | 2023-03-07 | 苏州浪潮智能科技有限公司 | Method, system, equipment and storage medium for detecting people gathering |
| CN116844100A (en)* | 2023-04-10 | 2023-10-03 | 海信集团控股股份有限公司 | Event detection method and electronic device |
| CN117079192A (en)* | 2023-10-12 | 2023-11-17 | 东莞先知大数据有限公司 | Method, device, equipment and medium for estimating number of rope skipping when personnel are shielded |
| CN117079192B (en)* | 2023-10-12 | 2024-01-02 | 东莞先知大数据有限公司 | Method, device, equipment and medium for estimating number of rope skipping when personnel are shielded |
Also Published As

| Publication number | Publication date |
|---|---|
| CN111325048B (en) | 2023-05-26 |
Similar Documents

| Publication | Title |
|---|---|
| CN111325048B (en) | Personnel gathering detection method and device |
| CN108062349B (en) | Video surveillance method and system based on video structured data and deep learning | |
| CN111091098B (en) | Training method of detection model, detection method and related device | |
| CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision | |
| CN111860318A (en) | Construction site pedestrian loitering detection method, device, equipment and storage medium | |
| CN108053427A (en) | A kind of modified multi-object tracking method, system and device based on KCF and Kalman | |
| CN107133607B (en) | Crowd counting method and system based on video surveillance | |
| CN108052859A (en) | A kind of anomaly detection method, system and device based on cluster Optical-flow Feature | |
| CN103986910A (en) | A method and system for counting passenger flow based on intelligent analysis camera | |
| CN103425967A (en) | Pedestrian flow monitoring method based on pedestrian detection and tracking | |
| CN112232211A (en) | An intelligent video surveillance system based on deep learning | |
| CN105160319A (en) | Method for realizing pedestrian re-identification in monitor video | |
| CN102855508B (en) | Opening type campus anti-following system | |
| CN108197575A (en) | A kind of abnormal behaviour recognition methods detected based on target detection and bone point and device | |
| CN109740411A (en) | Intelligent monitoring system, monitoring method and rapid alarm method based on face recognition | |
| CN114648748A (en) | Motor vehicle illegal parking intelligent identification method and system based on deep learning | |
| CN108540752A (en) | The methods, devices and systems that target object in video monitoring is identified | |
| CN112287823A (en) | A method of facial mask recognition based on video surveillance | |
| CN113362374A (en) | High-altitude parabolic detection method and system based on target tracking network | |
| KR20140132140A (en) | Method and apparatus for video surveillance based on detecting abnormal behavior using extraction of trajectories from crowd in images | |
| CN110516600A (en) | A kind of bus passenger flow detection method based on Face datection | |
| CN117893941A (en) | Method, device, equipment and storage medium for identifying wearing state of working clothes | |
| CN109977796A (en) | Trail current detection method and device | |
| CN105447463B (en) | Across the camera to automatically track system that substation is identified based on characteristics of human body | |
| CN107180229A (en) | Anomaly detection method based on the direction of motion in a kind of monitor video |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | TR01 | Transfer of patent right | Effective date of registration: 2025-07-29. Address after: Rooms 602 and 605, No. 85 Xiangxue Avenue Middle, Huangpu District, Guangzhou City, Guangdong Province 510000. Patentee after: Guangzhou Gaohang Technology Transfer Co.,Ltd. (China). Address before: No. 555 Qianmo Road, Binjiang District, Hangzhou City, Zhejiang Province 310051. Patentee before: Hangzhou Hikvision Digital Technology Co.,Ltd. (China). |