


Technical Field
The present invention relates to the field of surveillance technology, and in particular to a method and device for detecting people gathering.
Background
In prior-art video surveillance, when people gather in a monitored scene, the management risk and the difficulty of controlling the monitored area both increase, and a management scheme different from that used in the normal state must be adopted.
Because modern video surveillance systems are deployed on a huge scale and involve a large number of cameras, discovering people-gathering phenomena across all monitored scenes requires a great deal of manpower to watch all cameras for long periods, which is both labor-intensive and prone to missed detections. Automatically detecting people gathering in monitored scenes by means of video analysis has therefore become a requirement of intelligent surveillance systems.
An existing implementation proposes a video-based people-gathering detection method: monitored-area learning is performed on consecutive video images to obtain the current background image of the monitored area; threshold segmentation is applied to the foreground image to obtain a segmented image; pixel statistics are computed for the connected regions of the target image; and whether a people-gathering region exists is judged from the area of each connected region in the target image and a preset area threshold.
This method obtains the gathering region by threshold segmentation of a static-region image. Its gathering criterion is simple, it does not fully exploit the information in multiple video frames, and its applicability across scenes is poor: in more complex scenes the fusion produces false detections, so the detection accuracy is low.
Summary of the Invention
In view of this, the present application provides a people-gathering detection method and device, which can improve the accuracy of people-gathering detection.
To solve the above technical problem, a first aspect of the present application provides a people-gathering detection method, the method comprising:
extracting targets to be detected from a captured video stream, and tracking the targets to be detected to obtain a tracking linked list;
determining the moving speed of each target to be detected according to the tracking linked list;
judging whether the moving speed is less than a first preset threshold, and determining a static region of a frame of the image according to multiple regions corresponding, in that frame, to multiple targets to be detected whose moving speed is less than the first preset threshold;
taking images of the video stream that contain a static region as candidate gathering images; when it is determined that the number N of candidate gathering images reaches a second preset threshold, judging, with a preset people-gathering prediction model, whether each candidate gathering image is a people-gathering image, and counting the number M of images judged to be people-gathering images;
when it is determined that M is greater than a third preset threshold, determining that a people-gathering event has occurred.
A second aspect of the present application provides a people-gathering detection device, the device comprising: an acquisition unit, a first determination unit, a second determination unit, a third determination unit, a fourth determination unit, a statistics unit, and a fifth determination unit;
the acquisition unit is configured to extract targets to be detected from a captured video stream and track the targets to be detected to obtain a tracking linked list;
the first determination unit is configured to determine the moving speed of each target to be detected according to the tracking linked list obtained by the acquisition unit;
the second determination unit is configured to judge whether the moving speed determined by the first determination unit is less than a first preset threshold, and to determine the static region of a frame of the image according to the multiple regions corresponding, in that frame, to multiple targets to be detected whose moving speed is less than the first preset threshold;
the third determination unit is configured to take images of the video stream that contain a static region determined by the second determination unit as candidate gathering images, and to determine whether the number N of candidate gathering images reaches a second preset threshold;
the fourth determination unit is configured to judge, with a preset people-gathering prediction model, whether each candidate gathering image is a people-gathering image when the third determination unit determines that the number N of candidate gathering images reaches the second preset threshold;
the statistics unit is configured to count the number M of images judged by the fourth determination unit to be people-gathering images;
the fifth determination unit is configured to determine that a people-gathering event has occurred when the number M counted by the statistics unit is determined to be greater than a third preset threshold.
A third aspect of the present application provides a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps of the people-gathering detection method described above.
An electronic device is provided, comprising the above non-transitory computer-readable storage medium and a processor that can access the non-transitory computer-readable storage medium.
In the present application, static-region recognition is combined with a preset people-gathering prediction model obtained through deep learning to perform secondary recognition on multiple frames of the video; the process information of the occurrence of people-gathering events is fully utilized, which can improve the accuracy of people-gathering detection.
Description of the Drawings
FIG. 1 is a schematic flowchart of people-gathering detection in an embodiment of the present application;
FIG. 2 is a schematic diagram of the location region where a gathering event occurs in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a device applying the above technique in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the technical solution of the present invention is described in detail below with reference to the accompanying drawings and embodiments.
An embodiment of the present application provides a people-gathering detection method that combines static-region recognition with a preset people-gathering prediction model obtained through deep learning to perform secondary recognition on multiple frames of the video; the process information of the occurrence of people-gathering events is fully utilized, which can improve the accuracy of people-gathering detection.
The present application applies to the detection of people-gathering events in public places and important areas. The people-gathering detection process in the embodiments of the present application is described in detail below with reference to the accompanying drawings.
For convenience of description, the device that performs people-gathering detection is hereinafter referred to simply as the detection device.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of people-gathering detection in an embodiment of the present application. The specific steps are as follows.
Step 101: the detection device extracts targets to be detected from the captured video stream and tracks the targets to be detected to obtain a tracking linked list.
To capture the video stream, video images of the monitored scene may be acquired in real time by video surveillance equipment such as cameras and transmitted to the detection device; the detection device receives and stores the video stream sent by the surveillance equipment, thereby obtaining the video stream in real time.
In this step, extracting targets to be detected from the captured video stream and tracking them to obtain the tracking linked list includes the following two steps.
First step: obtain the foreground image of each frame of the video stream, and obtain the targets to be detected in the foreground image.
This step can be realized in, but is not limited to, the following two ways.
First way:
a foreground target may be extracted from the video stream through a foreground model used for foreground detection, and that foreground target serves as the detection target for the target person. The background modeling methods involved may include the Gaussian Mixture Model (GMM), the ViBe (visual background extractor) algorithm, and the like.
Second way:
a feature target may be extracted from the video stream by a trained convolutional neural network (CNN), and that feature target serves as the detection target for the target person. The convolutional neural network must be trained in advance on person features so that it can recognize feature targets appearing in a frame of the video. As an embodiment, the network may be trained on human limbs, so that the trained network can subsequently extract limb targets of persons from the video stream, thereby obtaining the targets to be detected.
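The first way above relies on a background model; GMM and ViBe themselves are beyond the scope of a short sketch, so the following Python fragment illustrates the general idea with a much simpler stand-in, a running-average background model with per-pixel differencing. All names are illustrative assumptions, not from the source.

```python
def update_background(background, frame, alpha=0.05):
    """Running-average background model (a simple stand-in for GMM/ViBe):
    blend the new frame into the background with learning rate alpha."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]


def foreground_mask(background, frame, diff_threshold):
    """Pixels differing from the background by more than the threshold
    are marked as foreground candidates."""
    return [[abs(f - b) > diff_threshold for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]


# a bright object appears in the top-left pixel of an empty scene
bg = [[0.0, 0.0], [0.0, 0.0]]
frame = [[100.0, 0.0], [0.0, 0.0]]
mask = foreground_mask(bg, frame, diff_threshold=50)  # top-left is foreground
bg = update_background(bg, frame)                     # background adapts slowly
```

In a real deployment one would use a proper mixture model per pixel; the structure of the loop (update the model, then threshold the difference) is the same.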
Second step: track each target to be detected in the foreground image of each frame to obtain the tracking linked list.
After the targets to be detected are obtained, they can be tracked and the tracking results recorded in the tracking list.
In a specific implementation, the detection targets may be tracked by Kalman filtering, particle filtering, multi-target tracking techniques, or the like.
In a specific implementation, a single tracking linked list may be used, with one tracking entry generated for each detection target; alternatively, a separate tracking linked list may be generated for each detection target.
The tracking linked list in the embodiments of the present application includes at least a mapping relationship among the identifier of the target to be detected, the video frame identifiers of the video frames in which the target appears, and the historical coordinates of the target.
The historical coordinate of a target to be detected is the coordinate of the center point of the circumscribed rectangle of the target's contour.
In a specific implementation, the tracking linked list may take the form of a table. Referring to Table 1, Table 1 shows the contents of the tracking linked list in an embodiment of the present application.
Table 1
Table 1 gives the historical coordinates of target 1 to be detected when it appears in video frames 2, 3, and 8, i.e. the two-dimensional coordinate corresponding to each moment; for example, the coordinate at time t1 may be (3, 5).
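The tracking linked list described above can be sketched in Python as follows. This is an illustrative, non-normative model; all names are hypothetical. As stated above, the historical coordinate stored is the center of the target's circumscribed rectangle.

```python
from dataclasses import dataclass, field


@dataclass
class TrackEntry:
    """One observation of a tracked target: frame id, timestamp, center point."""
    frame_id: int
    timestamp: float
    center: tuple  # (x, y) center of the target's circumscribed rectangle


@dataclass
class TrackList:
    """Tracking list: maps a target identifier to its chronological observations."""
    tracks: dict = field(default_factory=dict)

    def record(self, target_id, frame_id, timestamp, bbox):
        # bbox = (x_min, y_min, x_max, y_max); the historical coordinate is
        # the center point of the circumscribed rectangle
        x = (bbox[0] + bbox[2]) / 2.0
        y = (bbox[1] + bbox[3]) / 2.0
        self.tracks.setdefault(target_id, []).append(
            TrackEntry(frame_id, timestamp, (x, y)))


# usage, mirroring Table 1: target 1 seen in frames 2 and 3
tl = TrackList()
tl.record(1, 2, 13.0, (2, 4, 4, 6))   # center (3, 5), as at time t1
tl.record(1, 3, 15.0, (5, 8, 7, 10))  # center (6, 9), as at time t2
```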
Step 102: the detection device determines the moving speed of each target to be detected according to the tracking linked list.
In this step, determining the moving speed of each target to be detected according to the tracking linked list includes:
calculating, according to the historical coordinates of each target to be detected in the tracking list, the moving speed of each target within a preset duration.
In a specific implementation, the moving speed v[t] of a target to be detected may be calculated according to the following formula:
v[t] = √((x[t] − x[t−T])² + (y[t] − y[t−T])²) / T
where (x[t], y[t]) is the historical coordinate of the target to be detected in the tracking list at time t, (x[t−T], y[t−T]) is the historical coordinate of the target in the tracking list at time t−T, and T is the preset duration.
Taking Table 1 as an example, suppose the time interval between t1 and t2 is the preset duration T, and compute the moving speed v[t2] of target 1 at time t2: if target 1 has coordinate (6, 9) at time t2 (13:45:38) and coordinate (3, 5) at time t1 (13:45:36), then its moving speed at time t2 is √((6 − 3)² + (9 − 5)²) / 2 = 2.5 cm/s. Displacement here is measured in centimeters; in practice the unit of movement may be chosen as required.
In the embodiments of the present application, the moving speed of a detection target is computed starting from time T, and the speed before time T can be ignored. T may be configured as required; a smaller T makes the speed calculation more precise.
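The speed formula above can be sketched in Python as follows; the function name and the data layout (a mapping from time to historical coordinate) are illustrative assumptions. The example reproduces the 2.5 cm/s calculation from the description.

```python
import math


def moving_speed(track, t, T):
    """v[t]: Euclidean displacement between the historical coordinates at
    times t and t-T, divided by the preset duration T."""
    x_t, y_t = track[t]
    x_p, y_p = track[t - T]
    return math.hypot(x_t - x_p, y_t - y_p) / T


# as in the description: target 1 at (3, 5) at t1 and (6, 9) at t2,
# with T = 2 seconds between t1 and t2
track = {0: (3, 5), 2: (6, 9)}
print(moving_speed(track, t=2, T=2))  # 2.5
```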
Step 103: the detection device judges whether each moving speed is less than the first preset threshold, and determines the static region of a frame of the image according to the multiple regions corresponding, in that frame, to the multiple targets to be detected whose moving speed is less than the first preset threshold.
In this step, determining the static region of a frame according to the multiple regions corresponding to the multiple targets whose moving speed is less than the first preset threshold includes:
when a frame contains K targets to be detected whose moving speed is less than the first preset threshold, taking the maximal circumscribed rectangle of the union of the K targets' regions in that frame as the static region of the frame, or directly taking the union of the K targets' regions in that frame as the static region. K is greater than a fourth preset value, which is set as required in practice, for example 5 or 8.
Referring to FIG. 2, FIG. 2 is a schematic diagram of a static region in an embodiment of the present application. In FIG. 2, the circumscribed rectangle of the union of the target regions of seven detection targets is taken as the static region; the region of each detection target is also marked with a rectangle, and the regions of detection targets 6 and 7 overlap.
In the embodiments of the present application, only when multiple targets have moving speeds below the first preset value is the union of their corresponding regions taken as a static region, and only images containing a static region become candidate gathering images for secondary recognition; this prevents the secondary recognition of people-gathering images from being performed too frequently.
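The static-region rule above, i.e. taking the maximal circumscribed rectangle of the union of the slow targets' regions only when their number K exceeds the fourth preset value, can be sketched as follows (names are illustrative assumptions):

```python
def static_region(boxes, k_min):
    """Return the maximal circumscribed rectangle (x0, y0, x1, y1) of the
    union of the given bounding boxes, or None when the number K of slow
    targets does not exceed k_min (the fourth preset value)."""
    if len(boxes) <= k_min:
        return None
    x0s, y0s, x1s, y1s = zip(*boxes)
    return (min(x0s), min(y0s), max(x1s), max(y1s))


# seven slow targets, as in FIG. 2; k_min = 5 as suggested in the text
boxes = [(0, 0, 2, 2), (1, 1, 5, 4), (3, 0, 6, 3),
         (2, 2, 4, 5), (5, 1, 7, 4), (6, 2, 8, 5), (7, 3, 9, 6)]
print(static_region(boxes, k_min=5))  # (0, 0, 9, 6)
```

Overlapping target regions, such as those of targets 6 and 7 in FIG. 2, need no special handling here: the circumscribed rectangle of the union depends only on the extreme coordinates.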
Step 104: the detection device takes images of the video stream that contain a static region as candidate gathering images; when it determines that the number N of candidate gathering images reaches the second preset threshold, it judges with the preset people-gathering prediction model whether each candidate gathering image is a people-gathering image, and counts the number M of images judged to be people-gathering images.
The people-gathering prediction model in the embodiments of the present application is trained with multiple images of gathering events and multiple images of non-gathering events as sample images. The specific training process is as follows.
The people-gathering prediction model consists of a convolutional neural network model and a regression learning model. When an image is input to the convolutional neural network model, it outputs gathering and non-gathering confidences. A gathering-confidence threshold is set in the regression learning model, which takes as input the output of the convolutional neural network model, i.e. the gathering and non-gathering confidences, and outputs an identifier indicating whether the image is a people-gathering image: when the gathering confidence is greater than the gathering-confidence threshold, the people-gathering-image identifier is output; otherwise the non-people-gathering-image identifier is output.
The gathering-confidence threshold may be set as required and is not limited in the embodiments of the present application.
The convolutional neural network model is built as follows:
A images of gathering events and B images of non-gathering events are used as sample data for learning in the convolutional neural network, which thereby acquires the ability to distinguish gathering images from non-gathering images, and the convolutional neural network model is established. The convolutional neural network learns from the input data and category labels. Deep learning networks such as GoogLeNet, ResNet, VGG, and AlexNet may be used.
When the detection device determines that the number N of candidate gathering images has not reached the second preset threshold, it continues to acquire candidate gathering images.
M and N are set according to the actual application environment and are not specifically limited; M is an integer not greater than N, and N is an integer greater than 0.
When an image is input to the trained people-gathering prediction model, the model outputs an identifier indicating whether the image is a people-gathering image.
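The decision stage of the prediction model described above (CNN confidences followed by thresholding in the regression learning model) can be sketched, for the thresholding stage only, as follows. The interface is a hypothetical assumption; the CNN producing the confidence pair is out of scope here.

```python
def classify_candidate(confidences, gathering_threshold):
    """Thresholding stage of the prediction model: `confidences` is the
    (gathering, non_gathering) confidence pair produced by the CNN; the
    image is labeled a people-gathering image when the gathering
    confidence exceeds the preset gathering-confidence threshold."""
    gathering_conf, non_gathering_conf = confidences
    return gathering_conf > gathering_threshold


print(classify_candidate((0.83, 0.17), gathering_threshold=0.6))  # True
```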
Step 105: when it is determined that M is greater than the third preset threshold, the detection device determines that a people-gathering event has occurred.
When M is not greater than the third preset value, the current candidate gathering images are cleared, and candidate gathering images are acquired again from the video stream captured in real time.
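The candidate-collection and M/N decision logic of steps 104 and 105 can be sketched as a minimal loop; the frame representation (a dict with an optional static region) and the model callable are illustrative assumptions, not from the source.

```python
def detect_gathering(frames, model, n_threshold, m_threshold):
    """Collect frames containing a static region as candidate gathering
    images; once N candidates are collected, run the prediction model on
    each and count M positives. Report a gathering event when M exceeds
    the third preset threshold; otherwise clear the candidates and start
    collecting again."""
    candidates = []
    for frame in frames:
        if frame.get("static_region") is not None:
            candidates.append(frame)
        if len(candidates) >= n_threshold:               # N reached
            m = sum(1 for f in candidates if model(f))   # count positives
            if m > m_threshold:
                return True                              # gathering event
            candidates.clear()                           # re-acquire candidates
    return False


# usage with a stub model that reads a precomputed label from the frame
frames = [{"static_region": (0, 0, 1, 1), "label": True}] * 3
event = detect_gathering(frames, lambda f: f["label"],
                         n_threshold=3, m_threshold=2)
```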
In the embodiments of the present application, when the detection device determines that a people-gathering event has occurred, the method further includes:
the detection device issues alarm information, which includes a gathering-event flag and the location region where the gathering event occurred.
The gathering-event flag indicates that a gathering event is currently occurring; it may be implemented with text or symbols, or with red, yellow, or other patterns, and is not limited to these implementations.
The location region of the gathering event may be indicated by marking the people-gathering region in an image; a specific implementation of the present application gives, but is not limited to, the following representation:
the image with the highest gathering confidence is selected as the image to display, and the static region in that image is shown.
When an alarm is issued, an administrator may take corresponding precautionary measures according to the actual situation, such as raising an alarm, warning, or crowd guidance.
Processing policies may also be preconfigured on the device for alarm information, i.e. a correspondence between alarm information and processing policies is configured.
When an alarm occurs, the alarm information is matched against the configured alarm information; on a successful match, the processing policy corresponding to that alarm information is executed, e.g. raising an alarm, or warning and guiding the crowd by broadcast.
Timely alarms make it possible to notify the gathered people quickly, ensuring their personal safety.
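The alarm-to-policy matching described above can be sketched as a simple configured mapping with a fallback to manual handling; the policy names and alarm types below are purely hypothetical.

```python
# hypothetical preconfigured correspondence between alarm info and policies
policies = {
    "gathering": "broadcast_warning_and_guide",
    "intrusion": "notify_security",
}


def handle_alarm(alarm_type, policies, default="notify_administrator"):
    """Match the alarm against the configured policies; on a successful
    match return the corresponding processing policy, otherwise fall back
    to manual handling by the administrator."""
    return policies.get(alarm_type, default)


print(handle_alarm("gathering", policies))  # broadcast_warning_and_guide
```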
Based on the same inventive concept, the present application further provides a people-gathering detection device. Referring to FIG. 3, FIG. 3 is a schematic structural diagram of a device applying the above technique in an embodiment of the present application. The device includes: an acquisition unit 301, a first determination unit 302, a second determination unit 303, a third determination unit 304, a fourth determination unit 305, a statistics unit 306, and a fifth determination unit 307;
the acquisition unit 301 is configured to extract targets to be detected from the captured video stream and track the targets to be detected to obtain a tracking linked list;
the first determination unit 302 is configured to determine the moving speed of each target to be detected according to the tracking linked list obtained by the acquisition unit 301;
the second determination unit 303 is configured to judge whether the moving speed determined by the first determination unit 302 is less than the first preset threshold, and to determine the static region of a frame of the image according to the multiple regions corresponding, in that frame, to multiple targets to be detected whose moving speed is less than the first preset threshold;
the third determination unit 304 is configured to take images of the video stream that contain a static region determined by the second determination unit 303 as candidate gathering images, and to determine whether the number N of candidate gathering images reaches the second preset threshold;
the fourth determination unit 305 is configured to judge, with the preset people-gathering prediction model, whether each candidate gathering image is a people-gathering image when the third determination unit 304 determines that the number N of candidate gathering images reaches the second preset threshold;
the statistics unit 306 is configured to count the number M of images judged by the fourth determination unit 305 to be people-gathering images;
the fifth determination unit 307 is configured to determine that a people-gathering event has occurred when the number M counted by the statistics unit 306 is determined to be greater than the third preset threshold.
Preferably,
when extracting targets to be detected from the captured video stream and tracking them to obtain the tracking linked list, the acquisition unit 301 is specifically configured to: obtain the foreground image of each frame of the video stream; and track each target to be detected in the foreground image of each frame to obtain the tracking linked list.
Preferably,
when determining the moving speed of each target to be detected according to the tracking linked list, the first determination unit 302 is specifically configured to calculate, according to the historical coordinates of each target in the tracking list, the moving speed of each target within the preset duration; the tracking linked list includes the mapping relationship among the identifier of the target to be detected, the video frame identifiers of the video frames in which the target appears, and the historical coordinates of the target.
Preferably,
when calculating the moving speed of each target within the preset duration according to its historical coordinates in the tracking list, the first determination unit 302 is specifically configured to calculate the moving speed v[t] of the target according to the formula v[t] = √((x[t] − x[t−T])² + (y[t] − y[t−T])²) / T, where (x[t], y[t]) is the historical coordinate of the target to be detected in the tracking list at time t, (x[t−T], y[t−T]) is the historical coordinate of the target in the tracking list at time t−T, and T is the preset duration.
Preferably,
the fifth determination unit 307 is further configured to clear the current candidate gathering images when M is determined to be not greater than the third preset value, and to trigger the acquisition unit 301 to acquire candidate gathering images again from the video stream captured in real time.
Preferably,
the people-gathering prediction model is trained with multiple images of gathering events and multiple images of non-gathering events as sample images.
The units of the above embodiments may be integrated or deployed separately; they may be combined into one unit, or further split into multiple sub-units.
In addition, an embodiment of the present application further provides a non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause the processor to perform the steps of the people-gathering detection method.
An electronic device is also provided, comprising the non-transitory computer-readable storage medium and a processor that can access the non-transitory computer-readable storage medium.
In summary, the present application combines static-region recognition with deep learning to perform secondary recognition on multiple frames of the video, makes full use of the process information of people-gathering events, and can improve the accuracy of people-gathering detection.
The embodiments of the present application adopt offline deep learning with multi-frame secondary recognition, fully learning the process information of event occurrence so that event judgment is more accurate.
Combining the deep learning method with the auxiliary judgment of static-region recognition, as distinct from judging with a basic-information method or a deep learning method alone, can effectively reduce false alarms. At the same time, the location of the gathering region can be output, so the information output is richer, which facilitates alarm post-processing.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within its scope of protection.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811523516.9 (publication CN111325048B) | 2018-12-13 | 2018-12-13 | Personnel gathering detection method and device |
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201811523516.9ACN111325048B (en) | 2018-12-13 | 2018-12-13 | Personnel gathering detection method and device |
| Publication Number | Publication Date |
|---|---|
| CN111325048A CN111325048A (en) | 2020-06-23 |
| CN111325048Btrue CN111325048B (en) | 2023-05-26 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201811523516.9AActiveCN111325048B (en) | 2018-12-13 | 2018-12-13 | Personnel gathering detection method and device |
| Country | Link |
|---|---|
| CN (1) | CN111325048B (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111985385B (en)* | 2020-08-14 | 2023-08-29 | 杭州海康威视数字技术股份有限公司 | Behavior detection method, device and equipment |
| CN112270671B (en)* | 2020-11-10 | 2023-06-02 | 杭州海康威视数字技术股份有限公司 | Image detection method, device, electronic equipment and storage medium |
| CN113536932A (en)* | 2021-06-16 | 2021-10-22 | 中科曙光国际信息产业有限公司 | Crowd gathering prediction method and device, computer equipment and storage medium |
| CN113837034A (en)* | 2021-09-08 | 2021-12-24 | 云从科技集团股份有限公司 | Aggregated population monitoring method, device and computer storage medium |
| CN114494350B (en)* | 2022-01-28 | 2022-10-14 | 北京中电兴发科技有限公司 | Personnel gathering detection method and device |
| CN115761636A (en)* | 2022-11-21 | 2023-03-07 | 苏州浪潮智能科技有限公司 | Method, system, equipment and storage medium for detecting people gathering |
| CN116844100B (en)* | 2023-04-10 | 2025-09-30 | 海信集团控股股份有限公司 | Event detection method and electronic device |
| CN117079192B (en)* | 2023-10-12 | 2024-01-02 | 东莞先知大数据有限公司 | Method, device, equipment and medium for estimating number of rope skipping when personnel are shielded |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101325690A (en)* | 2007-06-12 | 2008-12-17 | 上海正电科技发展有限公司 | Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow |
| CN102164270A (en)* | 2011-01-24 | 2011-08-24 | 浙江工业大学 | Intelligent video monitoring method and system capable of exploring abnormal events |
| CN103473791A (en)* | 2013-09-10 | 2013-12-25 | 惠州学院 | Method for automatically recognizing abnormal velocity event in surveillance video |
| CN103839065A (en)* | 2014-02-14 | 2014-06-04 | 南京航空航天大学 | Extraction method for dynamic crowd gathering characteristics |
| WO2016014724A1 (en)* | 2014-07-23 | 2016-01-28 | Gopro, Inc. | Scene and activity identification in video summary generation |
| CN105447458A (en)* | 2015-11-17 | 2016-03-30 | 深圳市商汤科技有限公司 | Large scale crowd video analysis system and method thereof |
| WO2018133666A1 (en)* | 2017-01-17 | 2018-07-26 | 腾讯科技(深圳)有限公司 | Method and apparatus for tracking video target |
| CN108810616A (en)* | 2018-05-31 | 2018-11-13 | 广州虎牙信息科技有限公司 | Object localization method, image display method, device, equipment and storage medium |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2012139269A1 (en)* | 2011-04-11 | 2012-10-18 | Intel Corporation | Tracking and recognition of faces using selected region classification |
| US9087386B2 (en)* | 2012-11-30 | 2015-07-21 | Vidsys, Inc. | Tracking people and objects using multiple live and recorded surveillance camera video feeds |
| JP6276519B2 (en)* | 2013-05-22 | 2018-02-07 | 株式会社 日立産業制御ソリューションズ | Person counting device and human flow line analyzing device |
| CN105872477B (en)* | 2016-05-27 | 2018-11-23 | 北京旷视科技有限公司 | video monitoring method and video monitoring system |
| Publication number | Publication date |
|---|---|
| CN111325048A (en) | 2020-06-23 |
| Publication | Publication Date | Title |
|---|---|---|
| CN111325048B (en) | | Personnel gathering detection method and device |
| CN108062349B (en) | | Video surveillance method and system based on video structured data and deep learning |
| CN111091098B (en) | | Training method of detection model, detection method and related device |
| CN108009473B (en) | | Video structured processing method, system and storage device based on target behavior attribute |
| CN103824070B (en) | | Rapid pedestrian detection method based on computer vision |
| CN108052859B (en) | | Method, system and device for abnormal behavior detection based on clustered optical flow features |
| CN112084963B (en) | | Monitoring and early warning method, system and storage medium |
| CN111860318A (en) | | Construction site pedestrian loitering detection method, device, equipment and storage medium |
| CN108053427A (en) | | Improved multi-object tracking method, system and device based on KCF and Kalman |
| CN111325089A (en) | | Method and apparatus for tracking object |
| CN107133607B (en) | | Crowd counting method and system based on video surveillance |
| CN104298964B (en) | | Method and device for quickly identifying human behavior and action |
| CN111027370A (en) | | Multi-target tracking and behavior analysis detection method |
| CN112232211A (en) | | Intelligent video surveillance system based on deep learning |
| CN108197575A (en) | | Abnormal behavior recognition method and device based on target detection and skeleton points |
| CN114648748A (en) | | Intelligent identification method and system for illegal motor vehicle parking based on deep learning |
| KR101472674B1 (en) | | Method and apparatus for video surveillance based on detecting abnormal behavior using extraction of trajectories from crowd in images |
| CN115116127A (en) | | Fall detection method based on computer vision and artificial intelligence |
| CN112733690A (en) | | High-altitude parabolic detection method and device and electronic equipment |
| CN113362374A (en) | | High-altitude parabolic detection method and system based on target tracking network |
| CN108540752A (en) | | Method, device and system for identifying a target object in video surveillance |
| CN116311166A (en) | | Traffic obstacle recognition method and device and electronic equipment |
| CN115661735A (en) | | Target detection method and device and computer-readable storage medium |
| CN104778676A (en) | | Depth-ranging-based moving target detection method and system |
| CN110516600A (en) | | Bus passenger flow detection method based on face detection |
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant | ||
| TR01 | Transfer of patent right | ||
| TR01 | Transfer of patent right | Effective date of registration:20250729 Address after:Rooms 602 and 605, No. 85 Xiangxue Avenue Middle, Huangpu District, Guangzhou City, Guangdong Province 510000 Patentee after:Guangzhou Gaohang Technology Transfer Co.,Ltd. Country or region after:China Address before:Hangzhou City, Zhejiang province 310051 Binjiang District Qianmo Road No. 555 Patentee before:Hangzhou Hikvision Digital Technology Co.,Ltd. Country or region before:China |