





Technical Field
The present invention relates to the field of construction site production safety, and in particular to a machine-vision-based method and system for monitoring personnel intrusion into dangerous areas.
Background Art
Detection of scene objects in images and video has become a research hotspot in artificial intelligence and computer vision. Production safety, meanwhile, remains a matter of intense public concern: nearly one million safety accidents every year place an enormous burden on society and on families. On construction sites, many of these accidents are caused by workers violating rules and regulations. Pits and excavations are among the places on an industrial site where personnel are most easily injured, and they therefore bear directly on workers' safety. Under complex on-site conditions, with lax management and insufficient safety awareness among workers, a pit is an extremely dangerous feature: a moment's inattention near a drop of ten meters or more can easily lead to a fall. Without stronger supervision, such hazards can inflict heavy losses on both the workers involved and the construction enterprise.
In traditional construction site safety practice, dangerous areas are commonly handled by hanging warning signs, erecting fences, or posting on-site supervisors, and each of these methods has its own shortcomings. A warning sign may simply go unnoticed: a careless worker can wander into the dangerous area, and with no timely reminder or early-warning mechanism, a momentary lapse can cause great losses. A fence does not stop a deliberate intruder from entering and damaging the construction area, and management may not discover the intrusion in time, leading to major property losses. On-site supervisors are themselves prone to laxity and oversight, and posting them wastes manpower. In short, none of these methods meets the application needs of intelligent construction sites in today's era of rapidly advancing artificial intelligence, so there is an urgent need for a better approach.
Summary of the Invention
The present invention provides a machine-vision-based method and system for monitoring personnel intrusion into dangerous areas, which solves the technical problems of low efficiency and high labor cost in existing construction site safety supervision.
To solve the above technical problems, the technical scheme proposed by the present invention is as follows.
A machine-vision-based method for monitoring personnel intrusion into dangerous areas, comprising the following steps:
acquiring a monitoring image of a target construction area, inputting the monitoring image into a preset dangerous target detection model, and determining whether a dangerous target exists in the target construction area; if a dangerous target is determined to exist, obtaining the coordinates of the dangerous target and determining the coordinates of the dangerous area from them;
identifying personnel coordinates in the monitoring image, comparing the personnel coordinates with the coordinates of the dangerous area, and determining whether any person is inside the dangerous area; if a person is determined to be inside the dangerous area, sending an alarm signal to the user.
Preferably, the dangerous target detection model uses the YOLOv4 network as its base framework; the training samples are monitoring images annotated with dangerous target categories and detection boxes; the input is a monitoring image; and the output is the monitoring image annotated with the dangerous target categories and their prediction boxes.
Preferably, inputting the monitoring image of the target construction area into the preset dangerous target detection model and determining whether a dangerous target exists in the target construction area comprises:
Step 1: resizing the monitoring image to p*p, where p is an integer multiple of 32;
Step 2: dividing the resized monitoring image into an s*s grid, assigning to each grid cell B bounding boxes to be predicted, and training the model with YOLOv4 to obtain, for each bounding box, its position, its class information c, and its confidence value;
wherein the position of a bounding box is recorded as (x, y, w, h), where x and y are the coordinates of the box center and w and h are the box width and height; the confidence is defined as:

C_i^j = Pr(object) × IOU_pred^truth
where C_i^j denotes the confidence of the j-th bounding box of the i-th grid cell, Pr(object) denotes the probability that the current bounding box contains a dangerous target, and IOU_pred^truth denotes the IOU (intersection-over-union) ratio between the real detection box and the predicted box; each grid cell also predicts C conditional class probabilities Pr(Class_i | object);
The probability that a given class appears in a prediction box, together with how well the prediction box fits the target, is expressed as:

Pr(Class_i | object) × Pr(object) × IOU_pred^truth = Pr(Class_i) × IOU_pred^truth

where Class_i denotes the i-th class;
Step 3: normalizing the position coordinates (x, y, w, h) of the prediction boxes obtained in Step 2 to obtain normalized position coordinates (X, Y, W, H);
Step 4: applying non-maximum suppression to the prediction boxes whose confidence meets the threshold, and annotating the monitoring image with the dangerous target categories and their prediction boxes.
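The confidence filtering and non-maximum suppression of Step 4 can be sketched as follows. This is a minimal illustration only: the corner-format boxes and the two threshold values are assumptions, not taken from the patent's model.

```python
def iou(a, b):
    # Boxes in corner format (x1, y1, x2, y2): intersection over union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    # Keep boxes whose confidence meets the threshold, then greedily
    # suppress lower-scoring boxes that overlap a kept box too strongly.
    order = sorted((i for i, s in enumerate(scores) if s >= conf_thresh),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

The indices returned by `nms` identify the surviving prediction boxes, which are then drawn onto the monitoring image together with their categories.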
Preferably, each dangerous target in the target construction area is provided with a corresponding warning sign; the dangerous target detection model recognizes each dangerous target by extracting both the features of the dangerous target itself and the features of its corresponding warning sign.
Preferably, the dangerous targets include any one or a combination of the following: high-voltage electrical equipment, articles, or places; flammable and explosive articles, equipment, or places; and dangerous operation areas. Obtaining the coordinates of a dangerous target and determining the coordinates of the dangerous area from them comprises the following steps:
extracting the coordinates of the dangerous target's prediction box, determining a safety distance according to the category of the dangerous target, and delimiting the dangerous area as the region centered on the prediction box coordinates with the safety distance as its radius.
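A minimal sketch of delimiting the danger zone from a hazard's prediction box. The category names and the pixel safety distances below are illustrative assumptions, since the patent only states that the distance depends on the hazard category:

```python
# Safety distance per hazard category (pixel radii assumed for illustration).
SAFETY_RADIUS = {"high_voltage": 120, "flammable": 150, "pit": 100}

def danger_zone(pred_box, category):
    # pred_box = (x, y, w, h): (x, y) is the center of the hazard's
    # prediction box. The danger area is the circle centered there whose
    # radius is the category's safety distance.
    x, y, w, h = pred_box
    return (x, y, SAFETY_RADIUS[category])

def zone_bbox(zone):
    # Axis-aligned bounding box of the circular zone, convenient for a
    # later rectangle-overlap test against person detections.
    x, y, r = zone
    return (x - r, y - r, x + r, y + r)
```

`zone_bbox` converts the circular zone into the rectangular coordinate range that the later person/zone overlap test compares against.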
Preferably, comparing the personnel coordinates with the coordinates of the dangerous area to determine whether any person is inside the dangerous area comprises the following steps:
calculating the degree of overlap between the person and the dangerous area from the personnel coordinates and the dangerous area coordinates:

J_area = (R_person ∩ R_riskarea) / (R_person ∪ R_riskarea)

where R_person is the coordinate range of the person detected in the image by the target detector, R_riskarea is the coordinate range of the dangerous area automatically delimited in the image by the target detector, and J_area denotes the degree of overlap between the two;
determining, from the degree of overlap between the person and the dangerous area, whether the person is inside the dangerous area by means of a threshold function:

F_area = 1[J_area ≥ t]

where F_area indicates whether the person is judged to be in the dangerous area, and t is the threshold on the degree of overlap between the dangerous area and the person in the image.
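The overlap J_area and the gate F_area can be sketched as follows, with both regions taken as axis-aligned rectangles (x1, y1, x2, y2); the threshold t = 0.1 is an assumed value:

```python
def jaccard(r_person, r_riskarea):
    # J_area: area of intersection over area of union of the two regions.
    ix1 = max(r_person[0], r_riskarea[0])
    iy1 = max(r_person[1], r_riskarea[1])
    ix2 = min(r_person[2], r_riskarea[2])
    iy2 = min(r_person[3], r_riskarea[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (r_person[2] - r_person[0]) * (r_person[3] - r_person[1])
    area_r = (r_riskarea[2] - r_riskarea[0]) * (r_riskarea[3] - r_riskarea[1])
    return inter / (area_p + area_r - inter)

def in_danger_area(r_person, r_riskarea, t=0.1):
    # F_area = 1[J_area >= t]: True when the person counts as inside.
    return jaccard(r_person, r_riskarea) >= t
```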
Preferably, the method further comprises the following steps:
after a person is determined to be inside the dangerous area, tracking the person with the DeepSORT target tracking algorithm:
Step 1: allocating a track index set T = {1, ..., N} and a detection index set D = {1, ..., M}, and initializing a maximum number of detection cycles A_max, where 1, ..., N are the features of the 1st to N-th persons in the previous monitoring frame and 1, ..., M are the features of the 1st to M-th persons in the following frame;
Step 2: computing the cost matrix C = [c_i,j] between the feature of the i-th person in the previous frame and the feature of the j-th person in the following frame, where i = 1, ..., N and j = 1, ..., M;
Step 3: computing the cost matrix B = [b_i,j] of squared Mahalanobis distances between the Kalman-predicted mean track position of the tracking box for the i-th person feature in the previous frame and the actual detection bounding box for the j-th person feature in the following frame;
Step 4: performing two threshold checks: entries of the cosine cost matrix whose Mahalanobis distance between tracking box and detection box exceeds its threshold are set to infinity, and entries whose cosine distance exceeds its threshold are set to a large value;
Step 5: matching tracking boxes to detection boxes with the Hungarian algorithm and returning the matching result;
Step 6: screening the matching result and deleting pairs whose cosine distance is too large;
Step 7: when the current number of detection cycles exceeds the maximum A_max, outputting the preliminary matching result; otherwise returning to Step 2.
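Steps 4 through 6 of the tracking loop can be sketched as follows. The threshold values are assumed, and a simple greedy assignment stands in for the Hungarian algorithm used in the actual method:

```python
INF = float("inf")

def gated_match(cos_cost, maha_cost, t_maha=9.4877, t_cos=0.7):
    # Step 4: gate the cosine cost matrix -- entries whose squared
    # Mahalanobis distance exceeds its threshold are set to infinity.
    n, m = len(cos_cost), len(cos_cost[0])
    c = [[cos_cost[i][j] if maha_cost[i][j] <= t_maha else INF
          for j in range(m)] for i in range(n)]
    # Steps 5-6: assign each track to its cheapest free detection (a greedy
    # stand-in for the Hungarian algorithm) and drop pairs whose cosine
    # distance is still above the threshold.
    pairs, used = [], set()
    for i in sorted(range(n), key=lambda i: min(c[i])):
        free = [j for j in range(m) if j not in used]
        if not free:
            break
        j = min(free, key=lambda j: c[i][j])
        if c[i][j] <= t_cos:
            pairs.append((i, j))
            used.add(j)
    return pairs
```

A production implementation would use an optimal assignment solver (e.g. a true Hungarian algorithm) rather than this greedy pass, but the gating and screening logic is the same.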
Preferably, the weights of the Hungarian algorithm are balanced between a motion matching degree and an appearance matching degree:
calculating the motion matching degree d^(1)(i,j) between the motion feature of the i-th person in the previous frame and the motion feature of the j-th person in the following frame, where d^(1) is computed as:

d^(1)(i,j) = (d_j − y_i)^T · S_i^(−1) · (d_j − y_i)

where the superscript (1) marks the motion metric; the value of the expression is the motion matching degree between the j-th detection box and the i-th track; S_i^(−1) is the inverse of the covariance matrix of the track's observation-space distribution at the current moment as predicted by the Kalman filter; d_j is the bounding box of the j-th detection; and y_i is the track's predicted bounding box at the current moment;
inputting the motion matching degree d^(1)(i,j) into a preset motion matching threshold function to determine whether the motion feature of the i-th person in the previous frame is successfully associated with the motion feature of the j-th person in the following frame;
wherein the motion matching threshold function is:

b_{i,j}^(1) = 1[d^(1)(i,j) ≤ t^(1)]

where b_{i,j}^(1) decides the links of the initial matching and t^(1) is the threshold set for the motion matching degree; d^(1)(i,j) ≤ t^(1) indicates that the motion feature of the i-th person in the previous frame is successfully associated with that of the j-th person in the following frame;
calculating the appearance matching degree d^(2)(i,j) between the appearance feature of the i-th person in the previous frame and that of the j-th person in the following frame, where d^(2)(i,j) is computed as:

d^(2)(i,j) = min{ 1 − r_j^T · r_k^(i) | r_k^(i) ∈ R_i }

where r_j is the surface feature descriptor of the j-th detection, R_i is the gallery storing the most recent L_k descriptors of the i-th track, and r_k^(i) is the k-th surface feature descriptor of the i-th track; the expression gives the minimum cosine distance between the i-th track and the j-th detection;
inputting the appearance matching degree d^(2)(i,j) into a preset appearance matching threshold function to determine whether the appearance feature of the i-th person in the previous frame is successfully associated with that of the j-th person in the following frame, where the appearance matching threshold function is:

b_{i,j}^(2) = 1[d^(2)(i,j) ≤ t^(2)]

where t^(2) is the threshold set for the appearance matching degree, and d^(2)(i,j) ≤ t^(2) indicates that the appearance feature of the i-th person in the previous frame is successfully associated with that of the j-th person in the following frame;
when both the motion matching degree and the appearance matching degree between the i-th person in the previous frame and the j-th person in the following frame are successfully associated, calculating the comprehensive matching degree c_{i,j} from the two, where c_{i,j} is computed as follows:

c_{i,j} = λ·d^(1)(i,j) + (1−λ)·d^(2)(i,j)

where c_{i,j} is the comprehensive matching degree between the i-th person in the previous frame and the j-th person in the following frame, λ is a preset hyperparameter set from practical experience, d^(1)(i,j) is the motion matching degree, and d^(2)(i,j) is the appearance matching degree;
calculating, from the motion matching threshold function and the appearance matching threshold function, the comprehensive gate value b_{i,j} between the i-th person in the previous frame and the j-th person in the following frame, and judging from it whether the two are successfully associated; if so, the i-th person in the previous frame is judged to match the j-th person in the following frame. The comprehensive gate function is:

b_{i,j} = Π_{m=1}^{2} b_{i,j}^(m)

where the preliminary match is considered successful only when b_{i,j} equals 1.
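The comprehensive matching degree c_{i,j} and the comprehensive gate b_{i,j} can be sketched as follows; λ = 0.5 and the two threshold values are assumptions (the patent sets them from practical experience):

```python
def combined_cost(d1, d2, lam=0.5):
    # c_{i,j} = lambda * d^(1)(i,j) + (1 - lambda) * d^(2)(i,j).
    return lam * d1 + (1.0 - lam) * d2

def combined_gate(d1, d2, t1=9.4877, t2=0.7):
    # b_{i,j} = b^(1) * b^(2): the pair is admissible only when both the
    # motion metric and the appearance metric fall within their thresholds.
    return int(d1 <= t1) * int(d2 <= t2)
```

Only pairs for which `combined_gate` returns 1 contribute their `combined_cost` to the assignment; all other pairs are excluded from matching.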
Preferably, when a person is determined to be inside the dangerous area, the method further comprises the following steps:
pushing the intrusion picture to the management personnel and archiving it for later review; and having a warning device installed at the dangerous area issue a warning to drive the intruder away and prevent the intruder from going further in.
A computer system, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program.
The present invention has the following beneficial effects:
1. The machine-vision-based method and system for monitoring personnel intrusion into dangerous areas of the present invention acquire a monitoring image of a target construction area, input it into a preset dangerous target detection model, and determine whether a dangerous target exists in the area; if so, the coordinates of the dangerous target are obtained and the coordinates of the dangerous area are determined from them. Personnel coordinates are then identified in the monitoring image and compared with the dangerous area coordinates to determine whether any person is inside the dangerous area; if so, an alarm signal is sent to the user. The invention automatically identifies dangerous areas and determines whether anyone is inside them, which reduces the labor cost of safety supervision and improves monitoring efficiency.
2. In a preferred scheme, when a person intrudes into a dangerous area, the system records every intrusion so that management personnel can review it later, and promptly sends a warning message about the intrusion to the relevant managers, who can respond and deploy work in time; meanwhile, an alarm device monitoring the dangerous area emits a buzzer warning or other response when a person intrudes, stopping the person from going further in. Deploying this method and system brings computer vision into present-day production: it frees up manpower, improves efficiency, and reduces safety accidents, letting workers focus on their jobs with greater peace of mind while managers keep better control of the whole site, so that production safety is better guaranteed.
3. In a preferred scheme, the invention adopts a single-stage target detection algorithm, which skips the first-stage generation of candidate regions and directly produces class probabilities and position coordinates, yielding the final detection result in a single pass. This is faster than two-stage detection algorithms and better suited to real-time detection of illegal intruders.
4. In a preferred scheme, the invention adopts the DeepSORT (deep real-time multi-target tracking) method, which performs only one counting operation per target and thus avoids counting errors even under occlusion.
In addition to the objects, features, and advantages described above, the present invention has other objects, features, and advantages, which will be described in further detail below with reference to the accompanying drawings.
Brief Description of the Drawings
The accompanying drawings, which form a part of this application, are provided to aid understanding of the present invention; the exemplary embodiments of the invention and their descriptions serve to explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is a flowchart of the machine-vision-based method for monitoring personnel intrusion into dangerous areas of the present invention;
Fig. 2 is a diagram of the system hardware network architecture in a preferred embodiment of the present invention;
Fig. 3 is a hardware diagram of the Wi-Fi version of the monitoring data transmission in a preferred embodiment of the present invention;
Fig. 4 is a flowchart of IOU matching in a preferred embodiment of the present invention;
Fig. 5 is a flowchart of cascade matching in a preferred embodiment of the present invention;
Fig. 6 is a flowchart of data acquisition and processing by the server in a preferred embodiment of the present invention.
Detailed Description of the Embodiments
Embodiments of the present invention are described in detail below with reference to the accompanying drawings, but the invention may be implemented in many different ways as defined and covered by the claims.
Embodiment 1:
As shown in Fig. 1, the present invention discloses a machine-vision-based method for monitoring personnel intrusion into dangerous areas, comprising the following steps:
acquiring a monitoring image of a target construction area, inputting the monitoring image into a preset dangerous target detection model, and determining whether a dangerous target exists in the target construction area; if a dangerous target is determined to exist, obtaining the coordinates of the dangerous target and determining the coordinates of the dangerous area from them;
identifying personnel coordinates in the monitoring image, comparing the personnel coordinates with the coordinates of the dangerous area, and determining whether any person is inside the dangerous area; if a person is determined to be inside the dangerous area, sending an alarm signal to the user.
In addition, in this embodiment, the present invention also discloses a computer system, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the above method when executing the computer program.
The present invention automatically identifies dangerous areas and determines whether anyone is inside them, which reduces the labor cost of safety supervision and improves monitoring efficiency.
Embodiment 2:
Embodiment 2 is an extension of Embodiment 1 and differs from it in that the specific steps of the machine-vision-based method for monitoring personnel intrusion into dangerous areas are refined.
In this embodiment, as shown in Fig. 6, a computer-vision-based method for automatically generating monitoring of personnel intrusion into dangerous construction areas is disclosed, applied to the machine-vision-based monitoring system for personnel intrusion into dangerous areas shown in Figs. 2 and 3. The specific implementation steps are as follows:
Step 1: streaming the high-definition camera video of each key construction unit on the construction site to the server;
Step 2: using a target detection algorithm on the video stream obtained in Step 1 to identify hazard sources in the key construction units, and then generating the corresponding dangerous areas. The video only needs to be fed to the model; the model automatically identifies hazard sources and delimits the corresponding dangerous areas around them.
For the target detector to identify dangerous areas automatically, it must first be determined which areas count as dangerous. Common dangerous areas on construction sites include high-voltage electrical equipment, articles, or places; flammable and explosive articles, equipment, or places; and dangerous operation areas. High-voltage electrical equipment includes transformers and high-voltage cabinets; flammable and explosive areas include oil depots and other storage places for combustible objects; dangerous operation areas include foundation pits.
After the categories of dangerous area applicable to the site have been determined, an effective data set of these dangerous areas must be constructed so that the computer can recognize them automatically; the quality of this data set determines the accuracy with which the subsequent computer-vision target detection model generates dangerous areas.
To identify dangerous areas more reliably, specific warning signs are erected in the designated dangerous areas; in the images subsequently captured by the cameras, the presence of these warning signs helps the target detection model delimit dangerous areas more precisely and automatically.
After the data set has been collected, the model is built. Automatic identification of dangerous areas adopts the YOLOv4 model framework and comprises the following steps:
Step 1: acquiring each frame of image transmitted by the camera and resizing it to p*p, where p is an integer multiple of 32;
Step 2: dividing the image obtained in Step 1 into an s*s grid, assigning to each grid cell B bounding boxes to be predicted, and training the model with YOLOv4 to obtain, for each bounding box, its position, its class information c, and its confidence value;
wherein the position of a bounding box is recorded as (x, y, w, h), where x and y are the coordinates of the box center and w and h are the box width and height;
wherein the confidence is defined as:

C_i^j = Pr(object) × IOU_pred^truth

where C_i^j denotes the confidence of the j-th bounding box of the i-th grid cell, Pr(object) denotes the probability that the current box contains an object, and IOU_pred^truth denotes the IOU ratio between the real detection box and the predicted box; the IOU matching flow is shown in Fig. 4;
Each grid cell also predicts C conditional class probabilities Pr(Class_i | object).
The probability that a given class appears in a box, together with how well the prediction box fits the target, is expressed as:

Pr(Class_i | object) × Pr(object) × IOU_pred^truth = Pr(Class_i) × IOU_pred^truth

where Class_i denotes the i-th class;
Step 3: normalise the position coordinates (x, y, w, h) of the predicted boxes obtained in step 2 to obtain the normalised coordinates (X, Y, W, H).
Step 4: apply non-maximum suppression (NMS) to the bounding boxes whose confidence meets the threshold.
Step 5: after the above processing, the detection algorithm has identified the coordinates and class of each hazardous target; from each target position, a larger danger zone enclosing the target is generated and drawn on the image.
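The per-frame post-processing of steps 2 to 4 (confidence thresholding followed by non-maximum suppression) can be sketched as follows; the corner-format boxes (x1, y1, x2, y2), the greedy NMS variant and the threshold values are illustrative assumptions rather than the exact implementation of the invention:

```python
import numpy as np

def iou(a, b):
    """IOU of two boxes given in corner format (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def postprocess(boxes, scores, conf_thr=0.5, nms_thr=0.45):
    """Keep boxes whose confidence meets the threshold, then apply greedy NMS."""
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= conf_thr]
    keep = []
    while order:
        best = order.pop(0)          # highest-confidence remaining box
        keep.append(best)
        # suppress boxes overlapping the chosen one too strongly
        order = [i for i in order if iou(boxes[best], boxes[i]) < nms_thr]
    return keep
```

Each surviving index in `keep` corresponds to one detected hazardous target, around which the larger danger zone of step 5 is then drawn.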
Step 3: bind the danger-zone information identified in step 2 to the corresponding camera, and pass the position of each danger zone to the person detection model.
Step 4: the person detection model examines each received frame and determines whether a person appears in it; if so, it checks whether the person's position lies inside a danger zone.
To determine whether a person is inside a danger zone, the overlap of the two regions (person and zone) must be evaluated.
The overlap of the two regions is computed as:
Jarea = area(Rperson ∩ Rriskarea) / area(Rperson)
where Rperson is the region of the person detected in the image, Rriskarea is the automatically delineated danger-zone region in the image, and Jarea is the degree of overlap between the two.
A threshold function is applied to this overlap:
Farea = 1[Jarea ≥ t]
where Farea indicates whether the person is in the danger zone and t is the threshold on the degree of overlap between the danger zone and the person in the image.
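A minimal sketch of the overlap test and threshold function above, assuming axis-aligned boxes in corner format (x1, y1, x2, y2) and an illustrative threshold t:

```python
def region_overlap(person, risk_area):
    """Jarea: intersection of the person box with the danger-zone box,
    normalised by the person box area (both in corner format)."""
    x1, y1 = max(person[0], risk_area[0]), max(person[1], risk_area[1])
    x2, y2 = min(person[2], risk_area[2]), min(person[3], risk_area[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    person_area = (person[2] - person[0]) * (person[3] - person[1])
    return inter / (person_area + 1e-9)

def in_danger_zone(person, risk_area, t=0.5):
    """Farea = 1[Jarea >= t]: 1 if the person counts as inside the zone."""
    return int(region_overlap(person, risk_area) >= t)
```

With this normalisation, a person box lying entirely inside the danger zone yields Jarea close to 1 and triggers the alarm for any reasonable t.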
Step 5: when someone is inside a danger zone, the system automatically pushes the information to the site managers so that they can handle the incident in time; the intrusion is also recorded, including its time, its location and a short video clip of the person entering. In addition, a response mechanism is installed near the camera covering each danger zone: an intrusion triggers it and the person is promptly warned to leave.
Step 6: once intrusions into the key construction units have been identified by the preceding steps, the intruders must be tracked as multiple targets. Each target appearing in a video stream should raise an alarm only once; repeated alarms would disturb construction. To ensure that a target triggers the alarm only once per video stream, the invention uses the DeepSORT algorithm for multi-target tracking, assigning each target a unique identifier when it enters the recognition range of the video.
In the upper half of this multi-target tracking algorithm, the similarity matrix is computed from an appearance model (ReID) and a motion model (Mahalanobis distance), yielding a cost matrix; a gating matrix limits excessively large values in the cost matrix. The lower half performs the cascade-matching data association shown in Figure 5, which can recover occluded targets and reduces the number of ID switches for targets that are occluded and then reappear.
Specifically, the DeepSORT algorithm comprises the following steps:
S61: allocate the track index set T = {1, ..., N} and the detection index set D = {1, ..., M}, and initialise the maximum number of loop-detection frames Amax; here 1, ..., N index the features of the 1st to N-th persons in the previous monitoring frame, and 1, ..., M index the features of the 1st to M-th persons in the following frame.
S62: compute the cost matrix C = [ci,j] between the features of the i-th person in the previous frame and the j-th person in the following frame, where i = 1, ..., N and j = 1, ..., M.
S63: compute the cost matrix B = [bi,j] of squared Mahalanobis distances between the Kalman-predicted mean track position of the i-th person in the previous frame and the actual detection bounding box of the j-th person in the following frame.
S64: apply two threshold tests: entries of the cost matrix whose track-to-detection Mahalanobis distance exceeds its threshold are set to infinity, and entries whose cosine distance exceeds its threshold are set to a large value.
S65: match tracks to detections with the Hungarian algorithm and return the matching result.
Specifically, the cost used by the Hungarian algorithm weighs a motion matching degree against an appearance matching degree:
The motion matching degree d(1)(i,j) between the motion features of the i-th person in the previous frame and the j-th person in the following frame is computed as:
d(1)(i,j) = (dj − yi)^T Si^(−1) (dj − yi)
where the superscript (1) marks the first (motion) metric; the value expresses the motion matching degree between the j-th detection box and the i-th track; Si^(−1) is the inverse of the covariance matrix of the track's observation space at the current time, as predicted by the Kalman filter; dj is the bounding box of the j-th detection; and yi is the track's predicted bounding box at the current time.
The motion matching degree is fed into a preset threshold function to decide whether the motion features of the i-th person in the previous frame are successfully associated with those of the j-th person in the following frame.
The motion matching threshold function is:
b(1)i,j = 1[d(1)(i,j) ≤ t(1)]
where b(1)i,j determines the initial matching links and t(1) is the threshold set for the motion matching degree; d(1)(i,j) ≤ t(1) means that the motion features of the i-th person in the previous frame are successfully associated with those of the j-th person in the following frame.
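The motion metric and its gate can be sketched as follows; the 2-dimensional state and the default gating threshold are illustrative assumptions (DeepSORT itself gates at the 95% chi-square quantile for a 4-dimensional measurement space, 9.4877):

```python
import numpy as np

def motion_distance(y_i, S_i, d_j):
    """Squared Mahalanobis distance d(1)(i, j) between the Kalman-predicted
    track state y_i (innovation covariance S_i) and detection d_j."""
    diff = np.asarray(d_j, float) - np.asarray(y_i, float)
    return float(diff @ np.linalg.inv(S_i) @ diff)

def motion_gate(d1, t1=9.4877):
    """b(1) = 1[d(1) <= t(1)]: 1 if the pair is admissible by motion."""
    return int(d1 <= t1)
```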
The appearance matching degree d(2)(i,j) between the appearance features of the i-th person in the previous frame and the j-th person in the following frame is computed as:
d(2)(i,j) = min{ 1 − rj^T rk(i) | rk(i) ∈ Ri }
where rj is the surface-feature descriptor of the j-th detection, Ri stores the descriptors of the most recent Lk positions of the i-th track, and rk(i) is the k-th surface-feature descriptor of the i-th track; the expression is the minimum cosine distance between the i-th track and the j-th detection.
The appearance matching degree is fed into a preset threshold function to decide whether the appearance features of the i-th person in the previous frame are successfully associated with those of the j-th person in the following frame. The appearance matching threshold function is:
b(2)i,j = 1[d(2)(i,j) ≤ t(2)]
where t(2) is the threshold set for the appearance matching degree; d(2)(i,j) ≤ t(2) means that the appearance features of the i-th person in the previous frame are successfully associated with those of the j-th person in the following frame.
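A sketch of the appearance metric, assuming the stored descriptors are unit-normalised so that cosine distance reduces to one minus a dot product:

```python
import numpy as np

def appearance_distance(track_gallery, r_j):
    """d(2)(i, j): smallest cosine distance between detection descriptor r_j
    and the track's gallery of (unit-norm) appearance descriptors Ri."""
    r_j = np.asarray(r_j, float)
    return min(1.0 - float(np.dot(r, r_j)) for r in track_gallery)
```

Keeping a gallery of the most recent Lk descriptors per track is what lets a reappearing, previously occluded person be re-associated with the old track instead of being issued a new identifier (and a new alarm).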
When both the motion matching degree and the appearance matching degree between the i-th person in the previous frame and the j-th person in the following frame are successfully associated, the comprehensive matching degree ci,j is computed from them as follows:
ci,j = λ·d(1)(i,j) + (1 − λ)·d(2)(i,j)
where ci,j is the comprehensive matching degree between the i-th person in the previous frame and the j-th person in the following frame, λ is a preset hyperparameter chosen from practical experience, d(1)(i,j) is the motion matching degree, and d(2)(i,j) is the appearance matching degree.
From the motion and appearance threshold functions, the comprehensive threshold value bi,j between the i-th person in the previous frame and the j-th person in the following frame is computed; if this association succeeds, the i-th person in the previous frame is judged to match the j-th person in the following frame. The comprehensive threshold function is:
bi,j = b(1)i,j · b(2)i,j
A preliminary match is accepted only when bi,j equals 1.
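The combination of the two metrics and the two gates follows directly from the formulas above; a minimal sketch:

```python
def combined_cost(d1, d2, lam):
    """c_ij = lam * d(1) + (1 - lam) * d(2); lam is the preset
    hyperparameter weighing motion against appearance."""
    return lam * d1 + (1.0 - lam) * d2

def combined_gate(b1, b2):
    """b_ij = b(1) * b(2): a pair is admissible only if it passes
    both the motion gate and the appearance gate."""
    return b1 * b2
```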
S66: filter the matching results, discarding matched pairs with a large cosine distance.
S67: if the current loop-detection frame count exceeds the maximum Amax, output the preliminary matching result; otherwise return to S62.
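The track-to-detection assignment of S65 can be sketched as follows; brute-force enumeration stands in here for the Hungarian algorithm (in practice one would use an implementation such as scipy.optimize.linear_sum_assignment), so this is only practical for small cost matrices:

```python
from itertools import permutations

def match(cost):
    """Minimum-cost one-to-one assignment of tracks (rows) to detections
    (columns). Entries set to infinity by the gates make their assignment
    cost infinite, so gated pairs are never part of the chosen matching."""
    n, m = len(cost), len(cost[0])
    best, best_pairs = float("inf"), []
    for perm in permutations(range(m), min(n, m)):
        total = sum(cost[i][j] for i, j in enumerate(perm))
        if total < best:
            best, best_pairs = total, list(enumerate(perm))
    return best_pairs
```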
Step 7: at regular intervals the hazard sources must be re-identified and the danger zones re-planned; in other cases the method returns to step 4; otherwise the danger-zone intrusion detection task ends.
As shown in Figure 1, the method first obtains the surveillance video streams of the key construction units and transmits them to a server. Because the construction-site environment is complex, containing construction materials, pedestrians, vehicles, construction tools and other objects, and because construction-safety problems demand real-time handling, the server uses single-shot object detection to detect pedestrians in the video and give them a special label. So that each target carries a unique identifier in the video stream and does not trigger the alarm repeatedly, the identified targets are then tracked. The technique finally judges whether a worker has entered the danger zone and, if so, performs the corresponding handling operations.
In summary, the invention provides a computer-vision-based method and system for automatically generating construction danger zones and monitoring personnel intrusion. It covers safe-construction monitoring of the site: computer vision delineates the danger zones of the construction site, and a person detection algorithm triggers the corresponding measures whenever someone enters such a zone. In this scheme, computer vision is combined with site safety to form a feedback system: intrusion behaviour identified by the server is used to manage and control the key construction units of the whole site, letting construction personnel work with greater confidence, guaranteeing the construction safety of the whole project, and improving operational efficiency.
The above are only preferred embodiments of the invention and are not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall fall within its scope of protection.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210658001.XA | 2022-06-10 | 2022-06-10 | Method and system for intrusion detection of personnel in dangerous areas based on machine vision |
| Publication Number | Publication Date |
|---|---|
| CN114973140A (en) | 2022-08-30 |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111144232A (en)* | 2019-12-09 | 2020-05-12 | 国网智能科技股份有限公司 | Transformer substation electronic fence monitoring method based on intelligent video monitoring, storage medium and equipment |
| CN111860282A (en)* | 2020-07-15 | 2020-10-30 | 中国电子科技集团公司第三十八研究所 | Method and system for passenger flow statistics of subway section and pedestrian retrograde detection |
| CN113393679A (en)* | 2021-06-10 | 2021-09-14 | 中南大学 | Regional traffic guidance method and system based on traffic intersection traffic flow identification and statistics |
| CN113569801A (en)* | 2021-08-11 | 2021-10-29 | 广东电网有限责任公司 | Distribution construction site live equipment and live area identification method and device thereof |
| CN114022812A (en)* | 2021-11-01 | 2022-02-08 | 大连理工大学 | A Multi-target Tracking Method for DeepSort Water Surface Floating Objects Based on Lightweight SSD |
| KR102369229B1 (en)* | 2021-09-16 | 2022-03-03 | 주식회사 시티랩스 | Risk prediction system and risk prediction method based on a rail robot specialized in an underground tunnel |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114782675B (en)* | 2022-03-31 | 2022-11-25 | 江苏预立新能源科技有限公司 | Dynamic item pricing method and system in safety technical service field |
| CN114782675A (en)* | 2022-03-31 | 2022-07-22 | 江苏预立新能源科技有限公司 | Dynamic item pricing method and system in safety technical service field |
| CN115190277A (en)* | 2022-09-08 | 2022-10-14 | 中达安股份有限公司 | Safety monitoring method, device and equipment for construction area and storage medium |
| CN116206255B (en)* | 2023-01-06 | 2024-02-20 | 广州纬纶信息科技有限公司 | Dangerous area personnel monitoring method and device based on machine vision |
| CN116206255A (en)* | 2023-01-06 | 2023-06-02 | 广州纬纶信息科技有限公司 | Dangerous area personnel monitoring method and device based on machine vision |
| CN116311361A (en)* | 2023-03-02 | 2023-06-23 | 北京化工大学 | Dangerous source indoor staff positioning method based on pixel-level labeling |
| CN116311361B (en)* | 2023-03-02 | 2023-09-15 | 北京化工大学 | Dangerous source indoor staff positioning method based on pixel-level labeling |
| CN116563776A (en)* | 2023-03-08 | 2023-08-08 | 国网宁夏电力有限公司信息通信公司 | Artificial intelligence-based method, system, medium and equipment for warning violations |
| CN116597472A (en)* | 2023-05-05 | 2023-08-15 | 普曼(杭州)工业科技有限公司 | A safety protection method for welding production line based on vision AI |
| CN116977920B (en)* | 2023-06-28 | 2024-04-12 | 三峡科技有限责任公司 | Critical protection method for multi-zone type multi-reasoning early warning mechanism |
| CN116977920A (en)* | 2023-06-28 | 2023-10-31 | 三峡科技有限责任公司 | Critical protection method for multi-zone type multi-reasoning early warning mechanism |
| CN117037269A (en)* | 2023-08-01 | 2023-11-10 | 中南大学 | Method and system for monitoring abnormal behaviors of personnel in lightweight government affair hall |
| CN116797031A (en)* | 2023-08-25 | 2023-09-22 | 深圳市易图资讯股份有限公司 | A safety production management method and system based on data collection |
| CN116797031B (en)* | 2023-08-25 | 2023-10-31 | 深圳市易图资讯股份有限公司 | A safety production management method and system based on data collection |
| CN117549330A (en)* | 2024-01-11 | 2024-02-13 | 四川省铁路建设有限公司 | Construction safety monitoring robot system and control method |
| CN117549330B (en)* | 2024-01-11 | 2024-03-22 | 四川省铁路建设有限公司 | Construction safety monitoring robot system and control method |
| CN117557201A (en)* | 2024-01-12 | 2024-02-13 | 国网山东省电力公司菏泽供电公司 | Intelligent warehousing safety management system and method based on artificial intelligence |
| CN117557201B (en)* | 2024-01-12 | 2024-04-12 | 国网山东省电力公司菏泽供电公司 | Intelligent warehouse safety management system and method based on artificial intelligence |
| CN118221005A (en)* | 2024-02-04 | 2024-06-21 | 青岛城市轨道交通科技有限公司 | A monitoring method for dangerous intrusion of lifting equipment based on cloud-edge collaboration |
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||