Technical Field
The present invention relates to the technical field of safety protection, and in particular to a critical protection method with a multi-region-type, multi-inference early warning mechanism.
Background Art
Drilling-rig practitioners pay increasing attention to personal safety, especially in the highest-risk area of drilling operations: the wellhead area on the drill floor. This area involves high-speed raising and lowering of the traveling block, high-speed rotation of the drill string, and the passage of high-pressure mud, and is extremely dangerous. During normal operations, floorhands intermittently enter this area for necessary tasks such as pulling and setting slips and latching the elevator. If the driller and the floorhands coordinate poorly, or a floorhand strays into the dangerous area (for example, walking into the wellhead zone while the traveling block is still being lowered at full speed, risking injury from falling objects), personal injury or death can easily result. At present, the usual on-site practices for protecting personnel around the wellhead are: first, marking a circle of red paint on the ground around the wellhead to indicate the dangerous area; second, relying on personal experience, that is, personnel enter only after the equipment has stopped and the floorhand has confirmed safety; third, relying on the driller's reminders, that is, once the equipment has stopped, the driller calls the floorhands in and shouts a timely warning if a safety hazard is spotted. However, when reminders are inadequate or personal safety awareness is insufficient, personal safety accidents occur easily.
In the prior art, dangerous-area early warning methods applied to oil drilling rigs issue warnings by judging the safety status of the wellhead area: when the operating condition is normal, real-time images are read and analyzed, and the identity of an intruder is confirmed through face recognition before a warning is issued; when the operating condition is abnormal, a warning is issued directly without reading real-time images. The overall process is efficient and fast, streamlining the warning workflow while maintaining warning accuracy.
In addition, based on video images captured by a camera, combined with a deep-learning object detection algorithm and dangerous-area judgment, a crane can be stopped before a safety accident occurs, avoiding accidents to the greatest extent possible. Compared with traditional methods, besides being unattended, this approach has the advantages of visualization and joint judgment with the crane.
However, the prior art relies on a single inference mechanism that detects personnel only through object-detection machine vision; when the number of people increases and the scene becomes complex, detection accuracy degrades.
Summary of the Invention
The present invention provides a critical protection method with a multi-region-type, multi-inference early warning mechanism to solve the above problems in the prior art.
The present invention provides a critical protection method with a multi-region-type, multi-inference early warning mechanism. The method includes:
S100: obtaining real-time surveillance video stream data;
S200: constructing a target detection model and a target tracking model;
S300: based on the target detection model and the target tracking model, performing target detection and target tracking on the video stream data to obtain the position information of a tracked target;
S400: judging whether the position information is within a preset critical protection area, and taking different early warning measures according to the judgment result.
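The four steps S100–S400 can be sketched as one processing loop. This is a minimal illustrative sketch, not the patented implementation; `detect`, `track`, `in_area`, and `warn` are hypothetical callables standing in for the models and predicates described below:

```python
def critical_protection_pipeline(frames, detect, track, in_area, warn):
    """Run the S100-S400 loop over a frame source.

    frames  -- iterable of video frames (S100)
    detect  -- detection model: frame -> detections (S200/S300)
    track   -- tracking model: (frame, detections) -> [(track_id, position)] (S300)
    in_area -- predicate: position -> True if inside the critical area (S400)
    warn    -- callback invoked for each target inside the area (S400)
    """
    alerts = []
    for frame in frames:
        detections = detect(frame)                     # target detection
        for track_id, position in track(frame, detections):  # target tracking
            if in_area(position):                      # critical-area judgment
                alerts.append(track_id)
                warn(track_id, position)
    return alerts
```

A deployment would plug the YOLOv8-F detector, the StrongSORT tracker, and the PNPoly area test into these slots.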
Preferably, in S200, constructing the target detection model includes:
S201: constructing the YOLOv8-F network structure by replacing the YOLOv8 backbone network with a FasterNet backbone;
S202: FasterNet uses improved PConv (partial convolution) and PWConv (pointwise convolution) as its core operators;
S203: based on the constructed YOLOv8-F network structure, training the YOLOv8-F construction-site pedestrian detection model to obtain the target detection model.
Preferably, S203 includes:
S2031: collecting video stream data from monitoring devices of different region types on a construction site;
S2032: extracting one frame out of every 60 frames from the video stream data to construct a construction-site pedestrian dataset;
S2033: dividing the construction-site pedestrian dataset into a training set, a validation set, and a test set, and training the YOLOv8-F construction-site pedestrian detection model to obtain the target detection model.
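The sampling and split rules of S2032–S2033 can be sketched as below. The 8:1:1 ratio comes from the detailed embodiment later in this document; decoding frames with a library such as OpenCV is omitted, and plain frame indices stand in for decoded images:

```python
def sample_frames(frame_indices, step=60):
    # S2032: keep one frame out of every `step` frames
    return [i for i in frame_indices if i % step == 0]

def split_dataset(items, ratios=(8, 1, 1)):
    # S2033: split into training / validation / test sets, 8:1:1 by default
    total = sum(ratios)
    n = len(items)
    n_train = n * ratios[0] // total
    n_val = n * ratios[1] // total
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]
```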
Preferably, in S200, constructing the target tracking model includes:
S204: obtaining the position information and confidence of targets to be monitored according to the target detection model;
S205: predicting the targets to be monitored to obtain predicted targets, and matching the targets to be monitored with the predicted targets to obtain matching results;
S206: setting a confidence threshold, and filtering out tracked targets based on the confidence threshold and the matching results to form the target tracking model.
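A minimal sketch of the confidence filtering in S206; the 0.8 default mirrors the YOLOv8-F detection threshold given in the embodiments below, and the dictionary layout of a detection is an assumption for illustration:

```python
def filter_by_confidence(detections, threshold=0.8):
    # S206: keep only detections whose confidence exceeds the threshold;
    # each detection is assumed to carry a "conf" score alongside its box
    return [d for d in detections if d["conf"] > threshold]
```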
Preferably, S300 includes:
S301: combining the detection and recognition results of the target detection model to build a StrongSORT target tracking model for real-time tracking of construction-site pedestrians;
S302: the StrongSORT target tracking model uses the YOLOv8-F target detection model to detect construction-site pedestrians, and uses the ECC (Enhanced Correlation Coefficient) algorithm for camera motion compensation;
S303: using the NSA Kalman algorithm to predict the next-frame position of each construction-site pedestrian trajectory, applying the EMA (exponential moving average) feature update strategy, and using the appearance feature extractor BoT to match detected targets with predicted targets;
S304: fusing the matching results with the targets filtered by the detection threshold, and outputting the tracked targets of the current video frame.
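The EMA feature update in S303 keeps one smoothed appearance embedding per track rather than a feature bank. A minimal sketch; the smoothing factor alpha = 0.9 is the value commonly used with StrongSORT and is an assumption here, not stated in this document:

```python
def ema_update(track_feature, new_feature, alpha=0.9):
    # e_t = alpha * e_{t-1} + (1 - alpha) * f_t, applied per embedding dimension;
    # the track keeps the smoothed vector and matches BoT features against it
    return [alpha * e + (1.0 - alpha) * f for e, f in zip(track_feature, new_feature)]
```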
Preferably, in S400, judging whether the position information is within the preset critical protection area includes:
S401: manually delimiting the detection region, and obtaining the area range requiring critical protection with the aid of an image and video processing library;
S402: obtaining the midpoint coordinates of the position information of a tracked target appearing in the specific region of the surveillance video;
S403: comparing the relationship between the midpoint coordinates and the critical protection area, and using the PNPoly algorithm to determine whether the midpoint coordinates fall within the critical protection area.
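The PNPoly check in S403 is the classic ray-casting point-in-polygon test; a direct sketch, with the polygon given as a list of vertex coordinates:

```python
def pnpoly(point, polygon):
    # Cast a horizontal ray from the point; an odd number of
    # polygon-edge crossings means the point lies inside.
    x, y = point
    inside = False
    j = len(polygon) - 1
    for i, (xi, yi) in enumerate(polygon):
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside
```

The tracked target's box midpoint from S402 would be passed as `point`, and the delimited danger-zone polygon from S401 as `polygon`.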
Preferably, in S400, taking different early warning measures according to the judgment result includes:
S404: combining customized inference for the specific scenario on the application side, judging whether a real-time warning needs to be issued to personnel stepping into the relevant region;
S405: issuing a critical protection warning, in which the dangerous area and the personnel within it are outlined with solid red lines, and each person's confidence and unique identification ID are displayed above and below the person, respectively.
Preferably, S203 further includes:
S2034: training the YOLOv8-F target detection model, with the model depth (depth_multiple) set to 1 and the width (width_multiple) set to 1.25;
S2035: the image size is 512x512, the batch size is 16, and training runs for 300 epochs;
S2036: optimizing the fitness function so that P, R, mAP@0.5, and mAP@0.5:0.95 contribute to the model score in a 5:1:2:2 ratio, yielding the best-performing YOLOv8-F target detection model for subsequent target tracking tasks.
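The 5:1:2:2 weighting in S2036 can be read as a weighted model score over the four metrics. A minimal sketch; normalizing by the weight sum is an assumption, since the document gives only the ratio:

```python
def fitness(p, r, map50, map50_95, weights=(5, 1, 2, 2)):
    # S2036: combine P, R, mAP@0.5, mAP@0.5:0.95 in a 5:1:2:2 ratio
    metrics = (p, r, map50, map50_95)
    return sum(w * m for w, m in zip(weights, metrics)) / sum(weights)
```

The checkpoint with the highest score over the validation set would be kept for the tracking stage.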
Preferably, after step S403, the method further includes:
S406: within a set time, obtaining the number of position transitions of the tracked target;
S407: judging whether the number of position transitions is greater than or equal to a set count threshold; if so, determining the motion to be valid movement;
S408: determining the movement path of the valid movement, randomly sampling several points from the movement path, and determining several sets of position coordinates;
S409: judging whether each set of position coordinates falls within the critical protection area; if the number of sets falling within the critical protection area exceeds a set value, determining that the tracked target has fallen within the critical protection area.
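Steps S406–S409 can be sketched as follows. The point-in-polygon helper mirrors the PNPoly test of S403, and the concrete thresholds (3 transitions, 5 samples, 3 required inside) are illustrative defaults, not values from the document:

```python
import random

def _in_polygon(point, polygon):
    # same ray-casting (PNPoly) test as in S403
    x, y = point
    inside = False
    j = len(polygon) - 1
    for i, (xi, yi) in enumerate(polygon):
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def target_entered_area(path, polygon, min_transitions=3, n_samples=5, min_inside=3, seed=0):
    # S406-S407: count position transitions within the time window;
    # too few transitions means the movement is not "valid"
    transitions = sum(1 for a, b in zip(path, path[1:]) if a != b)
    if transitions < min_transitions:
        return False
    # S408: randomly sample several points from the movement path
    rng = random.Random(seed)
    samples = rng.sample(path, min(n_samples, len(path)))
    # S409: enough sampled points inside means the target entered the area
    inside = sum(1 for p in samples if _in_polygon(p, polygon))
    return inside >= min_inside
```

The fixed seed keeps the illustration deterministic; a deployment would sample fresh points per decision window.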
Preferably, S401 includes:
S4011: obtaining an input image with the aid of the image and video processing library, and extracting a first component from the input image, the first component including a photometric distortion value and a geometric distortion value;
S4012: calculating a characteristic value for each pixel in the image according to the first component, the characteristic value being the probability of distortion determined by the photometric distortion value and the geometric distortion value;
S4013: calculating a distortion adjustment parameter for each pixel according to the characteristic value;
S4014: performing noise detection on each pixel of the input image to judge whether each pixel in the image is a noise point;
S4015: when a pixel is not a noise point, performing pixel correction on the pixel using the distortion adjustment parameter;
S4016: obtaining the area range requiring critical protection based on the pixel-corrected image.
Compared with the prior art, the present invention has the following advantages:
The present invention provides a critical protection method with a multi-region-type, multi-inference early warning mechanism, including: obtaining real-time surveillance video stream data; constructing a target detection model and a target tracking model; performing target detection and target tracking on the video stream data based on the target detection model and the target tracking model to obtain the position information of a tracked target; and judging whether the position information is within a preset critical protection area and taking different early warning measures according to the judgment result. Tailored to the characteristics of different types of construction sites and combining target detection with target tracking technology, the method realizes critical protection under a multi-region-type, multi-inference early warning mechanism, thereby effectively warning personnel who step into the relevant regions, greatly reducing manpower investment, and improving the reliability and safety of supervision.
Additional features and advantages of the invention will be set forth in the description that follows and will in part be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description, the claims, and the appended drawings.
The technical solution of the present invention is described in further detail below through the accompanying drawings and embodiments.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification; together with the embodiments of the present invention, they serve to explain the present invention and do not limit it. In the drawings:
Figure 1 is a flowchart of a critical protection method with a multi-region-type, multi-inference early warning mechanism in an embodiment of the present invention;
Figure 2 is a schematic diagram of the principle of the critical protection method with a multi-region-type, multi-inference early warning mechanism in an embodiment of the present invention;
Figure 3 is a flowchart of a method for obtaining the position information of a tracked target in an embodiment of the present invention.
Detailed Description of the Embodiments
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described here are intended only to illustrate and explain the present invention and are not intended to limit it.
An embodiment of the present invention provides a critical protection method with a multi-region-type, multi-inference early warning mechanism. Referring to Figures 1 and 2, the method includes:
S100: obtaining real-time surveillance video stream data;
S200: constructing a target detection model and a target tracking model;
S300: based on the target detection model and the target tracking model, performing target detection and target tracking on the video stream data to obtain the position information of a tracked target;
S400: judging whether the position information is within a preset critical protection area, and taking different early warning measures according to the judgment result.
The working principle of the above technical solution is as follows. The solution adopted in this embodiment is: S100, obtaining real-time surveillance video stream data; S200, constructing a target detection model and a target tracking model; S300, based on the two models, performing target detection and target tracking on the video stream data to obtain the position information of a tracked target; and S400, judging whether the position information is within the preset critical protection area and taking different early warning measures according to the judgment result.
The specific concept of this embodiment is as follows; please refer to Figure 2. First, video stream data is collected and subjected to frame extraction, and a construction-site pedestrian dataset is constructed. The YOLOv8-F target detection model is trained on this dataset: the detection algorithm is obtained by improving the YOLOv8 backbone network to yield the YOLOv8-F network structure, and training on the construction-site pedestrian dataset based on this structure produces the target detection model. Target detection is then performed on the input real-time surveillance video stream to obtain the coordinates and confidences of all construction-site pedestrians; according to the set confidence threshold, detections with confidence above 0.8 are retained, giving the filtered pedestrian coordinates. Based on the output of the StrongSORT target tracking algorithm, the PNPoly algorithm determines whether a tracked target falls within the set area. The set area is constructed by collecting images of the detection region and delimiting and dividing them by region type to obtain the polygon coordinates of the dangerous area, which constitute the set area. Finally, based on the customized inference method on the application side, it is determined whether alarm processing is required.
The beneficial effects of the above technical solution are as follows. The solution provided by this embodiment obtains real-time surveillance video stream data; constructs a target detection model and a target tracking model; performs target detection and target tracking on the video stream data based on the two models to obtain the position information of a tracked target; and judges whether the position information is within the preset critical protection area, taking different early warning measures according to the judgment result. The multi-inference mechanism better exploits temporal features across preceding and following video frames and achieves better detection results. The solution supports multi-source inputs such as images and video streams. First, the detection region is manually delimited; the improved deep-learning YOLOv8 algorithm (YOLOv8-F) then detects the coordinates of construction-site pedestrians; the YOLOv8-F detection results are used as input to the StrongSORT target tracking algorithm, which tracks workers in real time and assigns each a unique identification ID, achieving end-to-end detection and tracking within the specified range; finally, personnel located in specific regions are warned. The invention is applicable to the complex on-site conditions of various types of construction sites, realizes critical protection warnings and warning-information push, and ensures a safe construction environment.
In another embodiment, in S200, constructing the target detection model includes:
S201: constructing the YOLOv8-F network structure by replacing the YOLOv8 backbone network with a FasterNet backbone;
S202: FasterNet uses improved PConv and PWConv as its core operators;
S203: based on the constructed YOLOv8-F network structure, training the YOLOv8-F construction-site pedestrian detection model to obtain the target detection model.
The working principle of the above technical solution is as follows. The solution adopted in this embodiment is that, in S200, constructing the target detection model includes: S201, constructing the YOLOv8-F network structure by replacing the YOLOv8 backbone network with a FasterNet backbone; S202, FasterNet using improved PConv and PWConv as its core operators; and S203, based on the constructed YOLOv8-F network structure, training the YOLOv8-F construction-site pedestrian detection model to obtain the target detection model.
The beneficial effects of the above technical solution are as follows. The solution provided by this embodiment adds the FasterNet structure to the YOLOv8 network structure, and builds and trains the YOLOv8-F target detection model. First, the YOLOv8-F network structure is constructed by replacing the YOLOv8 backbone network with a FasterNet backbone; FasterNet uses the novel PConv and off-the-shelf PWConv as its main operators, runs very fast, is effective for many vision tasks, and achieves an effective increase in mAP while reducing the number of parameters. Second, the YOLOv8-F construction-site pedestrian detection model is trained: the network depth (depth_multiple) is 1, the width (width_multiple) is 1.25, the image size is 512x512, the batch size is 16, and training runs for 300 epochs; the fitness function is optimized so that P, R, mAP@0.5, and mAP@0.5:0.95 contribute to the model score in a 5:1:2:2 ratio, yielding the best-performing YOLOv8-F construction-site pedestrian detection model for subsequent target tracking tasks.
In another embodiment, S203 includes:
S2031: collecting video stream data from monitoring devices of different region types on a construction site;
S2032: extracting one frame out of every 60 frames from the video stream data to construct a construction-site pedestrian dataset;
S2033: dividing the construction-site pedestrian dataset into a training set, a validation set, and a test set, and training the YOLOv8-F construction-site pedestrian detection model to obtain the target detection model.
The working principle of the above technical solution is as follows. The solution adopted in this embodiment is that S203 includes: S2031, collecting video stream data from monitoring devices of different region types on a construction site; S2032, extracting one frame out of every 60 frames from the video stream data to construct a construction-site pedestrian dataset; and S2033, dividing the dataset into a training set, a validation set, and a test set and training the YOLOv8-F construction-site pedestrian detection model to obtain the target detection model.
The beneficial effects of the above technical solution are as follows. The solution provided by this embodiment collects video stream data from monitoring devices of different region types on a construction site and extracts one frame out of every 60 frames to construct a construction-site pedestrian dataset. The construction-site scenes include, but are not limited to, complex environments such as daytime, nighttime, sunny, cloudy, rainy, foggy, and hazy conditions, and worker behavior types include, but are not limited to, states such as standing, sitting, communicating, and performing construction work. To better detect the coordinates of construction-site pedestrians, the images are cleaned to filter out overly similar scenes, and targets that are blurred or whose bodies are more than 80% occluded are not annotated. Construction-site pedestrian labels are created, the annotated dataset is saved in YOLO TXT format, and the dataset is divided into a training set, a validation set, and a test set in an 8:1:1 ratio.
In another embodiment, in S200, constructing the target tracking model includes:
S204: obtaining the position information and confidence of targets to be monitored according to the target detection model;
S205: predicting the targets to be monitored to obtain predicted targets, and matching the targets to be monitored with the predicted targets to obtain matching results;
S206: setting a confidence threshold, and filtering out tracked targets based on the confidence threshold and the matching results to form the target tracking model.
The working principle of the above technical solution is as follows. The solution adopted in this embodiment constructs the target tracking model by: obtaining the position information and confidence of targets to be monitored according to the target detection model; predicting the targets to be monitored to obtain predicted targets and matching the two to obtain matching results; and setting a confidence threshold and filtering out tracked targets based on the confidence threshold and the matching results to form the target tracking model. The confidence in this embodiment refers to the credibility of the obtained position information, that is, the probability that the obtained position information matches the true value; a higher confidence indicates that the obtained position information is closer to the true value.
In another embodiment, referring to Figure 3, S300 includes:
S301: combining the detection and recognition results of the target detection model to build a StrongSORT target tracking model for real-time tracking of construction-site pedestrians;
S302: the StrongSORT target tracking model uses the YOLOv8-F target detection model to detect construction-site pedestrians and uses the ECC (Enhanced Correlation Coefficient) algorithm for camera motion compensation;
S303: using the NSA Kalman algorithm to predict the next-frame position of each construction-site pedestrian trajectory, applying the EMA (exponential moving average) feature update strategy, and using the appearance feature extractor BoT to match detected targets with predicted targets;
S304: fusing the matching results with the targets filtered by the detection threshold, and outputting the tracked targets of the current video frame.
The working principle of the above technical solution is as follows. The solution adopted in this embodiment combines the detection and recognition results of the target detection model to build a StrongSORT target tracking model for real-time tracking of construction-site pedestrians; the StrongSORT model uses the YOLOv8-F target detection model to detect construction-site pedestrians and the ECC (Enhanced Correlation Coefficient) algorithm for camera motion compensation; the NSA Kalman algorithm predicts the next-frame position of each pedestrian trajectory, the EMA (exponential moving average) feature update strategy is applied, and the appearance feature extractor BoT matches detected targets with predicted targets; the matching results are fused with the targets filtered by the detection threshold, and the tracked targets of the current video frame are output.
The beneficial effects of the above technical solution are as follows. The solution provided by this embodiment builds a StrongSORT target tracker, combined with the target detection and recognition results, for real-time tracking of construction-site pedestrians. StrongSORT uses the YOLOv8-F target detector to detect construction-site pedestrians, the ECC (Enhanced Correlation Coefficient) algorithm for camera motion compensation, the NSA Kalman algorithm to predict the next-frame position of each pedestrian trajectory, the EMA (exponential moving average) feature update strategy, and the appearance feature extractor BoT to match detected targets with predicted targets. The YOLOv8-F detection confidence threshold is set to 0.8; the matching results are fused with the targets that pass the detection threshold, and the tracked targets of the current video frame are output.
In another embodiment, in S400, determining whether the position information is within the preset critical protection area includes:
S401, manually delimiting the detection area and obtaining, with the aid of an image and video processing library, the range of the area that requires critical protection;
S402, obtaining the midpoint coordinates of the position information of a tracking target that appears in the specified area of the surveillance video;
S403, comparing the relationship between the midpoint coordinates and the critical protection area, and using the PNPoly algorithm to determine whether the midpoint coordinates fall within the critical protection area.
The working principle of the above technical solution is as follows: the solution adopted in this embodiment determines whether the position information is within the preset critical protection area by manually delimiting the detection area and obtaining, with the aid of an image and video processing library, the range of the area that requires critical protection; obtaining the midpoint coordinates of the position information of a tracking target that appears in the specified area of the surveillance video; and comparing the relationship between the midpoint coordinates and the critical protection area, using the PNPoly algorithm to determine whether the midpoint coordinates fall within the critical protection area.
The beneficial effects of the above technical solution are: the solution provided in this embodiment performs critical protection inference and early warning based on the results output by the YOLOv8-F target detector and the StrongSORT target tracker. First, the detection area is manually delimited, and an image and video processing library (OpenCV) is used to obtain the range of the area that requires critical protection. Second, the midpoint coordinates of a target appearing in the specified area of the surveillance video are obtained and compared against the closed polygon coordinates of that area; the PNPoly algorithm determines whether the coordinates lie within the area, and custom inference for the specific application-side scenario then decides whether a real-time warning must be issued for a person who has stepped into the area.
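The midpoint extraction of S402 and the PNPoly test of S403 can be sketched as follows. PNPoly is W. Randolph Franklin's ray-casting point-in-polygon test; the polygon vertices and bounding-box values below are illustrative, not coordinates from this document.

```python
def pnpoly(x, y, polygon):
    """PNPoly ray-casting test: is (x, y) inside the closed polygon?
    polygon: list of (x, y) vertices."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Count crossings of a horizontal ray from (x, y) toward +infinity.
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def box_midpoint(x1, y1, x2, y2):
    """Midpoint of a tracked person's bounding box (x1, y1, x2, y2)."""
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

# Illustrative danger zone and detection box.
danger_zone = [(0, 0), (10, 0), (10, 10), (0, 10)]
mx, my = box_midpoint(4, 4, 6, 8)
inside = pnpoly(mx, my, danger_zone)
```

Each tracked detection reduces to a single representative point, so the per-frame region check costs O(number of polygon vertices) per person.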
In another embodiment, in S400, taking different early warning measures according to the judgment result includes:
S404, combining custom inference for the specific application-side scenario to judge whether a real-time warning must be issued for a person who has stepped into the relevant area;
S405, issuing a critical protection warning, in which the dangerous area and the persons inside it are marked with solid red lines, and the confidence value and the unique identification ID are displayed above and below each person, respectively.
The working principle of the above technical solution is as follows: the solution adopted in this embodiment takes different early warning measures according to the judgment result, including combining custom inference for the specific application-side scenario to judge whether a real-time warning must be issued for a person who has stepped into the relevant area, and issuing a critical protection warning in which the dangerous area and the persons inside it are marked with solid red lines, with the confidence value and the unique identification ID displayed above and below each person, respectively.
The beneficial effects of the above technical solution are: the solution provided in this embodiment takes different early warning measures according to the judgment result, including combining custom inference for the specific application-side scenario to judge whether a real-time warning must be issued for a person who has stepped into the relevant area, and issuing a critical protection warning in which the dangerous area and the persons inside it are marked with solid red lines, with the confidence value and the unique identification ID displayed above and below each person, respectively. A construction site contains different region types: for example, protective equipment must be worn correctly when entering a production area, whereas entry into a dangerous area is forbidden altogether. The traditional approach of relying on personnel for supervision or persuasion requires substantial resources and is not fully reliable. By combining target detection and target tracking technology with the characteristics of different site types, critical protection with a multi-region-type, multi-inference early warning mechanism is achieved, effectively warning persons who step into the relevant areas, greatly reducing the required manpower, and improving the reliability and safety of supervision.
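The multi-region-type inference described above (production area: PPE required; dangerous area: no entry) can be sketched as a small rule table. The region names, person attributes, and action labels here are illustrative assumptions, not definitions from this document.

```python
# Hypothetical sketch of multi-region, multi-rule early-warning inference.
# Region types, attributes and action names are illustrative assumptions.

RULES = {
    # Dangerous area: any presence triggers an alarm.
    "danger": lambda person: "alarm",
    # Production area: entry allowed only with protective equipment.
    "production": lambda person: "ok" if person.get("helmet") else "ppe_warning",
    # Ordinary area: no restriction.
    "normal": lambda person: "ok",
}

def infer_warning(region_type, person):
    """Return the warning action for one tracked person in one region."""
    rule = RULES.get(region_type, RULES["normal"])
    return rule(person)

a = infer_warning("danger", {"id": 1, "helmet": True})
b = infer_warning("production", {"id": 2, "helmet": False})
```

Keeping the per-region rules in a table like this is one way the same point-in-polygon result can drive different warnings for different region types.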
In another embodiment, S203 further includes:
S2034, training the YOLOv8-F target detection model, with the model depth depth_multiple set to 1 and the width width_multiple set to 1.25;
S2035, using an image size of 512x512 and a batch-size of 16, and training for 300 epochs;
S2036, optimizing the fitness function so that P, R, mAP@0.5 and mAP@0.5:0.95 contribute to the model score in a ratio of 5:1:2:2, to obtain the best-performing YOLOv8-F target detection model for the subsequent target tracking task.
The working principle of the above technical solution is as follows: in the solution adopted in this embodiment, S203 further includes: S2034, training the YOLOv8-F target detection model with the model depth depth_multiple set to 1 and the width width_multiple set to 1.25; S2035, using an image size of 512x512 and a batch-size of 16, and training for 300 epochs; and S2036, optimizing the fitness function so that P, R, mAP@0.5 and mAP@0.5:0.95 contribute to the model score in a ratio of 5:1:2:2, to obtain the best-performing YOLOv8-F target detection model for the subsequent target tracking task.
The beneficial effects of the above technical solution are: the solution provided in this embodiment builds a StrongSORT target tracker on top of the target detection and recognition results, for real-time tracking of pedestrians on the construction site. StrongSORT uses the YOLOv8-F target detector to detect pedestrians, applies the ECC (Enhanced Correlation Coefficient) algorithm for camera motion compensation, uses the NSA Kalman algorithm to predict the next-frame position of each pedestrian trajectory, applies the EMA (Exponential Moving Average) feature update strategy, and uses the appearance feature extractor BoT to match detected targets with predicted targets. The YOLOv8-F detection confidence threshold is set to 0.8; the matching results are fused with the targets that pass this threshold and output as the tracking targets of the current video frame.
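The 5:1:2:2 weighting of P, R, mAP@0.5 and mAP@0.5:0.95 in S2036 amounts to a weighted average used as the model selection score. The metric values in the example call are illustrative, not results reported in this document.

```python
def fitness(p, r, map50, map50_95):
    """Weighted model score with P : R : mAP@0.5 : mAP@0.5:0.95 = 5:1:2:2,
    normalized so the score stays in [0, 1] when all metrics are in [0, 1]."""
    weights = (5, 1, 2, 2)
    total = sum(weights)
    return (weights[0] * p + weights[1] * r
            + weights[2] * map50 + weights[3] * map50_95) / total

# Illustrative metric values for one training checkpoint.
score = fitness(0.9, 0.8, 0.7, 0.5)  # (4.5 + 0.8 + 1.4 + 1.0) / 10
```

Weighting precision most heavily biases checkpoint selection toward models with few false alarms, which suits a warning system where spurious alerts erode operator trust.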
In another embodiment, after step S403, the method further includes:
S406, obtaining the number of position changes of the tracking target within a set time;
S407, judging whether the number of position changes is greater than or equal to a set count threshold; if so, the movement is judged to be a valid movement;
S408, determining the movement path of the valid movement, randomly sampling several points from the movement path, and determining several sets of position coordinates;
S409, judging whether each set of position coordinates falls within the critical protection area; if the number of sets falling within the critical protection area exceeds a set value, it is determined that the tracking target has entered the critical protection area.
The working principle of the above technical solution is as follows: in the solution adopted in this embodiment, after step S403 the method further includes: S406, obtaining the number of position changes of the tracking target within a set time; S407, judging whether the number of position changes is greater than or equal to a set count threshold, and if so, judging the movement to be a valid movement; S408, determining the movement path of the valid movement, randomly sampling several points from the movement path, and determining several sets of position coordinates; and S409, judging whether each set of position coordinates falls within the critical protection area, and if the number of sets falling within the critical protection area exceeds a set value, determining that the tracking target has entered the critical protection area.
The beneficial effects of the above technical solution are: with the solution provided in this embodiment, after step S403 the method further obtains the number of position changes of the tracking target within a set time; judges whether the number of position changes is greater than or equal to a set count threshold and, if so, judges the movement to be a valid movement; determines the movement path of the valid movement, randomly samples several points from it, and determines several sets of position coordinates; and judges whether each set of position coordinates falls within the critical protection area, determining that the tracking target has entered the critical protection area when the number of sets inside it exceeds a set value. This improves the accuracy of judging the relationship between the tracking target and the area range, reduces the amount of computation, and yields a more accurate result.
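Steps S406-S409 can be sketched as follows. All thresholds (minimum number of position changes, sample count, required hits) are illustrative assumptions, and a simple rectangle stands in for the polygon test that the patent performs with PNPoly.

```python
import random

def in_region(pt, region):
    """Axis-aligned-rectangle stand-in for the region test; the method in the
    text uses PNPoly on an arbitrary polygon. region = (x0, y0, x1, y1)."""
    (x, y), (x0, y0, x1, y1) = pt, region
    return x0 <= x <= x1 and y0 <= y <= y1

def target_in_region(path, region, min_moves=3, n_samples=5, min_hits=3, seed=0):
    """S406-S409 sketch: require enough position changes (a valid movement),
    then randomly sample path points and count how many fall in the region.
    All thresholds here are illustrative assumptions."""
    rng = random.Random(seed)
    # S406/S407: count position changes along the path within the time window.
    moves = sum(1 for a, b in zip(path, path[1:]) if a != b)
    if moves < min_moves:
        return False  # not a valid movement, no further check
    # S408: randomly sample several coordinates from the movement path.
    samples = [rng.choice(path) for _ in range(n_samples)]
    # S409: the target is inside if enough samples land in the region.
    hits = sum(1 for p in samples if in_region(p, region))
    return hits >= min_hits

path = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]
inside = target_in_region(path, (0, 0, 10, 10))
```

Requiring a minimum number of position changes filters out jitter from a stationary detection, and voting over a few random path samples is cheaper than testing every frame against the polygon.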
In another embodiment, S401 includes:
S4011, obtaining the input image with the aid of an image and video processing library, and extracting a first component from the input image, the first component including a photometric distortion value and a geometric distortion value;
S4012, calculating a feature value for each pixel in the image according to the first component, the feature value being the probability of distortion determined from the photometric distortion value and the geometric distortion value;
S4013, calculating a distortion adjustment parameter for each pixel according to the feature value;
S4014, performing noise detection on each pixel in the input image to judge whether the pixel is a noise point;
S4015, when a pixel is not a noise point, performing pixel correction on that pixel using the distortion adjustment parameter;
S4016, obtaining, based on the pixel-corrected image, the range of the area that requires critical protection.
The working principle of the above technical solution is as follows: in the solution adopted in this embodiment, S401 includes: S4011, obtaining the input image with the aid of an image and video processing library and extracting a first component from it, the first component including a photometric distortion value and a geometric distortion value; S4012, calculating a feature value for each pixel according to the first component, the feature value being the probability of distortion determined from the photometric distortion value and the geometric distortion value; S4013, calculating a distortion adjustment parameter for each pixel according to the feature value; S4014, performing noise detection on each pixel to judge whether it is a noise point; S4015, when a pixel is not a noise point, performing pixel correction on it using the distortion adjustment parameter; and S4016, obtaining, based on the pixel-corrected image, the range of the area that requires critical protection.
The beneficial effects of the above technical solution are: the solution provided in this embodiment obtains the input image with the aid of an image and video processing library and extracts a first component including a photometric distortion value and a geometric distortion value; calculates a feature value for each pixel according to the first component, the feature value being the probability of distortion determined from the photometric and geometric distortion values; calculates a distortion adjustment parameter for each pixel according to the feature value; performs noise detection on each pixel to judge whether it is a noise point; corrects each non-noise pixel using the distortion adjustment parameter; and obtains, based on the pixel-corrected image, the range of the area that requires critical protection. The solution corrects image distortion, strengthens the contours of the image, makes the image clearer, and improves image quality, further improving the accuracy of determining the critical protection area from the image.
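The per-pixel flow of S4011-S4016 can be sketched in skeleton form. The patent gives no concrete formulas here, so the distortion-probability combination, the gain formula, and the noise test below are all assumptions made purely to show the control flow.

```python
# Illustrative skeleton of the per-pixel correction flow in S4011-S4016.
# The distortion model, noise test and gain formula are assumptions made
# for illustration; this document does not specify concrete formulas.

def correct_image(img, photometric, geometric, noise_thresh=200):
    """img: 2-D list of gray values; photometric/geometric: per-pixel
    distortion values in [0, 1]. Returns the corrected image."""
    out = []
    for i, row in enumerate(img):
        new_row = []
        for j, v in enumerate(row):
            # S4012: feature value = probability that this pixel is distorted.
            prob = 0.5 * photometric[i][j] + 0.5 * geometric[i][j]
            # S4013: adjustment parameter derived from that probability.
            gain = 1.0 - 0.2 * prob
            # S4014/S4015: pixels flagged as noise (here: over-bright outliers)
            # are left for separate handling; only non-noise pixels are
            # corrected with the adjustment parameter.
            new_row.append(v if v >= noise_thresh else v * gain)
        out.append(new_row)
    return out

img = [[100, 250], [120, 80]]
photometric = [[1.0, 1.0], [0.0, 0.5]]
geometric = [[1.0, 1.0], [0.0, 0.5]]
fixed = correct_image(img, photometric, geometric)
```

The point of the skeleton is the ordering: distortion probability first, adjustment parameter second, and correction applied only where the noise test passes.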
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the invention. If such modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to encompass them as well.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310773744.6A (granted as CN116977920B) | 2023-06-28 | 2023-06-28 | Critical protection method for multi-zone type multi-reasoning early warning mechanism |
| Publication Number | Publication Date |
|---|---|
| CN116977920A | 2023-10-31 |
| CN116977920B | 2024-04-12 |