CN118942163B - Intelligent safety early warning method and system applied to high-altitude operation - Google Patents

Intelligent safety early warning method and system applied to high-altitude operation

Info

Publication number
CN118942163B
CN118942163B
Authority
CN
China
Prior art keywords
dynamic
subject
current
action
main body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411422395.4A
Other languages
Chinese (zh)
Other versions
CN118942163A (en)
Inventor
汪炳曦
孙会娟
邓红兵
王志雄
郭万里
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Stridetop Technology Co ltd
Original Assignee
Wuhan Stridetop Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Stridetop Technology Co ltd
Priority to CN202411422395.4A
Publication of CN118942163A
Application granted
Publication of CN118942163B
Status: Active
Anticipated expiration

Abstract

The application discloses an intelligent safety early warning method and system applied to high-altitude operation. The method comprises: obtaining an image of the high-altitude operation at the current moment and carrying out image recognition on it; in the case that a monitoring subject comprises exactly one dynamic subject, carrying out image recognition on a first target subarea; positioning a first current key point of the dynamic subject at the current moment and determining a first current action state of the dynamic subject; determining an ending action state according to the first target subarea at the current moment, and predicting the action duration from the first current action state to the ending action state; determining whether the monitoring subject further comprises a static subject; if the first safety risk coefficient of the dynamic subject is larger than a first preset value, obtaining the first target subarea at each of the next N moments one by one; predicting the motion track of the dynamic subject; and determining and displaying the action risk level of the dynamic subject.

Description

Intelligent safety early warning method and system applied to high-altitude operation
Technical Field
The embodiment of the application relates to the technical field of image recognition, in particular to an intelligent safety early warning method and system applied to high-altitude operation.
Background
High-altitude operation refers to work activities performed at a height above the ground or another horizontal surface, and is common in fields such as building construction, power and communications, and warehousing. Because of the inherent falling risk of aerial work, necessary safety measures are required to ensure the safety of high-altitude workers.
Safety during high-altitude operation must be ensured through safety equipment (such as safety helmets, safety belts and anti-skid shoes), fall protection devices (such as guardrails and safety nets) and the working platform, and the operation process must be monitored in real time so that accidents can be handled promptly and rescue can be carried out at any moment.
Traditional high-altitude operation safety monitoring is simple and passive: it mainly relies on manual inspection or fixed cameras installed on the working platform, making omnidirectional, full-period real-time monitoring difficult. Manual inspection is easily affected by subjective factors and cannot achieve continuous 24-hour monitoring. Although fixed cameras can monitor continuously, they lack intelligent analysis capability and cannot actively identify dangerous behaviors. In addition, the high-altitude operation environment is complex and changeable, with many potential risk factors, so safety conditions are difficult to evaluate comprehensively with simple sensors.
Because of these problems, traditional high-altitude operation safety monitoring methods cannot comprehensively evaluate the safety of high-altitude operation; even when workers wear safety equipment, other unmonitored risk factors may still cause safety accidents, so the accuracy of safety monitoring is low.
Based on the above problems, there is currently no effective solution.
Disclosure of Invention
The embodiment of the application provides an intelligent safety early warning method and system applied to high-altitude operation, which are used for comprehensively evaluating the safety of the high-altitude operation and improving the accuracy of safety monitoring.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical scheme:
In a first aspect, an intelligent safety early warning method applied to overhead operation is provided, the method comprising:
Acquiring an aerial work image at the current moment, and carrying out image recognition on the aerial work image so as to divide the aerial work image into a plurality of subareas according to an aerial work project, wherein each subarea comprises at least one monitoring main body, and the monitoring main body comprises a static main body and/or a dynamic main body;
Taking the subarea as a first target subarea under the condition that the monitoring subject in one subarea comprises the dynamic subject and the dynamic subject is one, and carrying out image recognition on the first target subarea to obtain the dynamic subject in the first target subarea;
Positioning a first current key point of the dynamic main body at the current moment, and determining a first current action state of the dynamic main body according to the first current key point;
determining an ending action state according to the first target subarea at the current moment, and predicting action duration from the first current action state to the ending action state based on the first current action state and the ending action state;
determining whether the monitoring subject in the first target subregion further comprises the static subject;
determining a first safety risk coefficient of the dynamic body according to the first current action state and the static body under the condition that the monitoring body further comprises the static body;
under the condition that the first security risk coefficient is larger than a first preset value, acquiring the first target subarea at each of the next N moments one by one, wherein the span from the current moment through the next N moments is smaller than or equal to the action duration;
Predicting to obtain a motion trail of the dynamic main body according to the change of key points of the dynamic main body in the first target sub-area at the current moment and the first target sub-areas at the next N moments;
and determining and displaying the action risk level of the dynamic main body according to the motion trail of the dynamic main body.
In a possible implementation manner of the first aspect, the method further includes:
Taking the subarea as a second target subarea under the condition that the monitoring main body in one subarea comprises the dynamic main bodies and the number of the dynamic main bodies is at least two, and acquiring preset data of the second target subarea at the current moment, wherein the preset data comprises the standard number of the dynamic main bodies and the standard action data of each dynamic main body;
under the condition that the number of the dynamic main bodies is consistent with the standard number and the standard action data of each dynamic main body is consistent, locating a second current key point position of each dynamic main body at the current time, and determining a second current action state of each dynamic main body at the current time according to each second current key point position;
determining whether the monitoring subject in the second target subregion further comprises the static subject;
determining a second security risk coefficient of each dynamic body according to the second current key point of each dynamic body and the static body when the monitoring body further comprises the static body;
and under the condition that any one of the second safety risk coefficients is larger than a second preset value, sending out first alarm information.
In another possible implementation manner of the first aspect, the method further includes:
under the condition that the number of the dynamic main bodies is consistent with the standard number and the standard action data of each dynamic main body is inconsistent, analyzing each dynamic main body in parallel to obtain the action data of each dynamic main body in a preset time period, wherein the ending time of the preset time period is the current time;
calculating the action deviation degree between the action data and the corresponding standard action data;
Creating an interaction matrix, and calculating the relative positions and interactions between the dynamic bodies in real time, wherein the interaction matrix is used for storing the relative positions and the interactions in real time;
Calculating a third safety risk coefficient of each dynamic main body according to the action deviation degree and the interaction matrix by adopting an individual safety risk index formula;
and sending out second alarm information under the condition that any one of the third safety risk coefficients is larger than a third preset value.
In another possible implementation manner of the first aspect, the locating a first current key point of the dynamic body at a current time and determining a first current action state of the dynamic body according to the first current key point include:
performing key point positioning on the dynamic main body at the current moment by adopting a preset skeleton key point detection algorithm to obtain a first current key point of the dynamic main body at the current moment;
Acquiring three-dimensional coordinates of each first current key point;
According to the three-dimensional coordinates of each first current key point, calculating the relative distance and angle between the first current key points;
Inputting all the relative distances and the angles into a pre-trained classifier to obtain probability distribution, wherein the probability distribution is used for representing the possibility that the current action state of the dynamic main body belongs to a predefined action state;
And taking the current action state with the highest probability as the first current action state.
In another possible implementation manner of the first aspect, the predicting, based on the first current action state and the end action state, an action duration from the first current action state to the end action state includes:
Calculating a first vector of the first current action state and a second vector of the ending action state, and acquiring main body characteristics of the dynamic main body;
Inputting the first vector and the second vector into a pre-trained motion state space model to obtain motion change complexity, wherein the motion state space model is used for calculating Euclidean distance between the first vector and the second vector, and the Euclidean distance is used for representing the motion change complexity;
Inputting the main body characteristics, the first vector, the second vector and the motion change complexity into a pre-trained deep learning model to obtain motion duration.
In another possible implementation manner of the first aspect, the determining a first security risk coefficient of the dynamic body according to the first current action state and the static body includes:
Determining the gesture stability of the first current action state by adopting a biomechanical model;
Acquiring the current height of the dynamic main body, and determining the risk probability of the static main body to the dynamic main body according to the current height and the static main body;
and determining a first safety risk coefficient of the dynamic main body according to the gesture stability and the risk probability.
In another possible implementation manner of the first aspect, the determining, according to the current altitude and the static subject, a risk probability of the static subject to the dynamic subject includes:
Mapping the dynamic main body and the static main bodies into a preset three-dimensional coordinate system, and calculating a first distance and a first height difference between the dynamic main body and each static main body;
and calculating, by adopting a preset risk function, the risk probability of the static main body to the dynamic main body according to the first distance and the first height difference.
In another possible implementation manner of the first aspect, the determining a first security risk coefficient of the dynamic body according to the gesture stability and the risk probability includes:
and carrying out weighted summation on the gesture stability and the risk probability to obtain the first safety risk coefficient of the dynamic main body.
In a second aspect, the present application provides an electronic device comprising:
a memory configured to store instructions; and
a processor configured to call the instructions from the memory and to implement, when executing the instructions, the intelligent safety early warning method applied to aerial work.
In a third aspect, the present application provides an intelligent safety early warning system applied to overhead operation, comprising:
A camera;
and the electronic equipment is connected with the camera.
According to the technical scheme, comprehensive and real-time monitoring can be realized by acquiring the aerial work image at the current moment and performing intelligent analysis, overcoming the limitation of traditional methods that rely on manual inspection or fixed cameras; by locating key points, determining the action state and predicting the action duration, the behavior of aerial workers can be analyzed dynamically. In addition, by calculating the safety risk coefficient and determining the action risk level from the predicted motion track of the dynamic subject, the safety of aerial work can be comprehensively assessed; the influence of static subjects on the dynamic subject is also considered, reflecting comprehensive treatment of the complex and changeable aerial work environment. In conclusion, the scheme provides comprehensive, accurate and real-time safety monitoring, can actively identify potential risks, and effectively improves both the safety of overhead operation and the accuracy of safety monitoring.
Additional features and advantages of embodiments of the application will be set forth in the detailed description which follows.
Drawings
Fig. 1 is a schematic flow chart of an intelligent safety early warning method applied to overhead operation according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an aerial work image after image recognition according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an intelligent safety warning system applied to overhead operation according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the detailed description described herein is merely for illustrating and explaining the embodiments of the present application, and is not intended to limit the embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that, if directional indications (such as up, down, left, right, front, and rear) are referred to in the embodiments of the present application, the directional indications are merely used to explain the relative positional relationship, movement conditions, and the like between the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indications change correspondingly.
In addition, if there is a description of "first", "second", etc. in the embodiments of the present application, the description of "first", "second", etc. is for descriptive purposes only and is not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but it is necessary to base that the technical solutions can be realized by those skilled in the art, and when the technical solutions are contradictory or cannot be realized, the combination of the technical solutions should be considered to be absent and not within the scope of protection claimed in the present application.
Fig. 1 schematically shows a flow chart of an intelligent safety precaution method applied to overhead operation according to an embodiment of the application. As shown in fig. 1, an embodiment of the present application provides an intelligent security early warning method applied to an aerial work, which may include the following steps.
S110, acquiring an aerial work image at the current moment, and carrying out image recognition on the aerial work image to divide the aerial work image into a plurality of subareas according to an aerial work project, wherein each subarea comprises at least one monitoring main body, and the monitoring main body comprises a static main body and/or a dynamic main body;
S120, under the condition that the monitoring subject in one of the subareas comprises a dynamic subject and the dynamic subject is one, taking the subarea as a first target subarea, and carrying out image recognition on the first target subarea to obtain the dynamic subject in the first target subarea;
S130, positioning a first current key point of the dynamic main body at the current moment, and determining a first current action state of the dynamic main body according to the first current key point;
S140, determining an ending action state according to the first target subarea at the current moment, and predicting action duration from the first current action state to the ending action state based on the first current action state and the ending action state;
S150, determining whether the monitoring subject in the first target subregion further comprises a static subject;
S160, under the condition that the monitoring main body further comprises a static main body, determining a first safety risk coefficient of the dynamic main body according to the first current action state and the static main body;
S170, under the condition that the first security risk coefficient is larger than a first preset value, acquiring first target subregions of the next N times one by one, wherein the set of the current time and the next N times is smaller than or equal to the action duration time;
s180, predicting and obtaining a motion track of the dynamic main body according to the change of key points of the dynamic main body in the first target sub-area at the current moment and the first target sub-areas at the next N moments;
s190, determining and displaying the action risk level of the dynamic main body according to the motion trail of the dynamic main body.
Firstly, high-definition cameras installed on the working site (such as an aerial working platform or a suspended basket) collect the aerial work image at the current moment in real time, and image recognition is performed on it. Specifically, a deep learning algorithm (e.g., YOLO) may be used for object detection and semantic segmentation. The entire aerial work image may be divided into a plurality of sub-areas according to predefined aerial work item types (e.g., scaffold work, tower work, etc.). Each sub-area contains at least one monitoring subject, which may be static (e.g., equipment such as scaffolding or towers) or dynamic (e.g., aerial workers). Fig. 2 shows an aerial work image after image recognition. As shown in fig. 2, the scene is a basket scene, in which a dynamic subject (an aerial worker) and static subjects (an anti-fall pad, safety tools and a fence) can be obtained through image recognition, and whether the safety tools are worn normally can be determined by checking whether they are detected.
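The sub-area division described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the detector itself (e.g., YOLO) is assumed to have already produced labeled bounding boxes, and the label names, region names, and coordinates are invented for the example.

```python
from dataclasses import dataclass

# Assumed label sets; a real detector's class list would differ.
DYNAMIC_LABELS = {"worker"}
STATIC_LABELS = {"guardrail", "anti_fall_pad", "fence"}

@dataclass
class Detection:
    label: str
    box: tuple  # (x1, y1, x2, y2) in pixels

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def assign_subregions(detections, regions):
    """Group detections into predefined sub-areas by the box center.

    regions maps a sub-area name to its (x1, y1, x2, y2) rectangle.
    """
    grouped = {name: [] for name in regions}
    for det in detections:
        cx, cy = center(det.box)
        for name, (rx1, ry1, rx2, ry2) in regions.items():
            if rx1 <= cx <= rx2 and ry1 <= cy <= ry2:
                grouped[name].append(det)
                break
    return grouped

def classify_subjects(dets):
    """Split the detections of one sub-area into dynamic and static subjects."""
    dynamic = [d for d in dets if d.label in DYNAMIC_LABELS]
    static = [d for d in dets if d.label in STATIC_LABELS]
    return dynamic, static
```

A sub-area whose dynamic list has exactly one entry would then be treated as a first target subarea.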
After division of the sub-area is completed, further analysis is performed for the sub-area in which only one dynamic body (e.g., a single aerial worker) is included. The sub-region is defined as a first target sub-region and image recognition is performed thereon. Specifically, a preset human body posture estimation algorithm can be adopted to accurately position and identify the posture of the dynamic main body. The human body posture estimation algorithm may be used to identify key points of a human body such as a head, a shoulder, an elbow, a wrist, a hip, a knee, an ankle, and the like.
The gesture and the action of the dynamic main body can be accurately described through the spatial relationship of the key points, and the first current action state of the dynamic main body is obtained. For example, for a worker working on a high-altitude scaffold, it is possible to recognize whether he is standing, bending over or squatting. By focusing on a single dynamic main body, the fine change of the individual behavior can be more accurately captured, so that the precision and timeliness of the safety early warning are improved.
Based on the dynamic subject image obtained in step S120, the key points of the dynamic subject at the current moment are further located. A skeleton keypoint detection algorithm may be employed to pinpoint 17 or more keypoints of the human body, including the top of the head, neck, shoulders, elbows, wrists, hips, knees and ankles. Each key point has corresponding two-dimensional or three-dimensional coordinates. According to the relative positions and angular relationships of the key points, the current action state of the dynamic subject can be determined. For example, if the heights of the hip and knee are close and the upper-body inclination angle is large, a stooping state can be judged; if all key points are low and horizontally distributed, a lying state can be judged.
In a particular implementation, this may be realized through rule definitions. For example, a rule may define that if the hip height is less than 50% of the shoulder height and the knee angle is greater than 150 degrees, a sitting posture is determined.
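The rule in the text can be expressed directly in code. The sketch below assumes 2D keypoints in image coordinates (y grows downward) and covers only the sitting rule stated above, defaulting to standing; keypoint names and test coordinates are illustrative.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def classify_pose(kp):
    """kp maps keypoint names to (x, y) pixels; heights are measured up from the ankle."""
    ground = kp["ankle"][1]
    hip_h = ground - kp["hip"][1]
    shoulder_h = ground - kp["shoulder"][1]
    knee_angle = joint_angle(kp["hip"], kp["knee"], kp["ankle"])
    # Rule from the text: hip below 50% of shoulder height and a near-straight knee.
    if hip_h < 0.5 * shoulder_h and knee_angle > 150:
        return "sitting"
    return "standing"  # simplified default; a real system would cover more states
```

A production system would add analogous rules for stooping, squatting and lying, or replace the rules with the trained classifier described in the claims.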
After determining the current motion state of the dynamic body, the duration of the motion is predicted. First, the expected ending motion state in the region is determined according to the characteristics (such as region type, area, height, etc.) of the first target sub-region at the current moment and the predefined job specification.
For example, if the current area is a work platform on a scaffold, the end-of-motion state may be standing or squatting. Then, based on the current action state and the expected end state, a time series prediction model is used to predict the time required to transition from the current state to the end state. The time series prediction model may consider a number of factors, such as the average duration of similar action sequences in the historical data, the stability of the current action, the job difficulty, etc. For example, if the current state is bowing and the ending state is standing, the time series prediction model may predict that this process takes 5-10 seconds based on past data.
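As a stand-in for the trained time-series model, the duration lookup can be sketched as an average over historical transitions of the same state pair. The history records and the 10-second fallback below are invented for illustration.

```python
from statistics import mean

# Assumed sample of historical transitions: (from_state, to_state, seconds).
HISTORY = [
    ("bowing", "standing", 6.0),
    ("bowing", "standing", 8.0),
    ("squatting", "standing", 4.0),
]

def predict_duration(current, end, history=HISTORY, default=10.0):
    """Average duration of matching historical transitions, else a default.

    A trained model would also weight in action stability and job difficulty,
    as the text describes; this sketch uses the state pair alone.
    """
    samples = [t for (a, b, t) in history if a == current and b == end]
    return mean(samples) if samples else default
```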
After determining the current motion state and the predicted motion duration of the dynamic subject, it is then confirmed whether a static subject is also contained in the first target subregion. This step may be accomplished by:
1. Finely segmenting the sub-region using a semantic segmentation algorithm, classifying each pixel in the image as dynamic subject, static subject, or background.
2. Identifying all objects in the sub-region using an object detection algorithm (e.g., YOLO) and classifying them as dynamic or static.
3. Comparing successive frames to identify moving parts (dynamic subjects) and stationary parts (static subjects or background).
For example, for a scaffold work area, a scaffold structure (static body) and a worker (dynamic body) may be detected. The presence of a static body may affect the security conditions of a dynamic body. For example, safety risks may increase when workers approach the edges of scaffolding or large equipment.
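The third option above, frame comparison, can be sketched with simple displacement thresholding of tracked object centers. Object ids, coordinates and the 5-pixel tolerance are assumptions for the example.

```python
def split_dynamic_static(prev, curr, tol=5.0):
    """Classify tracked objects by displacement between consecutive frames.

    prev and curr map an object id to its box center (x, y); objects that
    moved more than tol pixels are treated as dynamic, the rest as static.
    """
    dynamic, static = [], []
    for obj_id, (x, y) in curr.items():
        px, py = prev.get(obj_id, (x, y))
        moved = ((x - px) ** 2 + (y - py) ** 2) ** 0.5 > tol
        (dynamic if moved else static).append(obj_id)
    return dynamic, static
```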
In the case where a static subject is confirmed to be present in the first target subregion, a first security risk coefficient for the dynamic subject may be determined based on the first current motion state of the dynamic subject and the characteristics of the static subject. Specifically, this can be achieved by:
1. Extracting features of the dynamic subject (e.g., pose, position, velocity) and features of the static subject (e.g., type, position, size).
2. Constructing a risk matrix based on predefined risk assessment criteria, mapping combinations of first current action states and static subjects to different risk levels.
3. Calculating the risk coefficient using a machine learning model (e.g., random forest or neural network), whose inputs are the action state feature vector and the static subject feature vector and whose output is a risk coefficient between 0 and 1.
The calculation formula is as follows:
R=f(A,S,E);
Where R is a risk factor, A is an action state feature, S is a static body feature, and E is an environmental factor (e.g., weather, light, etc.).
For example, if a worker is detected to be working at the edge of a scaffold with loose tools nearby, a higher risk factor (e.g., 0.8) may be calculated. The sensitivity of the safety monitoring can be dynamically adjusted by calculating and updating the risk coefficient in real time, and more timely early warning is provided under the high risk condition.
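One crude way to instantiate R = f(A, S, E) without a trained model is a clamped weighted combination. The 0.5/0.5 weights, the use of the worst static-subject risk, and the multiplicative environmental factor are all assumptions of this sketch, not the patent's formula.

```python
def risk_coefficient(action_risk, static_risks, env_factor):
    """Sketch of R = f(A, S, E).

    action_risk: risk score of the current action state A, in [0, 1].
    static_risks: per-static-subject proximity risks S; the worst one is used.
    env_factor: environmental multiplier E (weather, light), 1.0 = neutral.
    Returns a value clamped to [0, 1].
    """
    worst_static = max(static_risks, default=0.0)
    r = (0.5 * action_risk + 0.5 * worst_static) * env_factor
    return min(max(r, 0.0), 1.0)
```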
And when the calculated first security risk coefficient is larger than a first preset value, acquiring first target subregions at the next N moments one by one.
Specifically, a first preset value, for example 0.7, may be set according to historical data, and the time window to be monitored is determined based on the previously predicted action duration T. For example, if T is 10 seconds, monitoring may be set for the next 5 seconds (N=5). Within the determined time window, the image sequence of the first target sub-region is acquired continuously at a higher frequency, e.g., 10 frames per second.
The change of the key points of the dynamic subject in the first target subregion may be handled by:
1. Tracking the position change of the key points of the dynamic subject in consecutive frames using an optical flow algorithm.
2. Fitting the sequence of key point positions with least squares, Kalman filtering, or similar algorithms to obtain a smooth motion track.
3. Predicting the positions at the N future points in time from the known track data using a time series prediction model (e.g., LSTM).
For example, for a worker moving on a scaffold, his path of movement within the next 5 seconds, including possible position and speed changes, can be predicted. The trajectory prediction considers not only historical positions but also the current speed and acceleration, and can therefore predict nonlinear motion more accurately.
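The "speed plus acceleration" extrapolation can be sketched with a constant-acceleration step over the last three observed positions, as a placeholder for the Kalman or LSTM predictor named above. The point values and the unit time step are illustrative.

```python
def predict_track(points, n, dt=1.0):
    """Extrapolate n future (x, y) positions from an observed keypoint track.

    Uses the last velocity and the last change in velocity (acceleration)
    of the final three observations; needs len(points) >= 3.
    """
    (x0, y0), (x1, y1), (x2, y2) = points[-3:]
    vx, vy = x2 - x1, y2 - y1                      # latest velocity
    ax, ay = (x2 - x1) - (x1 - x0), (y2 - y1) - (y1 - y0)  # latest acceleration
    out = []
    x, y = x2, y2
    for _ in range(n):
        vx += ax
        vy += ay
        x += vx * dt
        y += vy * dt
        out.append((x, y))
    return out
```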
By accurately predicting the motion profile of the dynamic body, a movement pattern that may cause danger, such as abrupt acceleration, abnormal steering, or approaching a dangerous area, can be recognized in advance. The method provides predictability for the safety early warning system, so that the safety early warning system can react before actual danger occurs, and the accident prevention capability is greatly improved.
And determining and displaying the action risk level based on the predicted dynamic main body motion trail. The specific implementation method is as follows:
1. Dangerous areas, such as aerial work platform edges and equipment movement ranges, are defined in the working environment in advance.
2. And comparing the predicted motion trail with a predefined dangerous area, and calculating indexes such as the minimum distance between the trail and the dangerous area, the probability of entering the dangerous area and the like.
3. Speed and acceleration changes in the trajectory are evaluated, and abnormal rapid movements or abrupt stops are identified.
4. The composite risk score is calculated using a weighted sum.
5. The risk score is mapped to predefined levels, such as low, medium, high, extremely high four levels.
6. Visual display: the risk level is displayed in real time on the monitoring interface using color coding (e.g., green, yellow, orange, red), possibly including a visualization of the predicted trajectory.
According to this embodiment, full-range real-time monitoring can be realized by acquiring the aerial work image at the current moment and performing intelligent analysis, overcoming the limitation of traditional methods that rely on manual inspection or fixed cameras; by locating key points, determining the action state and predicting the action duration, the behavior of aerial workers can be analyzed dynamically. In addition, by calculating the safety risk coefficient and determining the action risk level from the predicted motion track of the dynamic subject, the safety of aerial work can be comprehensively assessed; the influence of static subjects on the dynamic subject is also considered, reflecting comprehensive treatment of the complex and changeable aerial work environment. In conclusion, the scheme provides comprehensive, accurate and real-time safety monitoring, can actively identify potential risks, and effectively improves both the safety of overhead operation and the accuracy of safety monitoring.
In one implementation manner of this embodiment, the method further includes the following steps:
S210, taking a subarea as a second target subarea under the condition that the monitoring main body in the subarea comprises dynamic main bodies and there are at least two dynamic main bodies, and acquiring preset data of the second target subarea at the current moment, wherein the preset data comprise the standard quantity of the dynamic main bodies and the standard action data of each dynamic main body;
S220, under the condition that the number of the dynamic main bodies is consistent with the standard number and the standard action data of each dynamic main body is consistent, locating a second current key point position of each dynamic main body at the current moment, and determining a second current action state of each dynamic main body at the current moment according to each second current key point position;
S230, determining whether the monitoring subject in the second target subregion further comprises a static subject;
S240, under the condition that the monitoring main body further comprises a static main body, determining a second safety risk coefficient of each dynamic main body according to the second current key point position of each dynamic main body and the static main body;
S250, sending out first alarm information under the condition that any one of the second safety risk coefficients is larger than a second preset value.
When the monitoring main body in a certain subarea comprises at least two dynamic main bodies, defining the subarea as a second target subarea, and acquiring preset data of the second target subarea at the current moment. The preset data comprises two key information, namely the standard quantity of dynamic main bodies and the standard action data of each dynamic main body. The preset data is typically predefined according to the requirements of a particular scenario or task.
For a scenario of a double collaboration task, the preset data may specify a standard number of 2, and describe in detail the standard sequence of actions that each dynamic body should perform. Standard action data may include information on key point location, type of action, duration of action, etc.
After the preset data are acquired, the number of actually detected dynamic main bodies needs to be compared with the standard number. If they are inconsistent, the current scene violates the preset rules (for example, a task in which only one person is allowed to operate). In this case, the system will issue a warning to alert the relevant personnel to the safety or compliance issue.
The alert may take a variety of forms, such as an audible alarm, a screen prompt, or a notification to a manager. For example, a conspicuous red warning box may be displayed on the monitoring interface, with the content "Warning: multi-person operation detected, in violation of safety regulations". At the same time, an audible alarm may be triggered and a notification message pushed to the safety supervisor's mobile device.
In the embodiment, the operation behaviors which do not accord with the preset rules can be found and prevented in time, and the operation safety and normalization are effectively improved. By comparing the actual situation with preset standards, potential risk factors can be rapidly identified, and corresponding intervention measures can be adopted.
When the number of the dynamic main bodies is consistent with the standard number and the standard action data of each dynamic main body is also consistent, the second current key point position of each dynamic main body at the current time is further required to be positioned, and the second current action state of each dynamic main body is determined according to the second current key point position.
Specifically, first, the image is processed by using a deep learning model, and the second current key point of each dynamic body is extracted. Key points generally include main joints of the human body such as the head, neck, shoulders, elbows, wrists, hips, knees and ankles. Each keypoint has its three-dimensional coordinates (x, y, z), where x and y represent positions on the image plane and z represents depth information.
After the key points are obtained, under the condition that the number of the dynamic main bodies is consistent with the standard number and the standard action data of each dynamic main body is consistent, the second current action state of each dynamic main body is required to be determined according to the second current key points. This may be achieved by comparing the second current keypoint with a predefined action template. The similarity between the second current key point and the action template can be calculated first, and whether the second current action state accords with the expected standard action can be judged according to the calculated similarity. And if the similarity is higher than a preset similarity threshold, the second current action state is considered to be consistent with the standard action.
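The template comparison described above can be sketched as follows, using cosine similarity over flattened keypoint coordinates; the 0.95 threshold and the flattening scheme are illustrative assumptions, since the embodiment does not fix a particular similarity measure:

```python
import math

def flatten(keypoints):
    """Flatten a list of (x, y, z) keypoints into one feature vector."""
    return [c for p in keypoints for c in p]

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def matches_standard(current_kps, template_kps, threshold=0.95):
    """True if the current pose is similar enough to the action template."""
    return cosine_similarity(flatten(current_kps), flatten(template_kps)) >= threshold
```

A pose compared against its own template yields similarity 1.0 and therefore matches; orthogonal feature vectors yield similarity 0.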
After determining the number of dynamic main bodies in the second target subregion, the next step is to determine whether the monitored main bodies in that subregion also include static main bodies. This comprehensively evaluates the composition of the entire scene, since the presence of static main bodies may have a significant impact on the safety of dynamic main bodies. A static main body may be a stationary device, an obstacle, a hazardous area, etc.
The process of determining the presence or absence of a static main body involves image segmentation and object detection techniques. In implementation, a deep learning model, such as a Mask R-CNN model, may be used to detect and segment various objects in the image; not only dynamic main bodies (aerial work personnel) but also various static objects can be identified.
The pre-trained Mask R-CNN model may first be configured and loaded. Then, the input image is predicted, and the category, bounding box, and mask information of all detected objects are acquired. Finally, it is determined whether a static main body is present in the scene by checking whether the detected object classes contain a predefined static object class (e.g., a machine device, a wall, etc.). For example, if a large mechanical device (static main body) is detected in a scene, it can be evaluated more accurately whether the behavior of a dynamic main body (aerial work person) is safe, whether a collision with the device is likely, and so on.
Under the condition that the monitoring main body is confirmed to further comprise a static main body, a second safety risk coefficient of each dynamic main body needs to be determined according to the position relation between the second current key point position of each dynamic main body and the static main body. This step involves spatial relationship analysis and risk assessment algorithms.
In one embodiment, first, spatial information of a static body is acquired, including position, size, and shape. Meanwhile, characteristics of the static body, such as whether there is a risk (e.g., high temperature equipment, sharp edges, etc.), need to be considered. Then, for each dynamic body, the shortest distance between its second current key point and the static body is calculated. This can be achieved by the following steps:
1. The static body is reduced to a geometric shape such as a rectangle, circle or polygon.
2. For each keypoint of the dynamic body, its shortest distance to the simplified geometry of the static body is calculated.
3. And taking the minimum distance in all the second current key points as the shortest distance between the dynamic main body and the static main body.
Next, a second security risk factor is determined based on the calculated shortest distance and other related factors, which may be the speed of the dynamic body, the type of motion, the degree of risk of the static body, etc. A weighted summation approach may be used.
After the second security risk coefficient of each dynamic body is calculated, the second security risk coefficient needs to be compared with a second preset value. And if the second security risk coefficient of any one dynamic main body exceeds a second preset value, sending out first alarm information.
According to the method, the first alarm information is sent when the second safety risk coefficient of any dynamic main body exceeds the second preset value, which effectively improves the timeliness and pertinence of early warning. Because the influence of the static main body is considered when calculating the second safety risk coefficient, the operating environment can be considered comprehensively, the adaptability to complex high-altitude operation scenes is effectively enhanced, the precision and efficiency of safety monitoring are improved, and more comprehensive and accurate safety guarantees are provided for multi-person collaborative operation.
In one implementation manner of this embodiment, the method further includes the following steps:
S310, under the condition that the number of the dynamic main bodies is consistent with the standard number and the standard action data of each dynamic main body is inconsistent, analyzing each dynamic main body in parallel to obtain the action data of each dynamic main body in a preset time period, wherein the ending time of the preset time period is the current time;
S320, calculating the action deviation degree between the action data and the corresponding standard action data;
S330, creating an interaction matrix, and calculating the relative positions and interactions between the dynamic main bodies in real time, wherein the interaction matrix is used for storing the relative positions and interactions in real time;
S340, calculating a third safety risk coefficient of each dynamic main body according to the action deviation degree and the interaction matrix by adopting an individual safety risk index formula;
and S350, sending out second alarm information under the condition that any one of the third safety risk coefficients is larger than a third preset value.
In the case that the number of dynamic bodies is consistent with the standard number but the standard motion data of each dynamic body is inconsistent, parallel analysis is required to be performed on each dynamic body so as to acquire the motion data of each dynamic body in a preset time period.
Specifically, the ending time of the preset time period is the current time. In implementation, the length of the preset time period needs to be determined first; for example, it can be set to 30 seconds, 1 minute or 5 minutes, and the specific length can be adjusted according to the actual application scene. Then, motion information such as the motion trajectory, speed and acceleration of each dynamic main body within the preset time period is acquired in real time through sensors, cameras and the like. For the case that the dynamic main body is an overhead operator, detailed actions such as walking posture and arm swing can be collected as action data; for the case that the dynamic main body is a vehicle, operations such as steering, acceleration and braking can be collected as action data.
After the collected motion data is preprocessed by filtering, denoising and the like, the motion deviation degree between the motion data and the corresponding standard motion data is calculated so as to evaluate the abnormal degree of the behavior of the dynamic main body.
Specifically, for each dynamic body, there is a set of standard motion data corresponding to the dynamic body. When calculating the degree of deviation, the concept of vector distance is adopted. Both motion data and standard motion data are represented as multi-dimensional vectors, each dimension representing a type of motion parameter (e.g., position, velocity, acceleration, etc.). The euclidean distance between the two vectors is then calculated, resulting in a scalar value as the degree of deviation. The specific formula is as follows:
P = √((x1 − x1′)² + (x2 − x2′)² + … + (xn − xn′)²);
Where x1, x2, …, xn represent the individual parameters of the actual motion data, x1′, x2′, …, xn′ represent the corresponding standard motion data parameters, and P is the degree of deviation.
The calculated deviation degree can be further normalized to be in the range of 0-1, so that the subsequent processing is facilitated. The normalization formula is:
Normalized deviation = (original deviation-minimum deviation)/(maximum deviation-minimum deviation).
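The deviation degree and its min-max normalization can be sketched directly from the two formulas above:

```python
import math

def deviation_degree(actual, standard):
    """Euclidean distance between actual and standard motion-parameter vectors (P)."""
    return math.sqrt(sum((a - s) ** 2 for a, s in zip(actual, standard)))

def normalize(p, p_min, p_max):
    """Min-max normalization of the deviation degree into the 0-1 range."""
    if p_max == p_min:
        return 0.0  # degenerate case: no spread in observed deviations
    return (p - p_min) / (p_max - p_min)
```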
Thereafter, an interaction matrix is created and the relative positions and interactions between the dynamic bodies are calculated in real time. The interaction matrix is an N x N square matrix, where N is the number of dynamic bodies. Each element aij in the matrix represents interaction information between the ith and jth subjects. In particular implementations, a null nxn matrix is first initialized. Then, for each pair of subjects (i, j), a relative position vector between them is calculated:
relative position vector = position of subject j-position of subject i;
This vector contains distance and direction information. The distance can be obtained by calculating the modulus of the vector:
distance = |relative position vector| = √(Δx² + Δy² + Δz²);
The direction may be represented by a unit vector:
Direction= (relative position vector)/distance;
Next, the interaction force is calculated. A simplified gravitational model may be used: force = G × mi × mj / distance²;
Where G is a constant and mi and mj are respectively the importance or influence of the two subjects.
The calculated relative position and interaction information is stored in matrix element aij. Since the interactions are symmetrical, only the upper triangular part of the matrix needs to be calculated, and the lower triangular part can directly take the transpose of the upper triangle. The diagonal elements aii are set to 0, indicating that a subject has no interaction with itself.
The interaction matrix needs to be updated in real time, and the update frequency can be set according to actual requirements, for example, the update frequency is updated once per second. At each update, the relative positions and interactions between all subject pairs are recalculated and the corresponding elements in the matrix are updated.
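The matrix construction above can be sketched as follows, assuming 3D positions, per-subject importance values, and a hypothetical constant G = 1; each stored element is a (distance, direction, force) tuple, with the mirrored element carrying the opposite direction:

```python
import math

G = 1.0  # hypothetical constant for the simplified gravitational model

def build_interaction_matrix(positions, importance):
    """N x N matrix of (distance, direction, force); diagonal stays None (no self-interaction)."""
    n = len(positions)
    m = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):  # compute the upper triangle only
            rel = tuple(positions[j][k] - positions[i][k] for k in range(3))
            dist = math.sqrt(sum(c * c for c in rel))
            direction = tuple(c / dist for c in rel)       # unit vector i -> j
            force = G * importance[i] * importance[j] / dist ** 2
            m[i][j] = (dist, direction, force)
            # symmetry: same distance and force, opposite direction
            m[j][i] = (dist, tuple(-c for c in direction), force)
    return m
```

In a live system this function would be re-run at the chosen update frequency (e.g. once per second) with the latest subject positions.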
And calculating a third safety risk coefficient of each dynamic main body according to the action deviation degree and the interaction matrix by adopting an individual safety risk index formula. The individual security risk index formula comprehensively considers the behavior abnormality degree of the dynamic main body and the interaction influence with other main bodies. The formula is as follows:
third safety risk coefficient = w1 × normalized deviation degree + w2 × interaction risk degree;
where w1 and w2 are weight coefficients, w1+w2=1 is satisfied. The normalized deviation degree is derived from the calculation result of step S320, and reflects the deviation degree of the subject behavior from the standard behavior. The interaction risk level needs to be calculated based on the interaction matrix created in step S330.
The method for calculating the interaction risk degree comprises the following steps:
1. for subject i, all elements of row i (except aii) in the interaction matrix are traversed.
2. For each non-zero element aij, a local risk value is calculated:
Local risk = distance factor × force factor × direction factor, where distance factor = 1/(distance + 1), so that a closer distance means a greater risk; force factor = min(force/threshold, 1), so that a larger force means a greater risk, capped at an upper limit; and direction factor = cos θ, where θ is the angle between the relative velocity and the relative position, so that head-on encounters carry greater risk.
3. And summing all the local risk values to obtain the total interaction risk degree.
4. The total interaction risk is normalized to the 0-1 range.
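Steps 1-4 above can be sketched as follows; the force threshold and the normalization constant are illustrative assumptions:

```python
def local_risk(distance, force, cos_theta, force_threshold=10.0):
    """Local risk = distance factor x force factor x direction factor (step 2)."""
    distance_factor = 1.0 / (distance + 1.0)          # closer -> riskier
    force_factor = min(force / force_threshold, 1.0)  # stronger -> riskier, capped
    direction_factor = cos_theta                      # head-on (cos = 1) is riskiest
    return distance_factor * force_factor * direction_factor

def interaction_risk(row_entries, max_total=1.0):
    """Steps 3-4: sum local risks over row i of the matrix, normalize into 0-1."""
    total = sum(local_risk(d, f, c) for d, f, c in row_entries)
    return min(total / max_total, 1.0)
```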
And finally, substituting the normalized deviation degree and the interaction risk degree into an individual security risk index formula to obtain a third security risk coefficient. The third safety risk coefficient comprehensively reflects the abnormality degree of the self behavior of the main body and the interaction risk with the surrounding environment, and provides a quantitative basis for subsequent alarm judgment.
And under the condition that any one of the third safety risk coefficients is larger than the third preset value, second alarm information is sent out; that is, as soon as any third safety risk coefficient is detected to exceed the third preset value, the alarm mechanism is triggered immediately. The content of the alarm information may include the alarm time (a timestamp accurate to the second), the alarm object (which dynamic main body or bodies triggered the alarm), the risk level, the risk cause (whether it arises from the subject's own abnormal behavior or from interaction risk), and suggested measures giving operational advice to reduce the risk.
The alarm information may be sent in several ways: 1. a real-time pop-up window on the control center display screen; 2. a short message or instant message to relevant personnel; 3. triggering the audible and visual alarm device; 4. automatic recording in the system log.
In the present embodiment, first, motion data of each subject is obtained by parallel analysis, and then, the degree of abnormality of the subject behavior is quantified by calculating the degree of motion deviation. Creating the interaction matrix captures complex relationships between the subjects. On the basis, an individual security risk index formula is adopted, and the self behavior and environment interaction are comprehensively considered, so that more comprehensive risk assessment is obtained. Finally, by setting a reasonable threshold value and a timely alarm mechanism, early warning of potential risks is realized. The method not only can identify the abnormal behavior of a single main body, but also can discover the potential risk generated by the interaction of a plurality of main bodies, and effectively improves the accuracy and timeliness of safety monitoring.
In one implementation manner of this embodiment, locating a first current key point of the dynamic main body at a current time, and determining a first current action state of the dynamic main body according to the first current key point includes the following steps:
S410, performing key point positioning on a dynamic main body at the current moment by adopting a preset skeleton key point detection algorithm to obtain a first current key point of the dynamic main body at the current moment;
S420, acquiring three-dimensional coordinates of each first current key point;
S430, calculating the relative distance and angle between the first current key points according to the three-dimensional coordinates of each first current key point;
S440, inputting all relative distances and angles into a pre-trained classifier to obtain probability distribution, wherein the probability distribution is used for representing the possibility that the current action state of the dynamic main body belongs to a predefined action state;
S450, taking the current action state with the highest probability as a first current action state.
Firstly, a preset skeleton key point detection algorithm is adopted to conduct key point positioning on a dynamic main body at the current moment, wherein the skeleton key point detection algorithm is usually realized based on a deep learning model. The bone keypoint detection algorithm can accurately locate keypoints of a human body, such as head, neck, shoulder, elbow, wrist, hip, knee, ankle, etc., in an image or video frame.
In the implementation, firstly, an image or a video frame at the current moment needs to be acquired, and the image or the video frame can be captured in real time through a camera or extracted from a pre-recorded video. The image is then input into a pre-trained neural network model. This model typically contains a convolution layer, a pooling layer, and a full connection layer, which can automatically extract image features and output a position estimate for each keypoint. To improve accuracy, multi-stage or multi-branched network structures, such as top-down and bottom-up combined approaches, are often used. The model output is a thermodynamic diagram representing the probability that each pixel belongs to a certain keypoint. By post-processing the thermodynamic diagram, such as non-maxima suppression, the exact coordinates of each keypoint can be obtained.
For a multi-person scene, it is also necessary to group keypoints belonging to the same person using a clustering algorithm. Finally, for each detected dynamic body, a set of keypoint coordinates representing its bone structure, i.e. the first current keypoint, can be obtained.
The three-dimensional coordinates of each first current key point are then obtained. Specifically, a depth camera, which can directly output a depth image, may be adopted. By aligning the depth information with the RGB image, the three-dimensional coordinates of each pixel can be obtained.
After obtaining the depth information, the image coordinate system needs to be converted into a world coordinate system. I.e. camera calibration process, it is necessary to know the internal parameters (such as focal length, principal point coordinates) and external parameters (the position and orientation of the camera in the world coordinate system) of the camera. The conversion formula is:
s · [x, y, 1]ᵀ = M · [X, Y, Z, 1]ᵀ;
Where (X, Y, Z) are world coordinates, (x, y) are image coordinates, s is the depth, and M is the 3×4 camera projection matrix.
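A simplified back-projection consistent with the conversion described above can be sketched as follows, assuming a pinhole model with known intrinsics (fx, fy, cx, cy) and identity extrinsics, so that camera and world coordinates coincide; these parameter names are illustrative:

```python
def image_to_world(x, y, depth, fx, fy, cx, cy):
    """Back-project an image point with known depth into 3D coordinates,
    assuming the camera frame coincides with the world frame (identity extrinsics)."""
    X = (x - cx) * depth / fx
    Y = (y - cy) * depth / fy
    Z = depth
    return X, Y, Z
```

With real extrinsics, the resulting camera-frame point would additionally be rotated and translated into the world frame using the calibrated pose.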
And calculating the relative distance and angle between the first current key points according to the three-dimensional coordinates of each first current key point. First, the relative distance is calculated. For any two keypoints a (x 1, y1, z 1) and B (x 2, y2, z 2), the euclidean distance between them can be calculated first, and in practical applications, the distance between some specific pairs of keypoints, such as the left wrist to the left shoulder, the right ankle to the right hip, etc., is usually calculated. The distances are normalized to eliminate the effects of different body sizes, for example, divided by the torso length (neck-to-hip distance). Next, the angle is calculated. The angle calculation typically involves three key points, for example, the elbow angle requires three points for the shoulder, elbow and wrist. The calculation steps are as follows:
1. Two vectors are constructed: v1 = shoulder to elbow, v2 = wrist to elbow; 2. The vector dot product is calculated: dot = v1·v2 = v1x×v2x + v1y×v2y + v1z×v2z; 3. The vector moduli are calculated: |v1| = √(v1x² + v1y² + v1z²), and the same applies to |v2|; 4. The angle is obtained using the arccosine function: angle = arccos(dot/(|v1|×|v2|)).
The angles to be calculated in this embodiment include neck angle, shoulder angle, elbow angle, hip angle, knee angle, etc.
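The four-step angle calculation can be sketched as follows for an elbow angle from three 3D keypoints:

```python
import math

def _sub(a, b):
    """Component-wise vector subtraction a - b."""
    return tuple(x - y for x, y in zip(a, b))

def joint_angle(shoulder, elbow, wrist):
    """Elbow angle in degrees: arccos of the normalized dot product of
    the elbow-to-shoulder and elbow-to-wrist vectors (steps 1-4 above)."""
    v1, v2 = _sub(shoulder, elbow), _sub(wrist, elbow)
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))
```

The same function applies to the neck, shoulder, hip and knee angles by substituting the corresponding keypoint triples.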
By calculating the relative distance and angle, a set of feature vectors describing the current pose of the dynamic body can be obtained. Feature vectors typically contain tens of elements, each representing a distance or angle value. The method not only reduces the data dimension, but also has rotation and translation invariance, namely is not influenced by the position and the orientation of the dynamic main body in space, thereby improving the accuracy and the robustness of the subsequent classification.
All relative distances and angles are input into the pre-trained classifier to obtain probability distribution for representing the possibility that the current action state of the dynamic main body belongs to the predefined action state. The pre-trained classifier may be a deep neural network.
First, training data needs to be prepared. This includes collecting a large number of tagged motion data, each sample containing the distance and angle features of the previous calculation, and the corresponding motion tag. The predefined motion state may include standing, walking, running, jumping, squatting, falling, etc.
Then, taking a deep neural network as an example, the following structure can be designed: 1. input layer: the number of nodes is equal to the dimension of the feature vector; 2. hidden layers: a plurality of fully connected layers can be used, each layer using a ReLU activation function; 3. output layer: the number of nodes is equal to the number of predefined action states, using a softmax activation function.
The model training process includes forward propagation, computational loss, backward propagation, and parameter updating. After training is completed, the test set is used to evaluate model performance. Common evaluation metrics include accuracy, precision, recall, and F1 score.
In practical application, the distance and angle characteristics obtained by current calculation are input into a trained model. The output of the model is a probability distribution representing the probability that the current action belongs to each predefined state. For example, for five predefined states, the output may be:
[ standing 0.05, walking 0.15, running 0.75, jumping 0.03, squatting 0.02];
The probability distribution not only gives the most probable motion state, but also provides confidence information of the model to the judgment. For example, from walking to running, a situation may occur where the probabilities of the two states are close.
By the method, the complex three-dimensional gesture information can be converted into the action state probability, and the current action state with the highest probability is taken as the first current action state. For example, for the resulting probability distribution [ standing 0.05, walking 0.15, running 0.75, jumping 0.03, squat 0.02], running is selected as the first current motion state.
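Selecting the highest-probability state (step S450) reduces to an argmax over the distribution:

```python
def pick_action_state(probabilities):
    """Return the predefined action state with the highest probability."""
    return max(probabilities, key=probabilities.get)
```

For the example distribution above, this selects "running".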
In the embodiment, starting from the detection of the skeletal key points, the relative distance and angle between the key points are calculated by acquiring three-dimensional coordinate information, then the probability distribution of the action state is obtained by using a pre-trained classifier, and finally the final action state is determined by an optimization strategy, so that various complex human actions can be accurately identified in real time.
In one implementation of the present embodiment, the predicting an action duration from the first current action state to the ending action state based on the first current action state and the ending action state includes the steps of:
S510, calculating a first vector of a first current action state and a second vector of an ending action state, and acquiring main body characteristics of a dynamic main body;
S520, inputting the first vector and the second vector into a pre-trained motion state space model to obtain motion change complexity, wherein the motion state space model is used for calculating Euclidean distance between the first vector and the second vector, and the Euclidean distance is used for representing the motion change complexity;
S530, inputting the main body characteristics, the first vector, the second vector and the motion change complexity into a pre-trained deep learning model to obtain the motion duration.
First, a first vector of a first current action state and a second vector of an end action state are calculated. Specifically, the first vector and the second vector may be constructed using joint angles, position coordinates. For example, for human motion, a 51-dimensional vector (17×3) may be constructed using three-dimensional coordinates of 17 key points (e.g., head, shoulder, elbow, wrist, hip, knee, and ankle) to comprehensively capture the spatial configuration of motion.
Next, body features of the dynamic body are obtained. Subject characteristics refer to individual attributes that can affect performance of an action, such as height, weight, age, gender, or specific physiological indicators. These features have a significant impact on the duration of the action, as different individuals may have significant time differences in performing the same action. For example, a tall adult and a short child may require significantly different times to complete the same jumping maneuver.
Methods of acquiring subject characteristics may include direct measurement (e.g., height, weight), questionnaire (e.g., age, gender), or use of specialized sensor devices (e.g., electromyography, heart rate monitor, etc.).
The first vector (representing the current motion state) and the second vector (representing the end motion state) are input into a pre-trained motion state space model to calculate the euclidean distance between the two vectors, thereby obtaining the complexity of motion variation.
The motion state space model is a mathematical model specifically designed for processing and analyzing motion data. Which may represent an action state as a point in a multidimensional space, and transitions between different action states may then be seen as paths or trajectories in the multidimensional space. By calculating the euclidean distance between the two motion states, the magnitude and complexity of the motion change can be quantified.
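Vectorizing the two action states and taking their Euclidean distance as the motion change complexity can be sketched as follows, using the 17-keypoint, 51-dimensional construction mentioned earlier:

```python
import math

def state_vector(keypoints):
    """Flatten (x, y, z) keypoints into an action-state vector (51-D for 17 points)."""
    return [c for p in keypoints for c in p]

def change_complexity(v_start, v_end):
    """Euclidean distance between two action-state vectors, quantifying
    the magnitude of the change from the current to the ending state."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v_start, v_end)))
```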
The obtained main body characteristics, the first vector (current action state), the second vector (end action state) and the action change complexity are taken as inputs, a pre-trained deep learning model is input, and finally the predicted action duration is obtained.
In the application scenario of the present embodiment, the task of the deep learning model is to learn how to accurately predict the duration of an action from a given input feature. One possible structure of the deep learning model is as follows:
1. Input layer: receives all input features, including main body features (such as height and weight), the current action state vector, the ending action state vector, and the action change complexity.
2. Hidden layers: 3-5 hidden layers; the number of neurons per layer can be reduced layer by layer, e.g. 512-256-128-64.
3. Output layer: a single neuron used to output the predicted action duration.
The present embodiment enables accurate prediction of the action duration from the current action state to the target action state by combining action state vectorization, action complexity calculation, and deep learning prediction. Firstly, converting an action state into a high-dimensional vector, taking individual characteristics into consideration, then calculating Euclidean distance by utilizing a pre-trained action state space model, objectively quantifying the complexity of action change, and finally, inputting all the characteristics into a deep learning model to predict action duration. The accuracy of the prediction result is effectively improved.
In one implementation manner of the embodiment, determining a first security risk coefficient of the dynamic main body according to the first current action state and the static main body includes the following steps:
S610, determining the gesture stability of the first current action state by adopting a biomechanical model;
s620, acquiring the current height of the dynamic main body, and determining the risk probability of the static main body to the dynamic main body according to the current height and the static main body;
S630, determining a first safety risk coefficient of the dynamic main body according to the gesture stability and the risk probability.
Biomechanical models are mathematical models for analyzing movements and gestures of the human body or other organisms, which are based on physical principles, for describing and predicting the movement and stress conditions of various parts of the body.
In practice, a detailed manikin is first constructed, which typically includes a plurality of rigid body segments (e.g., torso, limbs, etc.) and joints connecting the rigid body segments. Each rigid body segment has its mass, centroid position, and inertial tensor properties. The joints define degrees of freedom of movement between rigid body segments. For example, a simplified manikin may contain 14 rigid body segments (head, torso, upper arm, forearm, hand, thigh, calf and foot, one on each side) and 13 primary joints (neck, shoulder, elbow, wrist, hip, knee and ankle).
Next, the angle and angular velocity of each joint in the current motion state must be acquired. Such data may be obtained with a motion capture system or other sensors. For example, with an optical motion capture system, reflective markers are attached to key points on the body, multiple high-speed cameras capture the 3D positions of these markers, and the joint angles are then calculated from them.
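As a minimal illustration (not part of the patent text), the angle at a joint can be derived from three captured marker positions; the marker coordinates below are hypothetical:

```python
import numpy as np

def joint_angle(p_proximal, p_joint, p_distal):
    """Angle in degrees at a joint defined by three 3D marker positions,
    e.g. shoulder-elbow-wrist markers give the elbow angle."""
    u = np.asarray(p_proximal, float) - np.asarray(p_joint, float)
    v = np.asarray(p_distal, float) - np.asarray(p_joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # clip guards against rounding slightly outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# A fully extended arm: shoulder, elbow and wrist markers collinear.
print(joint_angle([0.0, 0, 0], [0.3, 0, 0], [0.6, 0, 0]))  # → 180.0
```

A right angle at the middle marker would likewise return 90 degrees.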
Once the human body model and the motion data are available, the posture stability index can be calculated. The posture stability index is based on the relationship between the projection of the center of gravity onto the ground (CoP) and the support polygon, where the support polygon is the area enclosed by all points of contact with the ground (usually the soles of the feet). The calculation steps are as follows:
1. Calculate the center of gravity of the whole body from the mass and centroid position of each rigid body segment. Assuming there are n rigid body segments, the i-th segment having mass mi and centroid position ri, the whole-body center of gravity Rc is calculated as:
Rc = Σ(mi·ri) / Σ(mi), with the sums taken over i = 1 to n;
2. Project the center of gravity onto the ground to obtain the CoP;
3. Determine the boundary of the support polygon from the contact points between the soles and the ground;
4. Use the shortest distance from the CoP to the edge of the support polygon as the stability indicator: the greater the distance, the more stable the posture.
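The four steps above can be sketched as follows; the segment masses, centroid positions and support-polygon vertices are hypothetical example values:

```python
import numpy as np

def center_of_gravity(masses, centroids):
    """Step 1: Rc = sum(mi * ri) / sum(mi) over all rigid body segments."""
    m = np.asarray(masses, float)[:, None]
    r = np.asarray(centroids, float)        # shape (n, 3)
    return (m * r).sum(axis=0) / m.sum()

def point_segment_dist(p, a, b):
    """Planar distance from point p to the segment a-b."""
    p, a, b = (np.asarray(x, float) for x in (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def stability_margin(masses, centroids, support_polygon):
    """Steps 2-4: project the centre of gravity onto the ground (CoP) and
    return its shortest distance to the support polygon boundary."""
    cop = center_of_gravity(masses, centroids)[:2]     # drop z: ground projection
    verts = [np.asarray(v, float) for v in support_polygon]
    edges = zip(verts, verts[1:] + verts[:1])          # closed boundary
    return min(point_segment_dist(cop, a, b) for a, b in edges)

# Two segments over a 0.4 m x 0.3 m support rectangle (hypothetical values).
margin = stability_margin(
    masses=[40.0, 35.0],
    centroids=[[0.2, 0.15, 1.0], [0.2, 0.15, 0.5]],
    support_polygon=[(0.0, 0.0), (0.4, 0.0), (0.4, 0.3), (0.0, 0.3)],
)
print(round(margin, 3))  # → 0.15
```

Here the stability margin is the planar distance from the CoP to the nearest polygon edge; a production implementation would additionally check that the CoP actually lies inside the polygon.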
The current height of the dynamic main body is then acquired, and the risk probability posed by the static main body to the dynamic main body is determined according to the current height and the static main body, in order to assess the potential hazards that static objects in the environment may pose to the dynamic main body.
Firstly, the current height of the dynamic main body can be obtained from a preset database, which stores information about each dynamic main body, such as height and weight.
Next, the risk probability posed by the static main body to the dynamic main body is evaluated based on the current height and the static main body, and the first safety risk coefficient of the dynamic main body is determined from the posture stability and the risk probability. The posture stability is a value between 0 and 1, where 0 indicates extremely unstable and 1 indicates extremely stable, reflecting the ability of the dynamic main body to maintain its own balance. The risk probability is also a value between 0 and 1, where 0 indicates almost no risk and 1 indicates extremely high risk, reflecting the potential threat of environmental factors to the dynamic main body.
The first safety risk coefficient should satisfy the following conditions:
1. When the posture stability is high and the risk probability is low, the safety risk coefficient should be low.
2. When the posture stability is low or the risk probability is high, the safety risk coefficient should be high.
3. The influence of posture stability and risk probability on the safety risk need not be linear.
For example, if the first safety risk coefficient is greater than 0.8, immediate protection or a warning may be required; if it is greater than 0.5 and not more than 0.8, it may be necessary to increase the monitoring frequency or provide additional support; if it is not more than 0.5, the current state can be considered relatively safe, although vigilance should still be maintained.
In this embodiment, a biomechanical model is first used to accurately calculate the posture stability, taking the structure of the human body and physical laws into account. Secondly, the risk probability posed by the static main body to the dynamic main body is calculated by analyzing the current height of the dynamic main body and its surroundings, so that external environmental factors are also taken into account. Finally, the first safety risk coefficient is calculated from the posture stability and the risk probability. Both internal and external factors are thus considered, which effectively improves the accuracy of safety monitoring.
In one implementation of this embodiment, determining the risk probability posed by the static main body to the dynamic main body according to the current height and the static main body includes the following steps:
S710, mapping the dynamic main body and the static main bodies into a preset three-dimensional coordinate system, and calculating a first distance and a height difference between the dynamic main body and each static main body;
S720, calculating the risk probability posed by the static main body to the dynamic main body according to the first distance and the height difference, using a preset risk function.
A preset three-dimensional coordinate system is first established, usually a Cartesian coordinate system in which the x-axis and y-axis represent the planar position and the z-axis represents the height. The location information of the dynamic and static main bodies is mapped into this coordinate system, each main body receiving a unique (x, y, z) coordinate. For a dynamic main body the coordinates may change over time, while the coordinates of a static main body remain unchanged.
After the mapping is completed, the first distance and the height difference between the dynamic main body and each static main body are calculated. Assuming the coordinates of the dynamic main body are (x1, y1, z1) and the coordinates of a static main body are (x2, y2, z2), the first distance is computed with the Euclidean distance formula, and the height difference is the difference of the z coordinates.
The risk probability posed by the static main body to the dynamic main body is then calculated from the computed first distance and height difference using a preset risk function.
The risk function R may, for example, take an exponential-decay form:
R = w1·exp(-d/d0) + w2·exp(-|h|/h0);
where d represents the first distance, h the height difference, d0 and h0 are characteristic scales of distance and height, and w1 and w2 are weight coefficients. With this risk function, the risk decreases as the distance and the height difference increase.
In practice, thresholds may be introduced to handle extreme cases: R = 1 if d < dmin or |h| < hmin, and R = 0 if d > dmax and |h| > hmax, where dmin, hmin, dmax and hmax are thresholds determined by the specific application scenario.
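A sketch of one possible risk function with the threshold handling described above, assuming the exponential-decay form R = w1·exp(-d/d0) + w2·exp(-|h|/h0); every parameter value (d0, h0, w1, w2 and the four thresholds) is an illustrative assumption:

```python
import math

def risk_probability(d, h, d0=5.0, h0=3.0, w1=0.6, w2=0.4,
                     dmin=0.5, hmin=0.3, dmax=20.0, hmax=10.0):
    """Risk posed by a static body at planar distance d and height
    difference h from the dynamic body; all scales are assumed values."""
    if d < dmin or abs(h) < hmin:   # extremely close in either dimension
        return 1.0
    if d > dmax and abs(h) > hmax:  # far away in both dimensions
        return 0.0
    return w1 * math.exp(-d / d0) + w2 * math.exp(-abs(h) / h0)

print(risk_probability(0.2, 5.0))    # → 1.0
print(risk_probability(25.0, 12.0))  # → 0.0
print(risk_probability(2.0, 1.0) > risk_probability(10.0, 5.0))  # → True
```

With w1 + w2 = 1, the unclamped value also stays within [0, 1], matching the definition of the risk probability.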
In this method, the dynamic main body and the static main bodies are mapped into a three-dimensional coordinate system, the distance and the height difference between them are calculated, and the risk probability is then estimated with the preset risk function, thereby accurately quantifying the risk between the dynamic main body and the static main bodies in three-dimensional space.
In one implementation of this embodiment, determining the first safety risk coefficient of the dynamic main body according to the posture stability and the risk probability includes the following step:
S810, performing a weighted summation of the posture stability and the risk probability to obtain the first safety risk coefficient of the dynamic main body.
The posture stability and the risk probability are summed with weights to obtain the first safety risk coefficient of the dynamic main body. Specifically, the posture stability is a value ranging from 0 to 1, where 0 means extremely unstable and 1 means completely stable. Similarly, the risk probability is a value between 0 and 1, indicating the likelihood that the dynamic main body will be exposed to a risk event caused by a static main body.
Next, weights are assigned to the posture stability and the risk probability. In this embodiment, the weight assigned to the posture stability is w1, the weight assigned to the risk probability is w2, and w1 + w2 = 1.
The weighted sum is calculated as: first safety risk coefficient = w1 × (1 - posture stability) + w2 × risk probability.
(1 - posture stability) is used because the more stable the posture, the lower the risk.
To make the final safety risk coefficient more intuitive and easy to interpret, the calculated result may be mapped onto a predefined risk level table, for example: 0-0.2 for low risk, 0.2-0.4 for medium-low risk, 0.4-0.6 for medium risk, 0.6-0.8 for medium-high risk, and 0.8-1.0 for high risk. The risk level can then be read off directly.
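Combining step S810 with such a level table gives a compact sketch; the equal weights and the example inputs are assumptions:

```python
def first_safety_risk(posture_stability, risk_probability, w1=0.5, w2=0.5):
    """Weighted sum per S810; (1 - stability) is used so that a more
    stable posture lowers the coefficient. Requires w1 + w2 = 1."""
    return w1 * (1.0 - posture_stability) + w2 * risk_probability

def risk_level(coefficient):
    """Map the coefficient onto the predefined five-level table."""
    bands = [(0.2, "low"), (0.4, "medium-low"), (0.6, "medium"),
             (0.8, "medium-high")]
    for upper, label in bands:
        if coefficient <= upper:
            return label
    return "high"

# A stable posture (0.9) with a modest environmental risk (0.3).
coeff = first_safety_risk(posture_stability=0.9, risk_probability=0.3)
print(round(coeff, 2), risk_level(coeff))  # → 0.2 low
```

Because both inputs lie in [0, 1] and the weights sum to 1, the coefficient is guaranteed to stay in [0, 1], so the level table covers all possible outputs.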
In this method, the posture stability and the risk probability are combined into a unified first safety risk coefficient by weighted summation, achieving a comprehensive quantitative evaluation of the safety state of the dynamic main body.
An embodiment of the present application further provides an electronic device, comprising:
a memory configured to store instructions; and
a processor configured to call the instructions from the memory and, when executing the instructions, to implement the above intelligent safety early warning method applied to high-altitude operation.
Referring to fig. 3, an embodiment of the present application further provides an intelligent safety early warning system applied to high-altitude operation, comprising:
a camera 100; and
an electronic device 200, the electronic device 200 being connected to the camera 100.
An embodiment of the present application further provides a machine-readable storage medium having instructions stored thereon, the instructions being configured to cause a machine to execute the above intelligent safety early warning method applied to high-altitude operation.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or nonvolatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.

Claims (7)

1. An intelligent safety early warning method applied to high-altitude operation, characterized by comprising:
acquiring an aerial work image at the current moment, and performing image recognition on the aerial work image to divide the aerial work image into a plurality of sub-areas according to aerial work projects, wherein each sub-area includes at least one monitoring subject, and the monitoring subject includes a static subject and/or a dynamic subject;
in the case where the monitoring subject in one of the sub-areas includes the dynamic subject and the number of dynamic subjects is one, taking the sub-area as a first target sub-area, and performing image recognition on the first target sub-area to obtain the dynamic subject in the first target sub-area;
locating first current key points of the dynamic subject at the current moment, and determining a first current action state of the dynamic subject according to the first current key points;
determining an end action state according to the first target sub-area at the current moment, and predicting, based on the first current action state and the end action state, an action duration from the first current action state to the end action state;
determining whether the monitoring subject in the first target sub-area further includes the static subject;
in the case where the monitoring subject further includes the static subject, determining a first safety risk coefficient of the dynamic subject according to the first current action state and the static subject;
in the case where the first safety risk coefficient is greater than a first preset value, acquiring the first target sub-area at each of the next N moments one by one, wherein the set of the current moment and the next N moments is less than or equal to the action duration;
predicting a motion trajectory of the dynamic subject according to changes in the key points of the dynamic subject between the first target sub-area at the current moment and the first target sub-areas at the next N moments;
determining and displaying an action danger level of the dynamic subject according to the motion trajectory of the dynamic subject;
wherein determining the first safety risk coefficient of the dynamic subject according to the first current action state and the static subject comprises:
determining a posture stability of the first current action state using a biomechanical model;
acquiring a current height of the dynamic subject, and determining a risk probability of the static subject to the dynamic subject according to the current height and the static subject; and
determining the first safety risk coefficient of the dynamic subject according to the posture stability and the risk probability;
wherein determining the risk probability of the static subject to the dynamic subject according to the current height and the static subject comprises:
mapping the dynamic subject and the static subjects into a preset three-dimensional coordinate system, and calculating a first distance and a height difference between the dynamic subject and each static subject; and
calculating the risk probability of the static subject to the dynamic subject according to the first distance and the height difference using a preset risk function;
wherein the risk function includes:
R = w1·exp(-d/d0) + w2·exp(-|h|/h0);
where R represents the risk probability, d represents the first distance, h represents the height difference, d0 and h0 are characteristic scales of distance and height, and w1 and w2 are weight coefficients;
and wherein determining the first safety risk coefficient of the dynamic subject according to the posture stability and the risk probability comprises:
performing a weighted summation of the posture stability and the risk probability to obtain the first safety risk coefficient of the dynamic subject.
2. The method according to claim 1, characterized in that the method further comprises:
in the case where the monitoring subject in one of the sub-areas includes the dynamic subject and there are at least two dynamic subjects, taking the sub-area as a second target sub-area, and acquiring preset data of the second target sub-area at the current moment, wherein the preset data includes a standard number of dynamic subjects and standard action data of each dynamic subject;
in the case where the number of dynamic subjects is consistent with the standard number and the standard action data of each dynamic subject is consistent, locating second current key points of each dynamic subject at the current moment, and determining a second current action state of each dynamic subject at the current moment according to each of the second current key points;
determining whether the monitoring subject in the second target sub-area further includes the static subject;
in the case where the monitoring subject further includes the static subject, determining a second safety risk coefficient of each dynamic subject according to the second current key points of each dynamic subject and the static subject; and
in the case where any second safety risk coefficient is greater than a second preset value, issuing first warning information.
3. The method according to claim 2, characterized in that the method further comprises:
in the case where the number of dynamic subjects is consistent with the standard number and the standard action data of each dynamic subject is inconsistent, analyzing each dynamic subject in parallel to obtain action data of each dynamic subject over a preset time period, wherein the end moment of the preset time period is the current moment;
calculating an action deviation degree between the action data and the corresponding standard action data;
creating an interaction matrix, and calculating relative positions and interactions between the dynamic subjects in real time, wherein the interaction matrix is used to store the relative positions and the interactions in real time;
calculating a third safety risk coefficient of each dynamic subject according to the action deviation degree and the interaction matrix using an individual safety risk index formula; and
in the case where any third safety risk coefficient is greater than a third preset value, issuing second warning information.
4. The method according to claim 1, characterized in that locating the first current key points of the dynamic subject at the current moment and determining the first current action state of the dynamic subject according to the first current key points comprises:
performing key point localization on the dynamic subject at the current moment using a preset skeleton key point detection algorithm to obtain the first current key points of the dynamic subject at the current moment;
acquiring three-dimensional coordinates of each first current key point;
calculating relative distances and angles between the first current key points according to the three-dimensional coordinates of each first current key point;
inputting all the relative distances and the angles into a pre-trained classifier to obtain a probability distribution, wherein the probability distribution is used to characterize the possibility that the current action state of the dynamic subject belongs to a predefined action state; and
taking the current action state with the highest probability as the first current action state.
5. The method according to claim 1, characterized in that predicting, based on the first current action state and the end action state, the action duration from the first current action state to the end action state comprises:
calculating a first vector of the first current action state and a second vector of the end action state, and acquiring subject features of the dynamic subject;
inputting the first vector and the second vector into a pre-trained action state space model to obtain an action change complexity, wherein the action state space model is used to calculate a Euclidean distance between the first vector and the second vector, and the Euclidean distance is used to characterize the action change complexity; and
inputting the subject features, the first vector, the second vector, and the action change complexity into a pre-trained deep learning model to obtain the action duration.
6. An electronic device, applied to the intelligent safety early warning method for high-altitude operation according to any one of claims 1 to 5, characterized by comprising:
a memory configured to store instructions; and
a processor configured to call the instructions from the memory and, when executing the instructions, to implement the intelligent safety early warning method applied to high-altitude operation according to any one of claims 1 to 5.
7. An intelligent safety early warning system applied to high-altitude operation, characterized by comprising:
a camera; and
the electronic device according to claim 6, wherein the electronic device is connected to the camera.
CN202411422395.4A | Priority date 2024-10-12 | Filing date 2024-10-12 | Intelligent safety early warning method and system applied to high-altitude operation | Active | Granted as CN118942163B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202411422395.4A (granted as CN118942163B) | 2024-10-12 | 2024-10-12 | Intelligent safety early warning method and system applied to high-altitude operation


Publications (2)

Publication Number | Publication Date
CN118942163A (en) | 2024-11-12
CN118942163B (en) | 2024-12-10

Family

ID=93359093

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202411422395.4A (Active, granted as CN118942163B) | Intelligent safety early warning method and system applied to high-altitude operation | 2024-10-12 | 2024-10-12

Country Status (1)

CountryLink
CN (1)CN118942163B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN116798185A (en)*2023-07-202023-09-22西安热工研究院有限公司Safety early warning method, system, equipment and medium for building site high-altitude falling object
CN118280066A (en)*2024-04-282024-07-02广东电网有限责任公司High-altitude operation reminding method, device, equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
KR101311873B1 (en)*2011-07-072013-09-27김헌성Camera system for monitoring the hook of a tower crane
KR20230094768A (en)*2021-12-212023-06-28주식회사 포스코Method for determining whether to wear personal protective equipment and server for performing the same
CN114639214B (en)*2022-05-232022-08-12安徽送变电工程有限公司Intelligent safety distance early warning system and method for electric power hoisting operation


Also Published As

Publication number | Publication date
CN118942163A (en) | 2024-11-12

Similar Documents

PublicationPublication DateTitle
Fang et al.Falls from heights: A computer vision-based approach for safety harness detection
CN114676956B (en) Elderly fall risk warning system based on multi-dimensional data fusion
CN111753747B (en)Violent motion detection method based on monocular camera and three-dimensional attitude estimation
CN111488804A (en) Method for detection and identification of labor protection equipment wearing condition based on deep learning
CN117351405B (en)Crowd behavior analysis system and method
JP7263094B2 (en) Information processing device, information processing method and program
CN110414400B (en)Automatic detection method and system for wearing of safety helmet on construction site
CN119251761B (en) A construction monitoring method
CN119810757A (en) Early warning analysis method and server based on intelligent vision
WO2019220589A1 (en)Video analysis device, video analysis method, and program
Li et al.Collaborative fall detection using smart phone and Kinect
US20220125359A1 (en)Systems and methods for automated monitoring of human behavior
CN116259002A (en) A video-based human risk behavior analysis method
CN118277947A (en)Outdoor photovoltaic field operation safety control method based on AI vision
Abd et al.Human fall down recognition using coordinates key points skeleton
Algabri et al.Robust person following under severe indoor illumination changes for mobile robots: online color-based identification update
CN119516161A (en) A method and system for identifying personnel status based on target recognition detection
EP4295327A1 (en)Identification of workers using personal protective equipment and worker movement characteristic analysis
JP2023098506A (en) Information processing program, information processing method, and information processing apparatus
CN118942163B (en)Intelligent safety early warning method and system applied to high-altitude operation
Flores-Barranco et al.Accidental fall detection based on skeleton joint correlation and activity boundary
Pękala et al.A novel method for human fall detection using federated learning and interval-valued fuzzy inference systems
EP4207107A1 (en)Information processing program, information processing method, and information processing apparatus
CN117593792A (en)Abnormal gesture detection method and device based on video frame
KR20200119165A (en)Safety status monitoring and alarm application for moving persons

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
PE01 | Entry into force of the registration of the contract for pledge of patent right

Denomination of invention:An intelligent safety warning method and system applied to high-altitude operations

Granted publication date:20241210

Pledgee:Guanggu Branch of Wuhan Rural Commercial Bank Co.,Ltd.

Pledgor:WUHAN STRIDETOP TECHNOLOGY Co.,Ltd.

Registration number:Y2025980019354

