Disclosure of Invention
In view of the defects of the prior art, the embodiments of the present application aim to provide a monitoring and early warning method for people flow analysis, in which images of living scenes are acquired by a time-of-flight (TOF) camera to generate three-dimensional point cloud data, the acquired three-dimensional point cloud data are identified and analyzed by a monitoring processor, and, when a person who may threaten environmental security is identified, alarm data are generated and sent to the terminal equipment of a security manager.
In order to achieve the above object, an embodiment of the present application provides a monitoring and early warning method for people flow analysis, including: the time-of-flight (TOF) camera photographs the environment of the monitored area according to an image acquisition instruction to obtain three-dimensional point cloud data; wherein the TOF camera has a camera ID;
the TOF camera sends the three-dimensional point cloud data and the camera ID to a monitoring processor;
the monitoring processor performs filtering processing on the three-dimensional point cloud data to obtain filtered three-dimensional point cloud data; wherein the three-dimensional point cloud data includes intensity data;
the monitoring processor performs facial feature detection processing on the filtered three-dimensional point cloud data to obtain facial three-dimensional point cloud data, and the facial three-dimensional point cloud data are stored in a facial three-dimensional point cloud data list;
the monitoring processor performs expression recognition processing based on the facial three-dimensional point cloud data to obtain expression types of the facial three-dimensional point cloud data;
when the monitoring processor judges that the expression type is a preset expression type, the monitoring processor performs matching detection on the facial three-dimensional point cloud data and image data in a personnel information database to be identified to obtain detection result data; wherein the detection result data comprise a detection state and personnel information data;
the monitoring processor judges whether the detection state is a preset state or not;
and when the detection state is the preset state, the monitoring processor generates an alarm message according to the personnel information data and sends the alarm message to an early warning terminal.
Preferably, the step in which the monitoring processor performs facial feature detection processing on the filtered three-dimensional point cloud data to obtain facial three-dimensional point cloud data and stores the facial three-dimensional point cloud data in a facial three-dimensional point cloud data list specifically includes:
the monitoring processor performs face detection processing on the intensity data of the filtered three-dimensional point cloud data based on an openCV library to obtain face intensity data, and stores the face intensity data in a face intensity data list;
the monitoring processor maps the face intensity data to the filtered three-dimensional point cloud data, extracts the facial three-dimensional point cloud data corresponding to the face intensity data from the filtered three-dimensional point cloud data, and stores the facial three-dimensional point cloud data in the facial three-dimensional point cloud data list.
Preferably, the step in which the monitoring processor performs expression recognition processing based on the facial three-dimensional point cloud data to obtain the expression type of the facial three-dimensional point cloud data specifically includes:
the monitoring processor performs expression recognition processing on the facial three-dimensional point cloud data based on a pre-trained deep convolutional neural network model to obtain the expression type; wherein the expression types include anger, disgust, fear, happiness, sadness, surprise, and neutral.
Preferably, the personnel information database to be identified includes a plurality of first three-dimensional point cloud data and/or first two-dimensional image data, and the step in which the monitoring processor performs matching detection on the facial three-dimensional point cloud data and the image data in the personnel information database to be identified to obtain the detection result data specifically includes:
the monitoring processor matches the facial three-dimensional point cloud data with the first three-dimensional point cloud data to obtain a maximum three-dimensional matching rate and optimal three-dimensional point cloud data;
the monitoring processor matches the intensity data of the facial three-dimensional point cloud data with the first two-dimensional image data to obtain a maximum two-dimensional matching rate and optimal two-dimensional image data;
the monitoring processor generates detection result data according to the maximum three-dimensional matching rate, the optimal three-dimensional point cloud data, the maximum two-dimensional matching rate and the optimal two-dimensional image data; wherein the detection result data includes the detection state.
Further preferably, the step in which the monitoring processor matches the facial three-dimensional point cloud data with the first three-dimensional point cloud data to obtain the maximum three-dimensional matching rate and the optimal three-dimensional point cloud data specifically includes:
the monitoring processor performs normalization preprocessing on the facial three-dimensional point cloud data;
the monitoring processor obtains a three-dimensional average face image from all the first three-dimensional point cloud data in the personnel information database to be identified;
the monitoring processor selects a plurality of feature points and a reference point on the three-dimensional average face image, calculates the geodesic distance from each feature point to the reference point, establishes a feature point model according to the geodesic distances, and locates the feature points on the facial three-dimensional point cloud data by using the feature point model;
the monitoring processor uses a Gabor filter to extract the neighborhood feature relations of the feature points on the first three-dimensional point cloud data and on the facial three-dimensional point cloud data;
the monitoring processor establishes a probabilistic graphical model for each first three-dimensional point cloud data in the personnel information database to be identified and for the facial three-dimensional point cloud data according to the neighborhood feature relations;
and the monitoring processor calculates the similarity between the facial three-dimensional point cloud data and each first three-dimensional point cloud data in the personnel information database to be identified according to the probabilistic graphical models, determines the maximum value of the similarities as the maximum three-dimensional matching rate, and determines the first three-dimensional point cloud data with the highest similarity as the optimal three-dimensional point cloud data.
Further preferably, the step in which the monitoring processor generates the detection result data according to the maximum three-dimensional matching rate, the optimal three-dimensional point cloud data, the maximum two-dimensional matching rate and the optimal two-dimensional image data specifically includes:
the monitoring processor determines the maximum value among the maximum three-dimensional matching rate, the maximum two-dimensional matching rate and a preset matching threshold;
when the maximum value is the maximum three-dimensional matching rate, the monitoring processor sets the detection state to a success state;
the monitoring processor determines first personnel information according to the optimal three-dimensional point cloud data;
and the monitoring processor generates the detection result data according to the detection state and the first personnel information.
Further preferably, the step in which the monitoring processor generates the detection result data according to the maximum three-dimensional matching rate, the optimal three-dimensional point cloud data, the maximum two-dimensional matching rate and the optimal two-dimensional image data specifically includes:
the monitoring processor determines the maximum value among the maximum three-dimensional matching rate, the maximum two-dimensional matching rate and the preset matching threshold;
when the maximum value is the maximum two-dimensional matching rate, the monitoring processor sets the detection state to a success state;
the monitoring processor determines first personnel information according to the optimal two-dimensional image data;
and the monitoring processor generates the detection result data according to the detection state and the first personnel information.
Further preferably, the step in which the monitoring processor generates the detection result data according to the maximum three-dimensional matching rate, the optimal three-dimensional point cloud data, the maximum two-dimensional matching rate and the optimal two-dimensional image data specifically includes:
the monitoring processor determines the maximum value among the maximum three-dimensional matching rate, the maximum two-dimensional matching rate and the preset matching threshold;
when the maximum value is the preset matching threshold, the monitoring processor sets the detection state to a failure state;
and the monitoring processor generates the detection result data according to the detection state.
Preferably, before the time-of-flight (TOF) camera photographs the environment of the monitored area according to the image acquisition instruction, the method further includes:
the monitoring processor receives a monitoring starting instruction and generates an image acquisition instruction according to an image acquisition time interval;
the monitoring processor sends the image acquisition instruction to the TOF camera.
The embodiment of the application provides a monitoring and early warning method for people flow analysis. An environmental image of a monitored area is acquired by a time-of-flight (TOF) camera to generate three-dimensional point cloud data; facial recognition is performed on the acquired three-dimensional point cloud data; the expression type of the recognized facial three-dimensional point cloud data is judged; when the analysis indicates a possible threat to safety, the facial three-dimensional point cloud data is further matched against the image data in a preset personnel information database to be identified; and when the matching yields personnel information that may threaten environmental safety, alarm message data are generated and sent to an early warning terminal.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
The monitoring and early warning method for people flow analysis is suitable for crowded places such as shopping malls, museums, banks, airports and train stations. Fig. 1 is a flowchart of a monitoring and early warning method for people flow analysis according to an embodiment of the present application. As shown in Fig. 1, the method comprises the following steps:
In step 110, the time-of-flight (TOF) camera photographs the environment of the monitored area according to the image acquisition instruction to obtain three-dimensional point cloud data.
Specifically, TOF cameras are installed in different areas of a living place where the monitoring and early warning method is to be used, and each TOF camera has a camera ID. When a TOF camera is installed, its installation position and shooting angle can be selected or adjusted according to the range of the monitored area, so that the TOF camera can capture as many clear face images as possible.
In a preferred scheme of the application, before the time-of-flight TOF camera photographs the environment of the monitored area according to the image acquisition instruction, the monitoring processor receives a monitoring start instruction and generates the image acquisition instruction according to an image acquisition time interval. That is, when the monitoring and early warning method provided by the embodiment of the application is to be started, a manager inputs a monitoring start instruction through an interactive screen of the monitoring processor; alternatively, the manager generates a monitoring start instruction by operating hardware control equipment connected to the monitoring processor, which sends the instruction to the monitoring processor. After receiving the monitoring start instruction, the monitoring processor reads a preset image acquisition time interval, generates an image acquisition instruction according to the image acquisition time interval, and sends the image acquisition instruction to the TOF camera, as sketched below.
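A minimal sketch of this timed acquisition loop follows; the `AcquisitionInstruction` type and the injected `send` and `running` callables are illustrative assumptions, not part of the claimed method.

```python
import time
from dataclasses import dataclass

@dataclass
class AcquisitionInstruction:
    """Illustrative stand-in for the image acquisition instruction."""
    camera_id: str

def run_acquisition(camera_id: str, interval_s: float, send, running) -> None:
    """Once the monitoring start instruction has been received, issue one
    image acquisition instruction to the TOF camera per preset interval."""
    while running():                       # until monitoring is stopped
        send(AcquisitionInstruction(camera_id))
        time.sleep(interval_s)             # preset image acquisition interval
```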
When the TOF camera receives an image acquisition instruction from the monitoring processor, it captures one frame of the monitored area and generates three-dimensional point cloud data.
The TOF camera adopted in the embodiment of the application emits optical signals through a built-in laser emitting module and acquires depth data of the three-dimensional scene through a built-in complementary metal oxide semiconductor (CMOS) pixel array; it achieves an imaging rate of hundreds of frames per second with a compact structure and low power consumption. Three-dimensional data of the target scene are acquired as follows. The TOF camera uses an amplitude-modulated light source that actively illuminates the target scene, coupled to a sensor that is locked to the same modulation frequency at each pixel. There is a phase shift between the light emitted by the built-in laser and the light reflected back after it strikes objects in the scene, and multiple measurements are obtained by detecting the amount of phase shift between the emitted and reflected light. The amplitude modulation of the built-in laser emitter lies in the modulation frequency range of 10-100 MHz, and this frequency controls the depth range and depth resolution of the TOF camera sensor. The processing unit of the TOF camera computes the phase difference independently for each pixel to obtain the depth data of the target scene, and analyzes the reflection intensity of the reflected light to obtain the intensity data of the target scene; the two kinds of data are then combined and processed to obtain the three-dimensional point cloud data of the target scene.
In a specific example of an embodiment of the present application, the TOF camera employs a solid-state laser or an LED array light wave emitter with a wavelength of around 850 nm as the built-in laser emitter. The emitted light is a continuous square wave or sine wave obtained by continuous modulation. The processing unit of the TOF camera calculates the phase angle between the emitted and reflected light over a number of samples to obtain the distance to the target object, obtains intensity data by evaluating the current converted from the reflected light intensity, and then fuses these data with the two-dimensional image data obtained by an optical camera to produce the three-dimensional point cloud data of the target scene.
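As an illustration of the phase-shift principle described above, the standard continuous-wave conversion from phase shift to depth can be sketched as follows; the function name and the sample values are illustrative, not taken from the patent.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad: np.ndarray, mod_freq_hz: float) -> np.ndarray:
    """Depth from the phase shift of a continuous-wave modulated signal:
    d = c * delta_phi / (4 * pi * f_mod). The factor 4*pi accounts for
    the round trip (2d) over one 2*pi modulation period."""
    return C * phase_shift_rad / (4.0 * np.pi * mod_freq_hz)

# The unambiguous range is c / (2 * f_mod): about 15 m at 10 MHz and
# 1.5 m at 100 MHz, which is why the modulation frequency controls the
# depth range while higher frequencies give finer depth resolution.
print(tof_depth(np.array([np.pi / 2]), 20e6))  # ~1.87 m
```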
During acquisition of the environmental images of the monitored area, because the scene is captured using invisible light actively emitted by the TOF camera, clear three-dimensional point cloud data of the monitored area can be obtained even in the dark. The method provided by the embodiment of the application is therefore suitable for use at night or in dark environments with poor or even no illumination.
In step 120, the TOF camera sends the three-dimensional point cloud data and the camera ID to the monitoring processor.
Specifically, each TOF camera stores a camera ID, and each camera ID corresponds to the monitoring area ID of the monitoring area to which it belongs. The TOF camera sends the generated three-dimensional point cloud data and the camera ID to the monitoring processor, so that the monitoring processor can determine which TOF camera collected the data when it receives the three-dimensional point cloud data.
In step 130, the monitoring processor filters the three-dimensional point cloud data to obtain filtered three-dimensional point cloud data.
The three-dimensional point cloud data include intensity data.
specifically, the monitoring processor selects a specific filtering mode to perform filtering processing on the received three-dimensional point cloud data, and noise points in the three-dimensional point cloud data are removed. The three-dimensional point cloud data is subjected to filtering processing by using the following method:
in the embodiment of the application, the resolution of the TOF camera is mxn (M, N is a positive integer), for example 320×240 or 640×480, so that one frame of three-dimensional point cloud data obtained by the TOF camera has mxn pixels, and each pixel further includes X, Y, Z three-dimensional coordinate values. The steps from the original depth data of the TOF camera to the 3-dimensional point cloud data needed by us are as follows: firstly, carrying out preliminary correction and temperature calibration on original depth data; secondly, performing distortion correction processing on the image; again, the depth image coordinate system (x 0, y0, z 0) is converted into a camera coordinate system (x 1, y1, z 1), and depth information on the image is converted into a three-dimensional coordinate system with the camera as an origin; finally, the camera coordinate system (x 1, y1, z 1) is converted into the required world coordinate system (x 2, y2, z 2), and the camera coordinate system is converted into the project required coordinate system, i.e. the coordinate system of the final point cloud. The data values of the X axis and the Y axis represent the plane coordinate positions of scene points, and the data value of the Z axis represents the acquired actual depth values of the scene.
The monitoring processor converts the three-dimensional point cloud data into an (M·N)×3 matrix, each row representing one pixel of the time-of-flight sensor. By reshaping the (M·N)×3 matrix into an M×N matrix whose elements are the depth values, the three-dimensional point cloud data are converted into two-dimensional plane image data.
The monitoring processor applies a 3×3 spatial filtering operator to the depth value of each pixel of the two-dimensional plane image data and calculates the depth differences between the central pixel and its surrounding pixels. The depth difference is compared with a preset global threshold; when the depth difference is larger than the preset global threshold, the depth value measured at that pixel is judged to be a noise point, and the pixel is removed from the corresponding three-dimensional point cloud data. Otherwise, the pixel is retained in the corresponding three-dimensional point cloud data. After this processing, the filtered three-dimensional point cloud data are obtained, which still include the intensity data. A sketch of this filter follows.
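A minimal sketch of the noise filter; the patent does not fix how the eight neighborhood differences are aggregated, so the mean absolute difference used here is one plausible choice, and border pixels are simply kept.

```python
import numpy as np

def filter_depth_noise(points: np.ndarray, depth: np.ndarray,
                       threshold: float) -> np.ndarray:
    """Flag a pixel as noise when the mean absolute depth difference to its
    3x3 neighbourhood exceeds a global threshold, and drop the matching
    points from the cloud. `points` is the cloud reshaped to M x N x 3 so
    that it aligns pixel-for-pixel with the M x N depth image."""
    rows, cols = depth.shape
    keep = np.ones((rows, cols), dtype=bool)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = depth[r - 1:r + 2, c - 1:c + 2]
            diff = np.abs(window - depth[r, c]).sum() / 8.0
            if diff > threshold:
                keep[r, c] = False       # noise point: filter it out
    return points[keep]                  # K x 3 filtered point cloud
```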
In step 140, the monitoring processor performs facial feature detection processing on the filtered three-dimensional point cloud data to obtain facial three-dimensional point cloud data, and stores the facial three-dimensional point cloud data in a facial three-dimensional point cloud data list.
Specifically, the monitoring processor applies a facial feature detection method to the filtered three-dimensional point cloud data and extracts the facial three-dimensional point cloud data from it.
In a preferred embodiment of the present application, an openCV computer vision library is installed in the monitoring processor. Facial feature detection is performed on the filtered three-dimensional point cloud data based on the openCV library in the following steps.
firstly, the monitoring processor performs face detection processing on the intensity data of the three-dimensional point cloud data after filtering based on an openCV library to obtain face intensity data, and the face intensity data is stored in a face intensity data list. That is, the monitoring processor calls a corresponding function in the openCV library to perform face detection processing on the intensity data of the filtered three-dimensional point cloud data, and extracts face intensity data from the intensity data of the filtered three-dimensional point cloud data. And saves the face intensity data in a face intensity data list.
Then, the monitoring processor maps the face intensity data to the filtered three-dimensional point cloud data, extracts the facial three-dimensional point cloud data corresponding to the face intensity data from the filtered three-dimensional point cloud data, and stores the facial three-dimensional point cloud data in the facial three-dimensional point cloud data list. More specifically, the face intensity data is a set of pixel point data; the monitoring processor maps the face intensity data into the filtered three-dimensional point cloud data and extracts the facial three-dimensional point cloud data according to the pixel correspondence between the face intensity data and the filtered three-dimensional point cloud data, so that all facial point cloud data are extracted from one frame of filtered three-dimensional point cloud data and stored in the facial three-dimensional point cloud data list.
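As a sketch of these two steps, face rectangles can be detected on the intensity image with OpenCV's stock Haar cascade and mapped back to the point cloud by pixel index; the cascade choice and detection parameters are illustrative assumptions, not the patent's prescription.

```python
import cv2
import numpy as np

def detect_face_points(intensity: np.ndarray, points: np.ndarray) -> list:
    """Detect faces on the M x N intensity image with an OpenCV Haar
    cascade and map each detected rectangle back to the aligned
    M x N x 3 point cloud."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    img = cv2.normalize(intensity, None, 0, 255,
                        cv2.NORM_MINMAX).astype(np.uint8)
    face_list = []
    for (x, y, w, h) in cascade.detectMultiScale(img, 1.1, 5):
        # Pixels inside the rectangle index the same positions in the cloud.
        face_list.append(points[y:y + h, x:x + w].reshape(-1, 3))
    return face_list   # one face point cloud per detected face
```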
Of course, there are many facial feature detection methods, and the embodiment of the present application is not limited to extracting the facial three-dimensional point cloud data from the filtered three-dimensional point cloud data by the above method; other facial feature detection methods may also be used.
In step 150, the monitoring processor performs expression recognition processing based on the facial three-dimensional point cloud data to obtain the expression type of the facial three-dimensional point cloud data.
Specifically, the monitoring processor can perform expression recognition processing on the facial three-dimensional point cloud data using any of a number of expression recognition methods.
In the preferred scheme of the embodiment of the application, the monitoring processor trains in advance a deep convolutional neural network model capable of performing expression recognition. The monitoring processor then performs expression recognition processing on the facial three-dimensional point cloud data based on the pre-trained deep convolutional neural network model to obtain the expression type, where the expression types include anger, disgust, fear, happiness, sadness, surprise, and neutral. A sketch of such a classifier follows.
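A minimal sketch of a deep CNN over the seven expression types; the patent does not specify the architecture, so the PyTorch framework, the 64x64 single-channel depth-map input, and the layer sizes are all illustrative assumptions.

```python
import torch
import torch.nn as nn

EXPRESSIONS = ["anger", "disgust", "fear", "happiness",
               "sadness", "surprise", "neutral"]

class ExpressionNet(nn.Module):
    """Three conv/pool stages followed by a linear classifier."""
    def __init__(self, num_classes: int = len(EXPRESSIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(128 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

# Inference on a face rendered as a 64x64 depth map:
# logits = ExpressionNet()(depth_map)            # depth_map: (1, 1, 64, 64)
# expression = EXPRESSIONS[logits.argmax(dim=1).item()]
```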
The deep convolutional neural network model adopted in the embodiment of the application has high generalization capability and can improve the adaptability of expression recognition in scenarios such as supermarkets, thereby greatly improving the reliability and effectiveness of expression recognition.
In step 160, when the monitoring processor judges that the expression type is a preset expression type, the monitoring processor performs matching detection on the facial three-dimensional point cloud data and the image data in the personnel information database to be identified to obtain detection result data.
Specifically, the preset expression type is stored in the monitoring processor in advance and is determined through analysis by persons with professional knowledge and experience of crime, such as experts who have studied criminal psychology in depth and criminal investigators. The preset expression type is used to judge the recognized expression type: when the monitoring processor judges that the recognized expression type is the same as the preset expression type, this indicates that the person corresponding to the facial three-dimensional point cloud data is in an abnormal state and may pose a security threat to the public. At this point, the monitoring processor needs to match the facial three-dimensional point cloud data against the image data in the personnel information database to be identified.
The personnel information database to be identified includes a plurality of personnel information data, each of which includes image data. The personnel information database to be identified can be built from data of national security institutions, such as the personnel information data of fugitives. The image data may be three-dimensional point cloud data, two-dimensional image data, or both. Since the personnel information database to be identified includes a plurality of personnel information data, it also includes a plurality of first three-dimensional point cloud data and a plurality of first two-dimensional image data.
The monitoring processor performs matching detection between the facial three-dimensional point cloud data and the image data in the personnel information database to be identified to obtain the detection result, in the following steps.
First, the monitoring processor matches the facial three-dimensional point cloud data with the first three-dimensional point cloud data to obtain the maximum three-dimensional matching rate and the optimal three-dimensional point cloud data.
specifically, the monitoring processor can adopt any recognition method of recognizing the face by using the three-dimensional point cloud to match the three-dimensional point cloud data of the face with a plurality of first three-dimensional point cloud data in the personnel information database to be recognized, so as to obtain the maximum three-dimensional matching rate and the optimal three-dimensional point cloud data. In an alternative scheme of the embodiment of the application, the monitoring processor identifies the facial three-dimensional point cloud data by adopting a three-dimensional face recognition method based on a probability map model, and the matching process further specifically comprises the following steps:
In step 1601, the monitoring processor performs normalization preprocessing on the facial three-dimensional point cloud data.
In step 1602, the monitoring processor obtains a three-dimensional average face image from all the first three-dimensional point cloud data in the personnel information database to be identified.
In step 1603, the monitoring processor selects a plurality of feature points and a reference point on the three-dimensional average face image, calculates the geodesic distance from each feature point to the reference point, establishes a feature point model according to the geodesic distances, and locates the feature points on the facial three-dimensional point cloud data using the feature point model.
In step 1604, the monitoring processor extracts the neighborhood feature relations of the feature points on the first three-dimensional point cloud data and on the facial three-dimensional point cloud data using Gabor filters.
In step 1605, the monitoring processor establishes a probabilistic graphical model for each first three-dimensional point cloud data in the personnel information database to be identified and for the facial three-dimensional point cloud data according to the neighborhood feature relations.
In step 1606, the monitoring processor calculates the similarity between the facial three-dimensional point cloud data and each first three-dimensional point cloud data in the personnel information database to be identified according to the probabilistic graphical models, determines the maximum value of the similarities as the maximum three-dimensional matching rate, and determines the first three-dimensional point cloud data with the highest similarity as the optimal three-dimensional point cloud data. A sketch of steps 1604-1606 follows.
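A hedged sketch of steps 1604-1606: Gabor responses are sampled at the located feature points, and a plain cosine similarity stands in for the probabilistic graphical model, which the patent does not specify in detail; the filter-bank parameters are illustrative.

```python
import cv2
import numpy as np

def gabor_features(depth_img: np.ndarray, keypoints: list,
                   ksize: int = 21) -> np.ndarray:
    """Step 1604 sketch: sample the responses of a small Gabor filter
    bank (eight orientations) at each located feature point of a face
    depth map, yielding one descriptor row per feature point."""
    bank = [cv2.getGaborKernel((ksize, ksize), 4.0, theta, 10.0, 0.5)
            for theta in np.linspace(0, np.pi, 8, endpoint=False)]
    responses = [cv2.filter2D(depth_img, cv2.CV_32F, k) for k in bank]
    return np.array([[r[y, x] for r in responses] for (x, y) in keypoints])

def best_match(probe_feat: np.ndarray, gallery_feats: list):
    """Stand-in for steps 1605-1606: score the probe descriptors against
    every gallery entry and return the best similarity and its index."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = [cos(probe_feat.ravel(), g.ravel()) for g in gallery_feats]
    return max(sims), int(np.argmax(sims))
```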
Second, the monitoring processor matches the intensity data of the facial three-dimensional point cloud data with the first two-dimensional image data to obtain the maximum two-dimensional matching rate and the optimal two-dimensional image data.
Specifically, the monitoring processor matches the intensity data of the facial three-dimensional point cloud data with the plurality of first two-dimensional image data using an elastic graph matching face recognition method, and takes the highest matching rate as the maximum two-dimensional matching rate and the first two-dimensional image data with the highest matching rate as the optimal two-dimensional image data.
Finally, the monitoring processor generates the detection result data according to the maximum three-dimensional matching rate, the optimal three-dimensional point cloud data, the maximum two-dimensional matching rate and the optimal two-dimensional image data. The detection result data include the detection state.
Specifically, the monitoring processor determines the maximum value among the maximum three-dimensional matching rate, the maximum two-dimensional matching rate and the preset matching threshold:
and when the maximum value is the maximum three-dimensional matching degree, the monitoring processor sets the detection state as a success state. And the monitoring processor determines first personnel information according to the optimal three-dimensional point cloud data, and finally the monitoring processor generates detection result data according to the detection state and the first personnel information.
When the maximum value is the maximum two-dimensional matching rate, the monitoring processor sets the detection state to a success state, determines the first personnel information according to the optimal two-dimensional image data, and finally generates the detection result data according to the detection state and the first personnel information.
When the maximum value is the preset matching threshold, the monitoring processor sets the detection state to a failure state and generates the detection result data according to the detection state alone; in this case the personnel information data is empty. A sketch of this decision logic follows.
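A minimal sketch of the three-way decision; the function and variable names are illustrative, not part of the claimed method.

```python
def detection_result(max_3d: float, info_3d, max_2d: float, info_2d,
                     threshold: float):
    """Whichever of the two maximum matching rates and the preset
    matching threshold is largest fixes the detection state.
    Returns (state, personnel information)."""
    top = max(max_3d, max_2d, threshold)
    if top == threshold:
        return "failure", None           # neither match beat the threshold
    if top == max_3d:
        return "success", info_3d        # identified via the 3D match
    return "success", info_2d            # identified via the 2D match

# With the numbers from the example below (76% 3D, 93% 2D, 92% threshold):
# detection_result(0.76, person_a, 0.93, person_b, 0.92)
# -> ("success", person_b)
```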
In a specific example of the embodiment of the application, when the monitoring processor judges that the expression type obtained by the expression recognition processing is a preset expression type, the monitoring processor matches the facial three-dimensional point cloud data with all the first three-dimensional point cloud data in the personnel information database to be identified to obtain the similarity, i.e., the three-dimensional matching rate, between the facial three-dimensional point cloud data and each first three-dimensional point cloud data. The maximum similarity of 76% is determined as the maximum three-dimensional matching rate, and the first three-dimensional point cloud data with this 76% similarity is the optimal three-dimensional point cloud data. Meanwhile, the monitoring processor matches the intensity data of the facial three-dimensional point cloud data with all the first two-dimensional image data in the personnel information database to be identified to obtain the similarity, i.e., the two-dimensional matching rate, between the intensity data and each first two-dimensional image data. The maximum similarity of 93% is determined as the maximum two-dimensional matching rate, and the first two-dimensional image data with this 93% similarity is the optimal two-dimensional image data. The monitoring processor then compares the maximum three-dimensional matching rate, the maximum two-dimensional matching rate and the preset matching threshold and determines the detection state according to the largest of the three values. For example, if the preset matching threshold is 92%, the monitoring processor determines that the maximum two-dimensional matching rate of 93% is the maximum value and sets the detection state to a success state. Because the two-dimensional image data is associated with personnel information, the monitoring processor can determine the first personnel information according to the optimal two-dimensional image data; the first personnel information may include name, age, identification number, DNA data and the like. Finally, the monitoring processor generates the detection result data according to the detection state and the first personnel information; the detection result data include the detection state and the first personnel information data.
Through the matching process described above, the monitoring processor determines the detection result data, including the detection state and the personnel information data.
In step 170, the monitoring processor judges whether the detection state is the preset state.
Specifically, the monitoring processor judges whether the detection state is the preset state; in an alternative scheme of the embodiment of the application, the preset state is the success state. When the monitoring processor judges that the detection state is the preset state, this indicates that personnel information data corresponding to the facial three-dimensional point cloud data has been found in the personnel information database to be identified. This in turn indicates that a person posing a security threat is present in the monitored area to which the TOF camera belongs, and step 180 is performed.
In step 180, the monitoring processor generates an alarm message according to the personnel information and sends the alarm message to the early warning terminal.
Specifically, the monitoring processor determines the monitored area to which the camera belongs according to the TOF camera ID, generates an alarm message according to the identified personnel information, and sends the alarm message to the early warning terminal used by the staff. By checking the content of the alarm message displayed on the terminal, the staff can execute the corresponding early warning scheme.
The embodiment of the application provides a monitoring and early warning method for people flow analysis. An environmental image of a monitored area is acquired by a time-of-flight (TOF) camera to generate three-dimensional point cloud data; facial recognition is performed on the acquired three-dimensional point cloud data; the expression type of the recognized facial three-dimensional point cloud data is judged; when the analysis indicates a possible threat to safety, the facial three-dimensional point cloud data is further matched against the image data in a preset personnel information database to be identified; and when the matching analysis yields personnel information that may threaten environmental safety, alarm message data are generated and sent to an early warning terminal.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both; the various illustrative elements and steps have been described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in random access memory (RAM), read-only memory (ROM), programmable ROM, erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing detailed description of the application has been presented for purposes of illustration and description, and it should be understood that the application is not limited to the particular embodiments disclosed, but is intended to cover all modifications, equivalents, alternatives, and improvements within the spirit and principles of the application.