Background
Business information changes rapidly today. Responding quickly and accurately to even weak market signals in the shortest possible time, while saving operating costs to the greatest extent, has made efficient operation management a core factor in business success or failure. For example:
Passenger flow information is collected in real time, providing a scientific basis for operation management.
Excessive passenger flow can be detected early, preventing unnecessary accidents and keeping public places safe.
By counting the passenger flow at each entrance and exit, together with its direction, the rationality of the entrance and exit layout can be accurately judged.
By counting the passenger flow in each main area, a scientific basis is provided for a reasonable allocation of the whole area.
Passenger flow statistics also make it possible to determine rent levels for counters and shops objectively.
In the prior art, however, passenger flow is often counted with infrared sensors. The cost of this approach is moderate, but infrared readings are strongly affected by external interference, producing large errors; at wide doorways, counts are also easily missed when several people pass through simultaneously.
The traditional technology has the following technical problem:
at present, passenger flow statistics based on cloud face detection and comparison consume large amounts of network bandwidth, which slows the response when multiple people enter a store at the same time.
Disclosure of Invention
The invention aims to provide an accurate passenger group analysis method based on edge computing for counting and analyzing passenger flow.
In order to solve the above technical problem, the present invention provides an accurate passenger group analysis method based on edge computing, including: external light passes through the lens and strikes the sensor surface; the sensor converts the light conducted from the lens into an electrical signal, which is converted into a digital signal by the internal A/D converter; the image signal passes through a video input module, is processed by a video processing subsystem, and is output as data in color difference component format;
the producer is responsible for receiving and distributing the color difference component data:
the producer fills an accumulating frame sequence number into the color difference component data and sends the data to a video output module for processing by the face recognition algorithm;
the producer fills the accumulating frame sequence number into the color difference component data and stores the data in an image cache queue, which is used to synchronize video frames with the processing results of the face recognition algorithm;
the producer sends the color difference component data to the MJPEG encoder and accumulates the frame number in the MJPEG receiving thread, so that the color difference component data and the MJPEG frames remain synchronized by frame number;
the consumer is responsible for receiving and processing the results of the face recognition algorithm:
the consumer receives a result of the face recognition algorithm (including the corresponding frame number), matches the corresponding video frame through the image cache queue, draws the face position into the color difference component image, sends the image to the encoder, and receives the code stream to provide a stream playing service externally;
the consumer sends the result of the face recognition algorithm to a face tracking thread;
the face tracking thread selects the best face target meeting the requirements within a specified time and sends it to the snapshot thread. The snapshot thread receives the snapshot task, matches the corresponding picture in the JPEG queue by the frame number carried in the face information, and then crops the face picture according to the face position in the face information.
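The frame-number-based synchronization between producer and consumer described above can be sketched as follows. This is a minimal illustration, not the invention's actual implementation: the names `FrameCache`, `producer`, and `consumer`, the cache size, and the sentinel-based shutdown are all assumptions, and Python merely stands in for whatever language the device firmware uses.

```python
import queue
import threading


class FrameCache:
    """Bounded cache mapping frame numbers to frames (illustrative stand-in
    for the image cache queue), used to match recognition results, which
    carry a frame number, back to the video frame they were computed on."""

    def __init__(self, maxlen=64):       # maxlen is an assumed value
        self._frames = {}                # frame_no -> frame data
        self._order = []                 # insertion order, for eviction
        self._lock = threading.Lock()
        self._maxlen = maxlen

    def put(self, frame_no, frame):
        with self._lock:
            self._frames[frame_no] = frame
            self._order.append(frame_no)
            if len(self._order) > self._maxlen:
                oldest = self._order.pop(0)      # evict the oldest frame
                self._frames.pop(oldest, None)

    def get(self, frame_no):
        with self._lock:
            return self._frames.get(frame_no)


def producer(source, cache, recog_in, frame_no=0):
    """Number each frame with an accumulating sequence number, cache it,
    and hand it to the recognition stage."""
    for frame in source:
        frame_no += 1                    # accumulating frame sequence number
        cache.put(frame_no, frame)       # save for later result matching
        recog_in.put((frame_no, frame))  # hand off to the recognizer


def consumer(recog_out, cache, handle_result):
    """Match each recognition result to its video frame via the cache."""
    while True:
        item = recog_out.get()
        if item is None:                 # sentinel: no more results
            break
        frame_no, faces = item
        frame = cache.get(frame_no)      # look up the matching video frame
        if frame is not None:
            handle_result(frame, faces)
```

In this sketch the shared frame number is the only coupling between the two sides, which is what lets recognition run asynchronously without the consumer ever annotating the wrong frame.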
In one embodiment, the light is filtered by an optical filter before reaching the sensor surface.
In one embodiment, the video processing subsystem performs auto white balance tracking, lens shading correction, grayscale (gamma) adjustment, sharpening, auto-exposure, and noise reduction.
In one embodiment, the encoder is an H264 encoder.
In one embodiment, the code stream received and served externally is an H264 code stream.
In one embodiment, the flv server and the rtsp server receive the H264 code stream and provide the stream playing service externally.
In one embodiment, once the entire snapshot process is complete, the snapshot is uploaded to a server for further processing.
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of any of the above methods when executing the program.
A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, carrying out the steps of any of the above methods.
A processor for running a program, wherein the program when running performs any of the methods.
The invention has the beneficial effects that:
The invention implements video image analysis, target detection and tracking (up to 10 faces can be detected and tracked in a single frame per device), brightness compensation, face feature extraction, and the like at the hardware end. Network bandwidth consumption is thereby reduced, throughput is markedly improved, and higher concurrency can be supported, which is of clear benefit to industries with high real-time requirements and large passenger flows. Meanwhile, the camera and the algorithm chip are integrated instead of being split into separate units, so the camera is smaller and more compact and is far easier to install in complex environments.
Detailed Description
The present invention is further described below in conjunction with the figures and specific examples so that those skilled in the art can better understand and practice it; the examples, however, are not intended to limit the invention.
A specific application scenario of the present invention is described below:
hardware:
As shown in fig. 1, external light passes through the lens, is filtered by the optical filter, and strikes the Sensor surface; the Sensor converts the light transmitted from the lens into an electrical signal, which is converted into a digital signal by the internal A/D converter. The image signal passes through the VI (video input) module and is processed by the VPSS (video processing subsystem), including AWB (auto white balance tracking), lens shading correction, gamma (grayscale), sharpening, AE (auto exposure), and denoising, after which data in YUV (color difference component) format is output.
The FrameProducer is responsible for receiving and distributing YUV data:
1. The FrameProducer fills an accumulating frame sequence number into the YUV data and sends the data to the VO (video output module) for processing by the face recognition algorithm.
2. The FrameProducer fills the accumulating frame sequence number into the YUV data and saves it in the YUVFrameList (image cache queue), which is used to synchronize video frames with the processing results of the face recognition algorithm.
3. The FrameProducer sends the YUV data to the MJPEG encoder and accumulates the frame number in the MJPEG receiving thread, thereby keeping the YUV data and the MJPEG frames synchronized by frame number.
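The three distribution paths above can be outlined as follows. This is an illustrative sketch only: `frame_producer`, `mjpeg_receiver`, the plain-dict cache, and the `None` shutdown sentinel are assumptions, not the actual firmware code.

```python
import queue  # standard thread-safe queues stand in for inter-module channels


def frame_producer(yuv_frames, vo_queue, cache, mjpeg_encoder_in):
    """Distribute each YUV frame along the three paths: (1) to the VO
    module for face recognition, (2) into the image cache queue, and
    (3) to the MJPEG encoder. The same accumulating frame number is
    attached on every path so the streams stay in step."""
    frame_no = 0
    for frame in yuv_frames:
        frame_no += 1                      # accumulating frame sequence number
        vo_queue.put((frame_no, frame))    # path 1: recognition input
        cache[frame_no] = frame            # path 2: YUVFrameList
        mjpeg_encoder_in.put(frame)        # path 3: MJPEG encoder


def mjpeg_receiver(mjpeg_out):
    """MJPEG receiving thread: number the encoded frames with the same
    accumulating counter, so each JPEG matches its YUV frame number."""
    jpeg_queue, frame_no = [], 0
    while True:
        jpeg = mjpeg_out.get()
        if jpeg is None:                   # sentinel: encoder finished
            return jpeg_queue
        frame_no += 1
        jpeg_queue.append((frame_no, jpeg))
```

Because the encoder emits frames in the order it received them, counting on the receiving side reproduces the producer's numbering without any metadata having to travel through the encoder itself.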
The FrameConsumer (consumer) is responsible for receiving and processing the results of the face recognition algorithm:
1. The FrameConsumer receives a result of the face recognition algorithm (including the corresponding frame number), matches the corresponding video frame through the YUVFrameList, draws the face position into the YUV image, and sends it to the H264 encoder; the flv server and the rtsp server receive the H264 code stream and provide the stream playing service externally.
2. The FrameConsumer sends the result of the face recognition algorithm to the TrackingThead (face tracking thread).
The TrackingThead (face tracking thread) selects the best face target meeting the requirements within a specified time and sends it to the snapshot thread. The snapshot thread receives the snapshot task, matches the corresponding picture in the JPEGRQ (JPEG queue) by the frame number in the face information, and then crops the face picture according to the face position in the face information. This completes the snapshot process, after which the result is uploaded to a server for further processing.
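The snapshot thread's matching-and-cropping step can be sketched as follows. The `jpeg_queue` pair list, the `face_info` dict layout, and the use of a plain 2-D pixel list in place of a decoded JPEG are all illustrative assumptions.

```python
def snapshot(jpeg_queue, face_info):
    """Match the picture whose frame number equals the one carried in the
    face info, then crop the face region out of it.

    jpeg_queue: list of (frame_no, image) pairs; image is assumed here to
                be a 2-D row-major list of pixels, for illustration only.
    face_info:  dict with 'frame_no' and 'box' = (x, y, w, h).
    """
    image = next((img for no, img in jpeg_queue
                  if no == face_info["frame_no"]), None)
    if image is None:
        return None                          # frame already evicted
    x, y, w, h = face_info["box"]
    # crop rows y..y+h and columns x..x+w (the face position)
    return [row[x:x + w] for row in image[y:y + h]]
```

A real implementation would crop the decoded JPEG rather than a pixel list, but the lookup-by-frame-number followed by a box crop is the same.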
Software:
1. The backend configures preprocessing rules for the various usage scenarios, thereby reducing the resource consumption caused by frequent interface calls.
2. Face attribute detection is then performed (face attribute requirements can be defined per device: raised in scenes with good conditions and appropriately lowered otherwise). Photos that fail detection are stored as records and receive no further business processing; photos that pass are sent on for face comparison.
3. The faceid is determined from the comparison result and compared with the time of that faceid's last store visit. If the time difference does not exceed the deduplication window, it is treated as the same visit and the passenger flow is not recorded again; otherwise, the qualifying data (faceid, gender, age, mood, etc.) are updated into the passenger flow table, deduplicating the passenger flow and achieving accurate statistics.
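The deduplication rule in step 3 can be sketched as follows; the 30-minute window, the table layout, and the function name are assumptions for illustration only.

```python
DEDUP_WINDOW = 30 * 60  # deduplication window in seconds (assumed value)


def record_visit(traffic_table, last_seen, faceid, now, attrs):
    """Count a visit only if this faceid was last seen longer than the
    deduplication window ago; otherwise treat it as the same visit.

    traffic_table: list of visit records (stand-in for the passenger
                   flow table); last_seen: faceid -> last counted time.
    """
    prev = last_seen.get(faceid)
    if prev is not None and now - prev <= DEDUP_WINDOW:
        return False                    # same visit: not counted again
    last_seen[faceid] = now
    # update the qualifying data (faceid, gender, age, mood, etc.)
    traffic_table.append({"faceid": faceid, "time": now, **attrs})
    return True
```

Whether a rejected repeat sighting should refresh the last-seen time is not specified in the text; this sketch leaves it unchanged, so the window is measured from the last counted visit.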
The above-mentioned embodiments are merely preferred embodiments given to fully illustrate the present invention, and the scope of the present invention is not limited thereto. Equivalent substitutions or changes made by those skilled in the art on the basis of the present invention all fall within its protection scope, which is defined by the claims.