CN110321857B - Accurate customer group analysis method based on edge computing technology - Google Patents

Accurate customer group analysis method based on edge computing technology

Info

Publication number
CN110321857B
Authority
CN
China
Prior art keywords
face
thread
color difference
difference component
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910609505.0A
Other languages
Chinese (zh)
Other versions
CN110321857A (en)
Inventor
周圣强
宁松松
黄岗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OP Retail Suzhou Technology Co Ltd
Original Assignee
OP Retail Suzhou Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OP Retail Suzhou Technology Co Ltd
Priority to CN201910609505.0A
Publication of CN110321857A
Application granted
Publication of CN110321857B
Legal status: Active
Anticipated expiration

Abstract


Figure 201910609505

The invention discloses an accurate customer group analysis method based on edge computing technology. The method includes: external light passes through the lens and strikes the sensor surface; the sensor converts the light conducted from the lens into an electrical signal, which the internal A/D converter turns into a digital signal; the image signal passes through the video input module and, after processing by the video processing subsystem, is output as data in color difference component format. Beneficial effects: the invention implements video image analysis, target detection and tracking (one device supports up to 10 faces detected and tracked in the same frame), adjustable brightness compensation, face feature extraction, and the like on the hardware side. This reduces network bandwidth consumption, markedly improves throughput, and supports higher concurrency, which is a clear benefit to industries with strict real-time requirements and large batch passenger flows.


Description

Accurate customer group analysis method based on edge computing technology
Technical Field
The invention relates to the field of face recognition, and in particular to an accurate customer group analysis method based on edge computing technology.
Background
Business conditions change daily. Responding quickly and accurately to even subtle market shifts in the shortest possible time, while keeping operating costs to a minimum, has become the core factor in the success or failure of a business. For example:
Collecting passenger flow information in real time provides a scientific basis for operational management.
Monitoring passenger flow prevents accidents caused by overcrowding and helps keep public places safe.
Counting the passenger flow at each entrance and exit, together with its direction, makes it possible to judge accurately whether each entrance is well placed.
Counting the passenger flow in each main area provides a scientific basis for allocating the whole space sensibly.
Passenger flow statistics make it possible to set counter and shop rents at an objective price level.
In the prior art, however, passenger flow is often counted with infrared sensors. The cost of this approach is moderate, but the sensor readings are heavily affected by external interference and therefore carry large errors; at a wide doorway, people are also easily missed when several pass through at once.
The traditional technology has the following technical problems:
At present, passenger flow statistics based on cloud face detection and comparison consume a large amount of network bandwidth, which slows the response when multiple people enter a store at the same time.
Disclosure of Invention
The invention aims to provide an accurate customer group analysis method based on edge computing technology for counting and analyzing passenger flow.
In order to solve the above technical problem, the present invention provides an accurate customer group analysis method based on edge computing technology, including: external light passes through the lens and strikes the sensor surface; the sensor converts the light conducted from the lens into an electrical signal, which the internal A/D converter turns into a digital signal; the image signal passes through a video input module, is processed by a video processing subsystem, and is output as data in color difference component format;
the producer is responsible for receiving and distributing color difference component format data:
filling frame serial numbers (accumulation) in the color difference component data by a producer, sending the data to a video output module, and processing the data by a face recognition algorithm;
filling frame serial numbers (accumulation) in the color difference component data by a producer and storing the data in an image cache queue; processing results for synchronizing video frames and face recognition algorithms;
the producer sends the color difference component data to the MJPEG encoder, and accumulates the frame number in the MJPEG receiving thread so as to synchronize the color difference component data and the MJPEG frame number;
the consumer is responsible for receiving and processing the results of the face recognition algorithm:
the consumer receives the result of the face recognition algorithm (including the corresponding frame number), and matches the corresponding video frame through the image cache queue, draws the face position into the color difference component image, then sends the image to the encoder, and receives the code stream to provide the code stream playing service outwards;
sending the result of the face recognition algorithm to a face tracking thread;
and the face tracking thread selects, within a specified time, the best face target that meets the requirements and sends it to the snapshot thread. The snapshot thread receives the snapshot task, matches the corresponding picture in the JPEG queue by the frame number in the face information, and then crops the face picture according to the face position in the face information.
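The producer/consumer scheme above stamps each frame with an accumulating serial number and keeps a bounded image cache so that recognition results, which arrive later tagged with a frame number, can be matched back to their frames. A minimal sketch of that bookkeeping follows; this is an illustrative outline, not the patent's implementation, and all class and method names are invented for the example.

```python
import threading
from collections import deque


class FrameCache:
    """Hypothetical image cache queue keyed by frame number: pairs a
    recognition result (which carries a frame number) with its video frame."""

    def __init__(self, maxlen=64):
        self._frames = deque(maxlen=maxlen)  # (frame_no, frame) pairs, oldest dropped
        self._lock = threading.Lock()

    def put(self, frame_no, frame):
        with self._lock:
            self._frames.append((frame_no, frame))

    def match(self, frame_no):
        """Return the cached frame with this serial number, or None."""
        with self._lock:
            for no, frame in self._frames:
                if no == frame_no:
                    return frame
        return None


class Producer:
    """Stamps each incoming frame with an accumulating serial number and fans
    it out to the recognizer, the cache, and the MJPEG encoder, so all three
    paths share one frame numbering."""

    def __init__(self, cache):
        self.cache = cache
        self.frame_no = 0

    def distribute(self, frame, send_to_recognizer, send_to_mjpeg):
        self.frame_no += 1                     # accumulate the serial number
        send_to_recognizer(self.frame_no, frame)  # for the face recognition algorithm
        self.cache.put(self.frame_no, frame)      # for later result/frame matching
        send_to_mjpeg(self.frame_no, frame)       # keeps MJPEG numbering in sync
```

A consumer that later receives a recognition result for frame `n` simply calls `cache.match(n)` to recover the frame it should draw face boxes into.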
In one embodiment, the external light is filtered by an optical filter before reaching the sensor surface.
In one embodiment, the video processing subsystem processing includes auto-tracking white balance, lens shading, grayscale, sharpness, auto-exposure, and noise reduction.
In one embodiment, the encoder is an H264 encoder.
In one embodiment, the received code stream, which is served externally as a playing service, is an H264 code stream.
In one embodiment, the flv server and the rtsp server receive the H264 code stream and provide the code stream playing service outwards.
In one embodiment, the entire snapshot process is completed and then uploaded to a server for further processing.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any of the methods when executing the program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of any of the methods.
A processor for running a program, wherein the program when running performs any of the methods.
The invention has the beneficial effects that:
the invention implements video image analysis, target detection and tracking (one device supports up to 10 faces detected and tracked in the same frame), adjustable brightness compensation, face feature extraction, and the like on the hardware side. This reduces network bandwidth consumption, markedly improves throughput, and supports higher concurrency, which is a clear benefit to industries with strict real-time requirements and large batch passenger flows. At the same time, the camera and the algorithm chip are combined from a split design into a single unit, making the camera smaller and more compact and solving the installation problem in complex environments.
Drawings
FIG. 1 is a schematic diagram of the integrated edge-computing hardware processing part of the accurate customer group analysis method based on edge computing technology according to the present invention.
FIG. 2 is a schematic diagram of the background (software) part of the accurate customer group analysis method based on edge computing technology according to the present invention.
Detailed Description
The present invention is further described below in conjunction with the figures and specific examples, so that those skilled in the art can better understand and practice it; the examples are not intended to limit the invention.
An accurate customer group analysis method based on edge computing technology comprises the following steps: external light passes through the lens and strikes the sensor surface; the sensor converts the light conducted from the lens into an electrical signal, which the internal A/D converter turns into a digital signal; the image signal passes through a video input module, is processed by a video processing subsystem, and is output as data in color difference component format;
the producer is responsible for receiving and distributing color difference component format data:
filling frame serial numbers (accumulation) in the color difference component data by a producer, sending the data to a video output module, and processing the data by a face recognition algorithm;
filling frame serial numbers (accumulation) in the color difference component data by a producer and storing the data in an image cache queue; processing results for synchronizing video frames and face recognition algorithms;
the producer sends the color difference component data to the MJPEG encoder, and accumulates the frame number in the MJPEG receiving thread so as to synchronize the color difference component data and the MJPEG frame number;
the consumer is responsible for receiving and processing the results of the face recognition algorithm:
the consumer receives the result of the face recognition algorithm (including the corresponding frame number), and matches the corresponding video frame through the image cache queue, draws the face position into the color difference component image, then sends the image to the encoder, and receives the code stream to provide the code stream playing service outwards;
sending the result of the face recognition algorithm to a face tracking thread;
and the face tracking thread selects, within a specified time, the best face target that meets the requirements and sends it to the snapshot thread. The snapshot thread receives the snapshot task, matches the corresponding picture in the JPEG queue by the frame number in the face information, and then crops the face picture according to the face position in the face information.
In one embodiment, the external light is filtered by an optical filter before reaching the sensor surface.
In one embodiment, the video processing subsystem processing includes auto-tracking white balance, lens shading, grayscale, sharpness, auto-exposure, and noise reduction.
In one embodiment, the encoder is an H264 encoder.
In one embodiment, the received code stream, which is served externally as a playing service, is an H264 code stream.
In one embodiment, the flv server and the rtsp server receive the H264 code stream and provide the code stream playing service outwards.
In one embodiment, the entire snapshot process is completed and then uploaded to a server for further processing.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any of the methods when executing the program.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of any of the methods.
A processor for running a program, wherein the program when running performs any of the methods.
The invention has the beneficial effects that:
the invention implements video image analysis, target detection and tracking (one device supports up to 10 faces detected and tracked in the same frame), adjustable brightness compensation, face feature extraction, and the like on the hardware side. This reduces network bandwidth consumption, markedly improves throughput, and supports higher concurrency, which is a clear benefit to industries with strict real-time requirements and large batch passenger flows. At the same time, the camera and the algorithm chip are combined from a split design into a single unit, making the camera smaller and more compact and solving the installation problem in complex environments.
A specific application scenario of the present invention is described below:
hardware:
As shown in FIG. 1, external light passes through the lens, is filtered by the optical filter, and strikes the Sensor surface; the Sensor converts the light conducted from the lens into an electrical signal, which the internal A/D converter turns into a digital signal. The image signal passes through VI (the video input module) and is processed by VPSS (the video processing subsystem), covering AWB (auto-tracking white balance), lens shading, gamma (grayscale), sharpness, AE (auto exposure), and denoising, after which data in YUV (color difference component) format is output.
The FrameProducer is responsible for receiving and distributing YUV data:
1. the FrameProducer fills frame serial numbers (accumulation) in the YUV data and sends the YUV data to a VO (video output module) for processing by a face recognition algorithm.
2. FrameProducer fills frame numbers (accumulates) in YUV data and saves in YUVFrameList (image buffer queue). And the method is used for synchronizing the video frame and the processing result of the face recognition algorithm.
3. The FrameProducer sends the YUV data to the MJPEG encoder, and accumulates the frame number in the MJPEG receiving thread, thereby synchronizing the YUV data and the MJPEG frame number.
The FrameConsumer (consumer) is responsible for receiving and processing the results of the face recognition algorithm:
1. the FrameConsumer receives the result of the face recognition algorithm (including the corresponding frame number), matches the corresponding video frame through YUVFrameList, draws the face position into a YUV image, then sends the YUV image to an H264 encoder, and the flv server and the rtsp server receive the H264 code stream and provide the code stream playing service outwards.
2. The FrameConsumer sends the result of the face recognition algorithm to TrackingThread (the face tracking thread).
TrackingThread selects, within a specified time, the best face target that meets the requirements and sends it to the snapshot thread. The snapshot thread receives the snapshot task, matches the corresponding picture in JPEGRQ (the JPEG queue) by the frame number in the face information, and then crops the face picture according to the face position in the face information. This completes the snapshot process; the result is then uploaded to a server for further processing.
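The tracking-and-snapshot stage described above (the face tracking thread keeping the best face per target within a time window, and the snapshot thread cropping the matched JPEG by the face box) might look like the following in outline. The class names, the quality score, and the detection record layout are assumptions for illustration, not the patent's actual code.

```python
class FaceTracker:
    """Hypothetical tracker: within a fixed window, keep only the
    highest-quality detection per track, then hand the winners to the
    snapshot stage. The caller is expected to call flush() once the
    window (window_s seconds) has elapsed."""

    def __init__(self, window_s=2.0):
        self.window_s = window_s
        self._best = {}  # track_id -> (quality, detection)

    def update(self, track_id, detection, quality):
        cur = self._best.get(track_id)
        if cur is None or quality > cur[0]:
            self._best[track_id] = (quality, detection)

    def flush(self):
        """Return the best detection per track and reset for the next window."""
        out = [det for _, det in self._best.values()]
        self._best.clear()
        return out


def snapshot(detection, jpeg_queue):
    """Match the decoded JPEG whose frame number is recorded in the face
    information, then crop the face box (x, y, w, h) out of it. Frames are
    modeled here as nested lists of pixels for simplicity."""
    frame = jpeg_queue.get(detection["frame_no"])
    if frame is None:
        return None  # frame already evicted from the queue
    x, y, w, h = detection["box"]
    return [row[x:x + w] for row in frame[y:y + h]]
```

The key design point mirrored from the text is that the snapshot thread never re-runs detection: it relies entirely on the frame number and face position carried in the face information.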
Software:
1. The background configures preprocessing rules for each usage scenario, reducing the frequency of interface calls and thus resource consumption.
2. Face attribute detection is then performed (face attribute requirements can be defined per device: raise the requirements where conditions are good, and relax them appropriately otherwise). Photos that fail detection are recorded but receive no further business processing; photos that pass go on to face comparison.
3. The faceid is determined from the comparison result and compared with the time of that faceid's last store visit. If the time difference does not exceed the de-duplication window, it is treated as the same visit and the passenger flow is not counted again; otherwise, the qualifying data (faceid, gender, age, mood, etc.) is written to the passenger flow table. This de-duplicates the passenger flow and yields accurate statistics.
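The de-duplication rule in step 3, counting a visit only when the faceid's previous visit lies outside the de-duplication window, reduces to a small function. The two-hour window, the field names, and the in-memory table are assumptions for illustration; the patent does not specify them.

```python
from datetime import datetime, timedelta

# Assumed de-duplication window; the patent leaves the value unspecified.
DEDUP_WINDOW = timedelta(hours=2)


def record_visit(flow_table, faceid, now, attrs):
    """Count a visit only if this faceid's last recorded visit is outside
    the de-duplication window; otherwise treat it as the same visit.

    flow_table: dict mapping faceid -> last recorded visit (with 'time').
    Returns True if the visit was counted, False if it was de-duplicated."""
    last = flow_table.get(faceid)
    if last is not None and now - last["time"] <= DEDUP_WINDOW:
        return False  # same visit: do not count the passenger flow again
    # New or expired visit: update the passenger flow table with the
    # qualifying data (faceid, gender, age, mood, etc.).
    flow_table[faceid] = {"time": now, **attrs}
    return True
```

A second sighting 30 minutes after the first is ignored, while one three hours later counts as a new visit, which is exactly the "same visit within the window" behavior the text describes.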
The above-described embodiments are merely preferred embodiments given to illustrate the present invention fully; they do not limit its scope. Any equivalent substitution or modification made by those skilled in the art on the basis of the invention falls within its protection scope, which is defined by the claims.

Claims (7)

1. An accurate customer group analysis method based on edge computing technology, characterized by comprising the following steps:
step 101, after passing through a lens, external light strikes a sensor surface; the sensor converts the light conducted from the lens into an electrical signal, which is then converted into a digital signal by an internal A/D (analog-to-digital) converter; the image signal passes through a video input module, is processed by a video processing subsystem, and is output as color difference component format data;
step 102, a producer thread receives and distributes color difference component format data, which specifically comprises:
filling frame serial numbers in the color difference component format data by the producer thread, sending the data to a video output module, and processing the data by a face recognition algorithm;
filling frame numbers in the color difference component format data by the producer thread and storing the frame numbers in an image cache queue; processing results for synchronizing video frames and face recognition algorithms;
the producer thread sends the data in the color difference component format to the MJPEG encoder, and accumulates the frame number in the MJPEG receiving thread so as to synchronize the data in the color difference component format and the MJPEG frame number;
step 103, the consumer thread receives and processes the result of the face recognition algorithm, which specifically includes:
the consumer thread receives the result of the face recognition algorithm, matches the corresponding video frame through the image cache queue, draws the face position into the color difference component image, and then sends the image to the H264 encoder, and the FLV streaming service and the RTSP streaming service receive the code stream and provide the code stream playing service outwards;
and sending the result of the face recognition algorithm to a face tracking thread, selecting a face target meeting the requirement by the face tracking thread within a specified time, sending the face target to a snapshot thread, receiving a snapshot task by the snapshot thread, matching the corresponding pictures in the image queue through the frame number in the face information, and intercepting the face pictures according to the face position in the face information.
2. The method of claim 1, wherein external light passes through a lens, is filtered by a filter, and then is directed onto the sensor surface.
3. The edge-computing-technology-based accurate passenger group analysis method of claim 1, wherein the video processing subsystem processing comprises auto-tracking white balance, auto-exposure, and noise reduction.
4. The accurate customer group analysis method based on edge computing technology as claimed in claim 1, wherein the entire capture process, once complete, is uploaded to a server for further processing.
5. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 4 are implemented when the program is executed by the processor.
6. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 4.
7. A processor, characterized in that the processor is configured to run a program, wherein the program when running performs the method of any of claims 1 to 4.
CN201910609505.0A · 2019-07-08 (priority and filing date) · Accurate customer group analysis method based on edge computing technology · Active · CN110321857B (en)

Priority Applications (1)

Application Number: CN201910609505.0A · Priority Date: 2019-07-08 · Filing Date: 2019-07-08 · Title: Accurate customer group analysis method based on edge computing technology (CN110321857B)

Applications Claiming Priority (1)

Application Number: CN201910609505.0A · Priority Date: 2019-07-08 · Filing Date: 2019-07-08 · Title: Accurate customer group analysis method based on edge computing technology (CN110321857B)

Publications (2)

Publication Number · Publication Date
CN110321857A (en) · 2019-10-11
CN110321857B (en) · 2021-08-17

Family

ID=68123081

Family Applications (1)

Application Number: CN201910609505.0A · Status: Active · Publication: CN110321857B (en) · Priority/Filing Date: 2019-07-08

Country Status (1)

CountryLink
CN (1)CN110321857B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN110766474A (en) * · 2019-10-30 · 2020-02-07 · 浙江易时科技股份有限公司 · Sales exhibition room passenger flow batch statistics based on face recognition technology
CN113868470A (en) * · 2021-09-30 · 2021-12-31 · 成都考拉悠然科技有限公司 · An offline video retrieval method based on human body recognition

Citations (6)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
CN105488478A (en) * · 2015-12-02 · 2016-04-13 · 深圳市商汤科技有限公司 · Face recognition system and method
CN108710856A (en) * · 2018-05-22 · 2018-10-26 · 河南亚视软件技术有限公司 · Face recognition method based on a video stream
CN109086919A (en) * · 2018-07-17 · 2018-12-25 · 新华三云计算技术有限公司 · Scenic spot route planning method, device, system and electronic equipment
CN109492536A (en) * · 2018-10-12 · 2019-03-19 · 大唐高鸿信息通信研究院(义乌)有限公司 · Face recognition method and system based on a 5G architecture
CN109657588A (en) * · 2018-12-11 · 2019-04-19 · 上海工业自动化仪表研究院有限公司 · Intelligent edge-computing embedded terminal based on video recognition
CN109672751A (en) * · 2019-01-15 · 2019-04-23 · 特斯联(北京)科技有限公司 · Smart population statistics method and system based on edge computing

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
US7665113B1 (en) * · 2007-05-24 · 2010-02-16 · TrueSentry, Inc. · Rate adaptive video transmission and synchronization system
US9179196B2 (en) * · 2012-02-22 · 2015-11-03 · Adobe Systems Incorporated · Interleaved video streams
CN103034841B (en) * · 2012-12-03 · 2016-09-21 · TCL集团股份有限公司 · Face tracking method and system
US10007849B2 (en) * · 2015-05-29 · 2018-06-26 · Accenture Global Solutions Limited · Predicting external events from digital video content
CN109218731B (en) * · 2017-06-30 · 2021-06-01 · 腾讯科技(深圳)有限公司 · Screen projection method, device and system for a mobile device
CN107645673B (en) * · 2017-08-29 · 2020-05-15 · 湖北航天技术研究院总体设计所 · Real-time telemetry image decoding unit
CN108491822B (en) * · 2018-04-02 · 2020-09-08 · 杭州高创电子科技有限公司 · Face detection de-duplication method based on the limited cache of an embedded device
CN109522853B (en) * · 2018-11-22 · 2019-11-19 · 湖南众智君赢科技有限公司 · Face detection and search method for surveillance video


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Wang Haoxian, "Edge computing empowers big-data investigation," China Public Security, no. 2, Feb. 28, 2019, pp. 21-24 *
Shi Tianyun et al., "Overall design and evaluation of an intelligent railway passenger station system," Railway Computer Application, vol. 27, no. 7, Jul. 2018, pp. 9-16 *

Also Published As

Publication numberPublication date
CN110321857A (en) · 2019-10-11

Similar Documents

Publication · Publication Date · Title
US9277165B2 (en) · Video surveillance system and method using IP-based networks
CN109299703B (en) · Method and device for carrying out statistics on mouse conditions, and image acquisition equipment
EP2549738B1 (en) · Method and camera for determining an image adjustment parameter
KR101725884B1 (en) · Automatic processing of images
CN108960290A (en) · Image processing method and device, computer-readable storage medium, and electronic equipment
US11917158B2 (en) · Static video recognition
CN109618102B (en) · Focusing processing method and device, electronic device, and storage medium
WO2019233260A1 (en) · Method and apparatus for pushing advertisement information, storage medium, and electronic device
CN104182721A (en) · Image processing system and image processing method for improving face recognition rate
US8798369B2 (en) · Apparatus and method for estimating the number of objects included in an image
WO2018223394A1 (en) · Method and apparatus for photographing an image
CN110321857B (en) · Accurate customer group analysis method based on edge computing technology
CN113132695A (en) · Lens shading correction method and device, and electronic equipment
CN108040244A (en) · Snapshot method and device based on a light-field video stream, and storage medium
US11195298B2 (en) · Information processing apparatus, system, method for controlling information processing apparatus, and non-transitory computer readable storage medium
CN109447022B (en) · Lens type identification method and device
CN104038775B (en) · Channel information recognition method and device
CN115297257B (en) · Method, device and equipment for acquiring multiple video streams
CN108875477B (en) · Exposure control method, device and system, and storage medium
CN112907206B (en) · Business auditing method, device and equipment based on video object identification
US20220408013A1 (en) · DNN Assisted Object Detection and Image Optimization
CN115509739A (en) · High-concurrency scheduling and analysis system for real-time intelligent perception of videos
KR102567823B1 (en) · Youtube upload time analysis system
WO2020063688A1 (en) · Method and device for detecting video scene changes, and video acquisition device
CN110933304A (en) · Method and device for determining a region to be blurred, storage medium, and terminal equipment

Legal Events

Code · Title/Description
PB01 · Publication
SE01 · Entry into force of request for substantive examination
GR01 · Patent grant
PE01 · Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Accurate customer group analysis method based on edge computing technology

Effective date of registration:20220715

Granted publication date:20210817

Pledgee:Bank of Suzhou Co.,Ltd. Shishan road sub branch

Pledgor:SUZHOU WANDIANZHANG NETWORK TECHNOLOGY Co.,Ltd.

Registration number:Y2022320010387

PC01 · Cancellation of the registration of the contract for pledge of patent right

Granted publication date:20210817

Pledgee:Bank of Suzhou Co.,Ltd. Shishan road sub branch

Pledgor:SUZHOU WANDIANZHANG NETWORK TECHNOLOGY Co.,Ltd.

Registration number:Y2022320010387

