Technical Field
The present invention relates to the technical field of image processing based on deep learning, and in particular to a method, system, and computer device for analyzing age and gender attributes from 2D face pictures.
Technical Background
When walking along a street or through shopping malls, shops, and supermarkets, an attentive observer will notice that cameras of all kinds are spread throughout daily life. Most of these cameras are used for data recording and have a storage function, and in certain situations (case tracking, storefront monitoring, etc.) the recorded footage is retrieved for historical retrospective analysis. These cameras generate a large amount of data every day, but the vast majority of it is used only for retrospection and is not fully exploited. For example, although the owner of a retail store has the historical footage recorded by the store's cameras, that data is not used to analyze the age and gender distribution of the customers entering the store; without such data support, it is difficult to optimize product placement in a targeted way. To solve practical problems in scenarios like these, we invented a method for analyzing age and gender attributes based on 2D face photos.
The Chinese invention published on 2019-06-07 under publication number CN109858388A discloses a smart tourism management system, comprising: a drone aerial-photography tourist distribution system, a scenic-spot face recognition system, a scenic-spot entrance crowd-flow prediction system, a scenic-spot basic information data system, a hotel data statistics system, a cloud data management platform, and a mobile terminal. The scenic-spot face recognition system uses face recognition technology to identify the age stage and gender of tourists, and includes the following steps:
First, a face database is established. The face images in the database include photos of different ages and different expressions, and the photo background is consistent with the background of photos taken by the camera at the scenic-spot entrance;
Then, the database is manually organized by gender, with the training samples divided into a male image set and a female image set, and the databases named by English initials. The first layer makes the initial division by gender; the second layer divides the male or female layer into youth (YM), middle-aged (MM), and old (OM); the third layer divides the age range; the fourth layer divides databases with smaller age intervals, so that "MM-i-13" is interpreted as the i-th middle-aged man belonging to the 3rd sub-database of the 1st database; the fifth layer performs age estimation;
Finally, an average-age estimation method age = Li/Nij is adopted, where Li is the age group of the database and Nij is the total number of images used to train the sub-database; the pictures are divided into multiple photos per person per year of age and then trained separately;
The training model of the scenic-spot face recognition system is as follows:
First, face recognition pre-training is performed on the face database to obtain a deep-learning face model. The model is then fine-tuned on a face attribute data set with respect to hair, eye, nose, mouth, and beard features to obtain a face attribute model, the features of the network's fully connected layers are concatenated into a face feature vector, and finally a random forest classifier is trained and tested on the data set;
Then, the ages are divided into four categories: 5-15 years, 15-25 years, 25-50 years, and over 50 years. The cloud data management platform classifies the tourist ages produced by the scenic-spot face recognition system into these four categories and computes the proportion of tourists in each category; when a tourist queries scenic-spot information on the terminal APP and enters his or her age and gender, the system pushes scenic-spot data suited to that age stage and gender. However, this invention can only predict an age group rather than a specific age value, its range of application scenarios is narrow, and its deep-learning face model does not use weighted averaging over labels from multiple annotators, so its results are inaccurate.
Summary of the Invention
The technical problem to be solved by the present invention is to provide a method, system, device, and medium for analyzing the age and gender attributes of a 2D face image, which can quickly and accurately determine the age and gender of a face image and statistically analyze the age and gender information captured by cameras in various scenarios.
In a first aspect, the method of the present invention is implemented as follows: a method for analyzing the age and gender attributes of a 2D face image, comprising:
Step S1: acquiring the 2D face picture to be detected;
Step S2: performing face detection on the single 2D face picture with a trained first neural network model to obtain the face-frame position and the facial feature point positions; correcting and cropping the picture according to the face-frame position and the facial feature point positions to obtain a corrected, standardized 2D face picture;
Step S3: predicting the age and gender attributes of the corrected, standardized 2D face picture with a trained second neural network model to obtain raw prediction values;
Step S4: determining the age and gender attributes of the face according to the raw prediction values and an age-gender attribute selection strategy, and outputting the predicted age and gender;
Step S5: outputting the predicted age and gender results to the back end and recording them in a database for subsequent data analysis.
In a second aspect, the system of the present invention is implemented as follows: a system for analyzing the age and gender attributes of a 2D face image, comprising:
a data acquisition module, configured to acquire the 2D face picture to be detected;
a first neural network model, configured to perform face detection on the single 2D face picture to obtain the face-frame position and the facial feature point positions, and to correct and crop the picture according to the face-frame position and the facial feature point positions to obtain a corrected, standardized 2D face picture;
a second neural network model, configured to predict the age and gender attributes of the corrected, standardized 2D face picture to obtain raw prediction values;
a prediction module, configured to determine the age and gender attributes of the face according to the raw prediction values and an age-gender attribute selection strategy, and to output the predicted age and gender;
a result output module, configured to output the predicted age and gender results to the back end and record them in a database for subsequent data analysis.
In a third aspect, the computer device of the present invention is implemented as follows: a computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the method of the present invention described above.
In a fourth aspect, the medium of the present invention is implemented as follows: a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1 to 5.
Compared with the prior art, the beneficial effects of the present invention are as follows:
(1) The method, system, and computer device for analyzing age and gender attributes based on 2D face photos of the present invention can quickly detect the face frame and facial feature points in a picture through the face detection neural network model and output the face-frame position and the facial feature point positions; after the face picture is corrected and augmented, a corrected, standardized face picture is cropped out.
(2) The method, system, and computer device for analyzing age and gender attributes based on 2D face photos of the present invention can quickly and accurately detect the age and gender attribute information of faces captured by a camera, helping a store owner to accurately grasp the age and gender distribution of in-store customers, so that the analyzed data can be used to formulate effective strategies to increase turnover.
(3) The age range predicted by the present invention is [0-90] years. After the data is acquired and passed through the model, a very accurate apparent age is obtained; the final prediction is an age value and a gender, rather than an age group and a gender.
(4) The data of the present invention originates from real use scenarios, and a weighted average over labels from multiple annotators is used to make the results more accurate. After face detection, unified correction and cropping are performed for standardization. The base model of the age-gender prediction model was determined after reading a large number of related papers, a branch structure for feature processing was designed on top of it, and the outputs of the age-gender prediction model are post-processed to obtain a precise age value and gender.
Description of the Drawings
The present invention will be further described below with reference to the accompanying drawings in conjunction with the embodiments.
Fig. 1 is a flowchart of the use of the age-gender attribute analysis method based on 2D face photos of the present invention in a real scenario;
Fig. 2 is a structural diagram of the face detection neural network model of an embodiment of the present invention, in which 2(a) is the P-Net network structure of the face detection model, 2(b) is the R-Net network structure of the face detection model, and 2(c) is the O-Net network structure of the face detection model;
Fig. 3 shows the age and gender prediction results of the present invention in an actual scenario. In the upper-left corner of the picture, the predicted gender (M for male, F for female) is marked together with its corresponding prediction value (in the range 0-1: the closer to 0, the more female-like; the closer to 1, the more male-like) and the predicted age, and the position of the detected face frame and the five coordinate points (left-eye pupil, right-eye pupil, nose tip, left mouth corner, and right mouth corner) are drawn in the picture;
Fig. 4 is an architecture diagram of the system of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
In one aspect, the present invention provides a method for analyzing the age and gender attributes of a 2D face image. By analyzing video with a deep-learning face detection algorithm and a face age-gender analysis algorithm, it can serve scenarios that require judging the age and gender of faces. It can effectively, quickly, and accurately detect the positions of faces and their feature points in videos and/or pictures and predict the age and gender attributes of those faces, helping projects or scenarios with face age-gender attribute requirements to analyze face pictures, so that the relevant data can be better analyzed and utilized.
As shown in Fig. 1, the method of the present invention comprises:
Step S1: acquiring the 2D face picture to be detected;
Step S2: performing face detection on the single 2D face picture with the trained first neural network model to obtain the face-frame position and the facial feature point positions; correcting and cropping the picture according to the face-frame position and the facial feature point positions to obtain a corrected, standardized 2D face picture;
Step S3: predicting the age and gender attributes of the corrected, standardized 2D face picture with the trained second neural network model to obtain raw prediction values;
Step S4: determining the age and gender attributes of the face according to the raw prediction values and an age-gender attribute selection strategy, and outputting the predicted age and gender;
Step S5: outputting the predicted age and gender results to the back end and recording them in a database for subsequent data analysis.
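The correction in step S2 can be pictured as rotating the face by the two pupil landmarks so the eye line is horizontal, then centering it in a standardized crop. The following is a minimal illustrative sketch, not the patented implementation: the 128×128 crop size matches the second network's input described later, while the landmark ordering and the 40%-height eye-line placement are assumptions made for illustration.

```python
import numpy as np

def alignment_transform(landmarks, out_size=128):
    """Build a 2x3 similarity transform that rotates the face so the two
    pupils are horizontal and centers the eye midpoint in an out_size crop.

    landmarks: five (x, y) points ordered as [left pupil, right pupil,
    nose tip, left mouth corner, right mouth corner] (ordering assumed).
    """
    pts = np.asarray(landmarks, dtype=float)
    left_eye, right_eye = pts[0], pts[1]
    dx, dy = right_eye - left_eye
    angle = np.degrees(np.arctan2(dy, dx))      # tilt of the eye line, in degrees
    center = (left_eye + right_eye) / 2.0       # rotate about the eye midpoint

    theta = np.radians(-angle)                  # rotate back to horizontal
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    rot = np.array([[cos_t, -sin_t], [sin_t, cos_t]])
    # Place the eye midpoint at the crop center horizontally, 40% height (assumed).
    shift = np.array([out_size / 2.0, out_size * 0.4])
    trans = shift - rot @ center
    return np.hstack([rot, trans[:, None]])     # 2x3 matrix, e.g. for cv2.warpAffine

# Tilted face: the right eye is 10 px lower than the left, so a rotation is needed.
M = alignment_transform([(40, 50), (90, 60), (65, 80), (50, 100), (85, 100)])
```

Applying the returned matrix to the two pupil coordinates maps them to the same y value, which is exactly the "corrected, standardized" property the pipeline relies on before cropping.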
In another aspect, as shown in Fig. 4, the present invention also provides a system for analyzing the age and gender attributes of a 2D face image, comprising:
a data acquisition module, configured to acquire the 2D face picture to be detected;
a first neural network model, configured to perform face detection on the single 2D face picture to obtain the face-frame position and the facial feature point positions, and to correct and crop the picture according to the face-frame position and the facial feature point positions to obtain a corrected, standardized 2D face picture;
a second neural network model, configured to predict the age and gender attributes of the corrected, standardized 2D face picture to obtain raw prediction values;
a prediction module, configured to determine the age and gender attributes of the face according to the raw prediction values and an age-gender attribute selection strategy, and to output the predicted age and gender;
a result output module, configured to output the predicted age and gender results to the back end and record them in a database for subsequent data analysis.
In yet another aspect, the present invention also provides a computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the method for analyzing the age and gender attributes of a 2D face image of the present invention.
In a further aspect, the present invention also provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for analyzing the age and gender attributes of a 2D face image of the present invention.
The specific steps for implementing the solution of the present invention are as follows:
I. Neural network model training
The training data consists of two parts: a public data set and a non-public data set. First, we collected the public IMDB-WIKI data set from the Internet for pre-training. At the same time, we collected face data captured by cameras in real scenarios and used it to fine-tune the model and improve its prediction accuracy in those scenarios, because a characteristic of neural network models is that they perform very well on data from the same scenario as their training data but suffer a considerable loss of accuracy in a different scenario.
1. Training of the first neural network
First, pictures and videos of various people were collected from cameras in various scenarios, and the face region and the five facial feature points (left-eye pupil, right-eye pupil, nose tip, left mouth corner, and right mouth corner) were manually annotated with bounding rectangles; the annotated data and the corresponding labels were then fed into the first neural network for training. In a specific embodiment, the first neural network model adopts the MTCNN (Multi-task Cascaded Convolutional Networks) face detection model, which consists of three networks: P-Net (Proposal Network), R-Net (Refine Network), and O-Net (Output Network). Obtaining the face-frame position and the facial feature point positions therefore comprises three stages:
(1) The P-Net network obtains candidate windows for face regions together with bounding-box regression vectors, uses the regression vectors to calibrate the candidate windows, and then merges highly overlapping candidates through non-maximum suppression, outputting initial face-frame predictions and five facial feature points;
(2) The R-Net network removes false-positive regions through bounding-box regression and non-maximum suppression, outputting more accurate face-frame predictions and five facial feature points;
(3) The O-Net network further removes false-positive regions through bounding-box regression and non-maximum suppression, outputting still more accurate face-frame predictions and five facial feature points.
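The non-maximum suppression used by all three stages to merge highly overlapping candidates can be sketched as a generic greedy IoU filter (an illustration of the standard algorithm, not the patent's exact code):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns indices of the boxes kept, highest score first.
    """
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()[::-1]          # highest confidence first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of box i with every remaining box.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_threshold]  # drop highly overlapping candidates
    return keep

# Two near-duplicate face candidates and one distant one: only the better
# scoring duplicate and the distant box survive.
kept = nms([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], [0.9, 0.8, 0.7])
```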
The three networks are described in detail below:
P-Net: the network structure is shown in Fig. 2(a). It takes a 12 pixel × 12 pixel × 3 channel input, which passes through a 3×3 convolution -> max-pooling layer -> 3×3 convolution -> 3×3 convolution -> 3×3 convolution to produce a 1×1×32 output.
R-Net: the network structure is shown in Fig. 2(b). It mainly removes false-positive regions (regions the network predicts as faces that are in fact not) through bounding-box regression and NMS. Because its structure differs from that of P-Net — the input is enlarged to 24 pixel × 24 pixel × 3 channel and a fully connected layer is added — it suppresses false positives more effectively.
O-Net: the network structure is shown in Fig. 2(c). The input is further enlarged to 48 pixel × 48 pixel × 3 channel, so the input information is more fine-grained, and this network has one more convolutional layer than R-Net; its role is the same as that of R-Net, but it supervises the face region more closely. As the final stage of the whole model, the five facial feature points it outputs (landmarks: left-eye pupil, right-eye pupil, nose tip, leftmost mouth point, and rightmost mouth point) are much more accurate than those of the previous two stages. All three sub-networks output the coordinates of the five facial feature points, but because the inputs of P-Net and R-Net are too small and carry little facial feature point information, the weight coefficient of the landmark regression loss in the first two stages is set to a relatively small 0.5, while the landmark loss of the O-Net network in the final stage uses a larger weight of 1.0. Because landmark prediction is most accurate at the O-Net stage, the O-Net output is taken in practice as the facial feature point prediction result; the O-Net input is also the largest of the three sub-networks, which helps extract facial features more accurately.
The loss function describing the face detection features of the MTCNN face detection model consists of three parts: a face classification loss (face/non-face classifier), a face-frame loss (bounding-box regression), and a facial feature point loss (landmark localization).
(a) The face classification loss function is expressed as follows:
L_i^det = −( y_i^det · log(p_i) + (1 − y_i^det) · log(1 − p_i) )
where i denotes the i-th sample; p_i denotes the probability that the i-th sample is a face, p_i ∈ [0, 1]; and y_i^det denotes the true label of the i-th sample, y_i^det ∈ {0, 1};
(b) The face-frame loss function is expressed as follows:
L_i^box = ‖ ŷ_i^box − y_i^box ‖_2^2
where ŷ_i^box is obtained from the network prediction and y_i^box is the actual ground-truth coordinate; y^box is a four-tuple consisting of the horizontal and vertical coordinates of the upper-left corner of the face frame, the height of the face frame, and the width of the face frame;
(c) The facial feature point loss function is expressed as follows:
L_i^landmark = ‖ ŷ_i^landmark − y_i^landmark ‖_2^2
where ŷ_i^landmark is obtained from the network prediction and y_i^landmark is the actual ground-truth facial feature point coordinates; y^landmark is a ten-tuple composed of the coordinates of the 5 facial feature points.
In summary, the overall loss function of the whole model training process can be expressed as follows:
min Σ_{i=1}^{N} Σ_{j ∈ {det, box, landmark}} α_j · β_i^j · L_i^j
P-Net, R-Net: (α_det = 1, α_box = 0.5, α_landmark = 0.5)
O-Net: (α_det = 1, α_box = 0.5, α_landmark = 1)
where N is the number of training samples; α_det, α_box, and α_landmark denote the weights of the face classification loss, the face-frame loss, and the facial feature point loss respectively; β_i^j ∈ {0, 1} indicates whether the corresponding loss applies to the i-th input (i.e., whether it is a face input); and L_i^det, L_i^box, and L_i^landmark denote the face classification loss function, the face-frame loss function, and the facial feature point loss function defined above.
As can be seen from the above, although all three loss functions are computed during training, not every loss is meaningful for every input; the above formula is therefore defined so that different losses with different weights are applied to different inputs. It can be seen that in the P-Net and R-Net networks the loss weight α_landmark of landmark regression is smaller than in the O-Net part, because the first two stages focus on filtering out non-face bounding boxes. The purpose of β is that, for a non-face input for example, only the meaningful face classification loss needs to be computed, while the meaningless bounding-box and landmark regression losses are skipped, since they do not apply to non-face regions.
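The per-stage weighting by α and the per-sample gating by β described above can be sketched as follows. The α values are the ones stated in the text; the mapping from sample type to β values is an assumption made for illustration (the source only gives the non-face example):

```python
# Per-stage task weights (alpha), as stated in the description above.
STAGE_WEIGHTS = {
    "pnet": {"det": 1.0, "box": 0.5, "landmark": 0.5},
    "rnet": {"det": 1.0, "box": 0.5, "landmark": 0.5},
    "onet": {"det": 1.0, "box": 0.5, "landmark": 1.0},
}

# Which losses are meaningful for which sample type (beta in {0, 1}).
# Conventions assumed for illustration: non-face negatives contribute only the
# face/non-face loss; face positives add box regression; landmark-annotated
# samples contribute the landmark loss.
BETA = {
    "negative": {"det": 1, "box": 0, "landmark": 0},
    "positive": {"det": 1, "box": 1, "landmark": 0},
    "landmark": {"det": 0, "box": 0, "landmark": 1},
}

def total_loss(stage, samples):
    """samples: list of (sample_type, {"det": ..., "box": ..., "landmark": ...})
    pairs, with the three per-sample loss values already computed."""
    alpha = STAGE_WEIGHTS[stage]
    total = 0.0
    for sample_type, losses in samples:
        beta = BETA[sample_type]
        total += sum(alpha[j] * beta[j] * losses[j]
                     for j in ("det", "box", "landmark"))
    return total / len(samples)
```

For a non-face sample, only the classification term survives regardless of how large the (meaningless) box and landmark losses are, which is exactly the role of β described above.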
After training, a deep-learning neural network model that can accurately detect face frames and facial feature points is obtained. It is used to predict the positions of face frames and facial feature points in videos and/or pictures, and the face is then extracted for the next step, the analysis of its age and gender attributes.
2. Training of the second neural network model
In a specific embodiment, the second neural network model uses LightCNN as the feature extraction layer, takes a 128 pixel × 128 pixel × 3 channel input, and outputs a 512-dimensional vector as the extracted feature, which is followed by three parallel branches:
The first branch predicts gender. The predicted value is between 0 and 1: the closer to 1, the more certain the model is that the photo shows a male; the closer to 0, the more certain it is that the photo shows a female;
The second branch performs age-group classification. The predicted age range is set to 0-90 years and evenly divided into 18 segments (one segment per 5 years), so the second branch has 18 outputs, each representing the confidence of one segment; during training and prediction, the segment with the highest confidence is selected as the age-group prediction;
The third branch likewise has 18 outputs, each corresponding to a small-range adjustment value; combined with the result of the second branch, the predicted age value is obtained.
For example, if the second branch assigns the highest confidence to the fifth age segment, corresponding to the range [20, 25) with a center age of 22.5 years, and the fifth output of the third branch is 1.2, then combining the results of the second and third branches gives a final predicted age of 22.5 + 1.2 = 23.7 ≈ 24 years.
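The decoding in this example can be written directly: pick the most confident of the 18 five-year segments, take its center, and add the corresponding offset from the third branch. A sketch of that post-processing:

```python
def decode_age(segment_scores, offsets, bin_width=5.0):
    """segment_scores: 18 confidences for the segments [0,5), [5,10), ..., [85,90).
    offsets: 18 per-segment adjustment values (each roughly in [-2.5, 2.5]).
    Returns the predicted age, rounded to the nearest year."""
    k = max(range(len(segment_scores)), key=lambda i: segment_scores[i])
    center = k * bin_width + bin_width / 2.0   # center age of the winning segment
    return round(center + offsets[k])

# Worked example from the text: the fifth segment ([20, 25), center 22.5) wins
# and its offset is 1.2, so the prediction is 22.5 + 1.2 = 23.7 -> 24.
scores = [0.0] * 18
scores[4] = 0.9            # fifth segment (index 4) has the highest confidence
offsets = [0.0] * 18
offsets[4] = 1.2
age = decode_age(scores, offsets)
```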
1) The first branch (gender prediction branch) uses the mean squared error MSELoss as its loss function, with the formula:
MSELoss = (1/n) · Σ_{i=1}^{n} (ŷ_i − y_i)²
where ŷ denotes the predicted probability of the male gender attribute; y denotes the true value of the gender attribute, y ∈ {0, 1}, with 0 meaning the picture shows a female and 1 meaning it shows a male; and n denotes the number of attribute classes;
2) The second branch (age-group classification branch) uses the cross-entropy CELoss as its loss function, with the formula:
CELoss = −Σ_{i=1}^{n} y_i · log(ŷ_i)
where ŷ denotes the predicted probability values of all age segments; y denotes the true values of all age segments, y ∈ {0, 1}, with 0 meaning the sample is not in that segment and 1 meaning it is (for any one picture, exactly one segment's label is 1 and all others are 0); ŷ_i denotes the predicted probability of the i-th age segment; y_i denotes the true value of the i-th age segment; and n denotes the number of age segments;
3) The third branch (intra-segment age adjustment branch) likewise uses the cross-entropy CELoss as its loss function, where ŷ denotes the predicted regression values of the per-segment adjustments; y denotes the true values over all age segments, y ∈ [−2.5, 2.5]; ŷ_i denotes the predicted regression value of the adjustment for the i-th age segment; y_i denotes the true regression value for the i-th age segment; and n denotes the number of age segments.
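The gender MSE loss and the age-segment cross-entropy above can be sketched in plain Python (an illustration of the formulas, not the actual training code; the one-hot label lets the cross-entropy sum collapse to a single term):

```python
import math

def gender_mse(pred, target):
    """Mean squared error between the predicted male probability (0-1)
    and the 0/1 gender label, for a single sample."""
    return (pred - target) ** 2

def age_group_ce(probs, true_index, eps=1e-12):
    """Cross-entropy over the age segments. Exactly one segment label is 1,
    so the sum -sum(y_i * log(p_i)) reduces to -log of the probability
    assigned to the true segment (eps guards against log(0))."""
    return -math.log(probs[true_index] + eps)
```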
After extensive training and hyper-parameter tuning, a model that can fairly accurately predict the age and gender attributes of a face is obtained and is used for the analysis of face age-gender attributes.
2. Use in real-world scenes
As shown in Figure 1, in a specific embodiment the trained first and second neural network models are used to predict the age and gender of faces in data from a real-world scene. This embodiment specifically includes:
Step S1: obtain the 2D face picture to be detected from the video stream;
Step S2: perform face detection on the single 2D face picture with the trained first neural network model to obtain the face-box position and the facial landmark positions; rectify and crop the picture according to these positions to obtain a rectified, standardized 2D face picture;
Step S3: predict the age and gender attributes of the rectified, standardized 2D face picture with the trained second neural network model to obtain raw prediction values;
Step S4: determine the face's age and gender attributes from the raw prediction values and the age/gender attribute selection strategy, and output the predicted age and gender;
Step S5: output the predicted age and gender results to the back end and record them in a database for subsequent data analysis.
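Steps S1–S5 can be sketched as a small pipeline. All function and field names below (`detect`, `align`, `predict`, `gender_score`, and the 0.5 threshold in the selection strategy) are hypothetical placeholders, not the patent's actual API:

```python
def analyze_frame(frame, detector, attribute_model, database):
    """Hypothetical sketch of steps S1-S5: detect, rectify, predict, select, store."""
    results = []
    for face in detector.detect(frame):                            # S2: face box + landmarks
        aligned = detector.align(frame, face.box, face.landmarks)  # S2: rectify and crop
        raw = attribute_model.predict(aligned)                     # S3: raw predictions
        gender = "M" if raw["gender_score"] >= 0.5 else "F"        # S4: selection strategy
        record = {"age": raw["age"], "gender": gender}
        database.append(record)                                    # S5: store for analysis
        results.append(record)
    return results
```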
Figure 3 shows an example of the ages and genders predicted by the invention in a real scene. In the upper-left corner of the picture, the predicted gender (M for male, F for female) is annotated together with its score (in the range 0–1; the closer to 0, the more likely female, and the closer to 1, the more likely male) and the predicted age; the detected face box and the five landmark points (left pupil, right pupil, nose tip, left mouth corner, right mouth corner) are also drawn on the picture.
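Building the overlay string described above can be sketched as follows (the 0.5 decision threshold is our assumption; the text only states that scores near 0 indicate female and scores near 1 indicate male):

```python
def overlay_text(gender_score, age, threshold=0.5):
    """Build the top-left overlay string: gender letter, its 0-1 score, and the age."""
    if not 0.0 <= gender_score <= 1.0:
        raise ValueError("gender score must lie in [0, 1]")
    label = "M" if gender_score >= threshold else "F"  # closer to 1 means male
    return f"{label} {gender_score:.2f}, age {age:.0f}"

text = overlay_text(0.92, 28.7)  # "M 0.92, age 29"
```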
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910823680.XA (granted as CN110532970B) | 2019-09-02 | 2019-09-02 | Age and gender attribute analysis method, system, device and medium of face 2D image |
| Publication Number | Publication Date |
|---|---|
| CN110532970A | 2019-12-03 |
| CN110532970B | 2022-06-24 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910823680.XA (active, granted as CN110532970B) | Age and gender attribute analysis method, system, device and medium of face 2D image | 2019-09-02 | 2019-09-02 |
| Country | Link |
|---|---|
| CN (1) | CN110532970B (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111091109A (en)* | 2019-12-24 | 2020-05-01 | 厦门瑞为信息技术有限公司 | Method, system and device for age and gender prediction based on face images |
| CN111881747A (en)* | 2020-06-23 | 2020-11-03 | 北京三快在线科技有限公司 | Information estimation method and device and electronic equipment |
| CN112036249A (en)* | 2020-08-04 | 2020-12-04 | 汇纳科技股份有限公司 | Method, system, medium and terminal for end-to-end pedestrian detection and attribute identification |
| CN112257693A (en)* | 2020-12-22 | 2021-01-22 | 湖北亿咖通科技有限公司 | Identity recognition method and equipment |
| CN112329607A (en)* | 2020-11-03 | 2021-02-05 | 齐鲁工业大学 | Age prediction method, system and device based on facial features and texture features |
| CN112528897A (en)* | 2020-12-17 | 2021-03-19 | Oppo(重庆)智能科技有限公司 | Portrait age estimation method, Portrait age estimation device, computer equipment and storage medium |
| CN113033263A (en)* | 2019-12-24 | 2021-06-25 | 深圳云天励飞技术有限公司 | Face image age feature recognition method |
| CN113283368A (en)* | 2021-06-08 | 2021-08-20 | 电子科技大学中山学院 | Model training method, face attribute analysis method, device and medium |
| CN113796826A (en)* | 2020-06-11 | 2021-12-17 | 懿奈(上海)生物科技有限公司 | Method for detecting skin age of human face of Chinese |
| CN114038044A (en)* | 2021-11-23 | 2022-02-11 | 携程旅游信息技术(上海)有限公司 | Face gender and age recognition method, device, electronic device and storage medium |
| CN114360148A (en)* | 2021-12-06 | 2022-04-15 | 深圳市亚略特科技股份有限公司 | Automatic selling method and device, electronic equipment and storage medium |
| CN114463941A (en)* | 2021-12-30 | 2022-05-10 | 中国电信股份有限公司 | Drowning prevention alarm method, device and system |
| CN115512406A (en)* | 2021-08-31 | 2022-12-23 | 黑芝麻智能科技(上海)有限公司 | Age and Gender Estimation |
| CN116129536A (en)* | 2023-01-31 | 2023-05-16 | 北京达佳互联信息技术有限公司 | Face detection method and device, electronic equipment and storage medium |
| CN119168845A (en)* | 2024-09-12 | 2024-12-20 | 翡梧(上海)创意设计有限公司 | A real-time interactive image generation method based on portrait diffusion data model |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105516585A (en)* | 2015-11-30 | 2016-04-20 | 努比亚技术有限公司 | Apparatus and method for automatically regulating skin colors |
| CN106503623A (en)* | 2016-09-27 | 2017-03-15 | 中国科学院自动化研究所 | Facial image age estimation method based on convolutional neural networks |
| CN108052862A (en)* | 2017-11-09 | 2018-05-18 | 北京达佳互联信息技术有限公司 | Age predictor method and device |
| CN108399379A (en)* | 2017-08-11 | 2018-08-14 | 北京市商汤科技开发有限公司 | The method, apparatus and electronic equipment at facial age for identification |
| CN108596011A (en)* | 2017-12-29 | 2018-09-28 | 中国电子科技集团公司信息科学研究院 | A kind of face character recognition methods and device based on combined depth network |
| CN109447053A (en)* | 2019-01-09 | 2019-03-08 | 江苏星云网格信息技术有限公司 | A kind of face identification method based on dual limitation attention neural network model |
| CN110110663A (en)* | 2019-05-07 | 2019-08-09 | 江苏新亿迪智能科技有限公司 | A kind of age recognition methods and system based on face character |
| CN110147728A (en)* | 2019-04-15 | 2019-08-20 | 深圳壹账通智能科技有限公司 | Customer information analysis method, system, equipment and readable storage medium storing program for executing |
| CN110163114A (en)* | 2019-04-25 | 2019-08-23 | 厦门瑞为信息技术有限公司 | A kind of facial angle and face method for analyzing ambiguity, system and computer equipment |
| Title |
|---|
| SARAH N. KOHAIL: "Using artificial neural network for human age estimation based on facial images", International Conference on Innovations in Information Technology* |
| 程建峰 (Cheng Jianfeng): "Research and Application of Multi-task Face Attribute Recognition Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology* |
| Publication number | Publication date |
|---|---|
| CN110532970B (en) | 2022-06-24 |
| Publication | Publication Date | Title |
|---|---|---|
| CN110532970B (en) | Age and gender attribute analysis method, system, device and medium of face 2D image | |
| CN110163114B (en) | Method and system for analyzing face angle and face blurriness and computer equipment | |
| CN107832672B (en) | Pedestrian re-identification method for designing multi-loss function by utilizing attitude information | |
| CN109284733B (en) | Shopping guide negative behavior monitoring method based on yolo and multitask convolutional neural network | |
| US8805018B2 (en) | Method of detecting facial attributes | |
| CN108388882B (en) | Gesture recognition method based on global-local RGB-D multi-mode | |
| WO2020010785A1 (en) | Classroom teaching cognitive load measuring system | |
| CN111783576A (en) | Person re-identification method based on improved YOLOv3 network and feature fusion | |
| CN111241975B (en) | Face recognition detection method and system based on mobile terminal edge calculation | |
| CN109214298B (en) | Asian female color value scoring model method based on deep convolutional network | |
| CN111814620A (en) | Face image quality evaluation model establishing method, optimization method, medium and device | |
| CN113569639B (en) | Cross-modal pedestrian re-recognition method based on sample center loss function | |
| CN112085534B (en) | Method, system and storage medium for analysis of attention degree | |
| CN109145717A (en) | A kind of face identification method of on-line study | |
| CN116363532A (en) | Traffic target detection method for UAV images based on attention mechanism and reparameterization | |
| CN107741996A (en) | Method and device for constructing family map based on face recognition, and computing equipment | |
| CN114067438B (en) | Method and system for identifying actions of person on tarmac based on thermal infrared vision | |
| CN116563205A (en) | Wheat spike counting detection method based on small target detection and improved YOLOv5 | |
| CN107392251B (en) | Method for improving target detection network performance by using classified pictures | |
| CN110516707A (en) | An image labeling method, device and storage medium thereof | |
| CN104599291A (en) | Structural similarity and significance analysis based infrared motion target detection method | |
| CN113673534A (en) | RGB-D image fruit detection method based on fast RCNN | |
| CN113052039A (en) | Method, system and server for detecting pedestrian density of traffic network | |
| CN115147922A (en) | Monocular pedestrian detection method, system, device and medium based on embedded platform | |
| WO2020232697A1 (en) | Online face clustering method and system |
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |
| CP03 | Change of name, title or address | | Patentee changed from XIAMEN RUIWEI INFORMATION TECHNOLOGY CO.,LTD. to Reconova Technologies Co.,Ltd.; address unchanged: 361000 B1F-112, Zone C, Huaxun Building, Software Park, Xiamen Torch High-tech Zone, Xiamen, Fujian, China |