CN113361488B - Multi-scene adaptive model fusion method and face recognition system - Google Patents

Multi-scene adaptive model fusion method and face recognition system

Info

Publication number
CN113361488B
CN113361488B (application CN202110777419.8A)
Authority
CN
China
Prior art keywords
model
recognition
vector
combination
face recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110777419.8A
Other languages
Chinese (zh)
Other versions
CN113361488A (en)
Inventor
杨帆
张凯翔
朱莹
胡建国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaoshi Technology Jiangsu Co ltd
Original Assignee
Xiaoshi Technology Jiangsu Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaoshi Technology Jiangsu Co., Ltd.
Priority to CN202110777419.8A
Publication of CN113361488A
Application granted
Publication of CN113361488B
Legal status: Active (current)
Anticipated expiration: legal status pending


Abstract

The application provides a multi-scene adaptive model fusion method and a face recognition system. Face recognition models meeting the operation-speed requirement of a given target platform are first screened out and assembled into candidate model combinations. The precision and decision threshold of each combination are then evaluated in different scenes, and the combinations with high precision and strong threshold compatibility are selected to construct a fusion model. The resulting fusion model covers multiple face recognition scenes with a single threshold, can be rapidly deployed in different recognition scenarios, and improves both the recognition speed and the recognition accuracy of the system.

Description

Multi-scene adaptive model fusion method and face recognition system
Technical Field
The application relates to the technical field of face recognition, in particular to a multi-scene adaptive model fusion method and a face recognition system.
Background
Face recognition technology performs identity authentication by analyzing and processing facial visual characteristic information. Compared with other biological characteristics, facial features have the advantages of naturalness, convenience, and contactless acquisition, and therefore have great application prospects in security monitoring, identity verification, human-computer interaction, and the like. Because of this wide applicability, face recognition currently occupies an important place in computer vision.
Generally, the face recognition process is divided into two stages: face feature extraction and face similarity score calculation. Face feature extraction distills key features of a face picture into a face feature vector; similarity score calculation then measures the similarity between two face feature vectors. The higher the similarity, the more likely the two face pictures come from the same person; conversely, the lower the similarity, the more likely they come from different persons. In many cases, the feature extraction stage is of greater concern.
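For illustration only (not part of the patented method), a minimal sketch of the similarity-score stage, assuming, as in the embodiments described later, that similarity is judged by the Euclidean distance between L2-normalized feature vectors, where a smaller distance means higher similarity:

```python
import numpy as np

def similarity_score(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Euclidean distance between two L2-normalized face feature vectors.
    Smaller distance = more likely the same person."""
    a = feat_a / np.linalg.norm(feat_a)
    b = feat_b / np.linalg.norm(feat_b)
    return float(np.linalg.norm(a - b))
```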
Existing face feature extraction methods include the LBP (local binary pattern) method and its variants. These local texture methods compute block-wise statistics over the whole face picture to form per-block histogram vectors, then cascade the block histograms into the final face feature vector. Because they extract local texture features over the entire face, the resulting feature vector has a relatively large dimension and contains redundant information, and it is easily affected by facial occlusions, lighting, and background, causing recognition deviation. Existing face recognition technology is also not robust to changes of expression or pose in complex environments. Moreover, existing face recognition models tend to overfit to a single scene, i.e., they achieve high recognition precision in one scene but low precision in others.
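For orientation, a minimal sketch of the block-histogram LBP scheme described above, assuming a basic 8-neighbour operator and a 4×4 block grid (both the neighbourhood and the grid size are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def lbp_histogram(gray: np.ndarray, blocks: int = 4) -> np.ndarray:
    """8-neighbour LBP codes; per-block 256-bin histograms cascaded into one vector."""
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    hists = []
    for row_block in np.array_split(codes, blocks, axis=0):
        for block in np.array_split(row_block, blocks, axis=1):
            hists.append(np.bincount(block.ravel(), minlength=256))
    # 4 * 4 * 256 = 4096 dimensions: large and partly redundant, as noted above.
    return np.concatenate(hists)
```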
Disclosure of Invention
Aiming at the defects of the prior art, the application provides a multi-scene adaptive model fusion method and a face recognition system. Through model fusion, the application makes a single threshold compatible with multiple face recognition scenes, enables rapid deployment in different recognition scenarios, and improves both the recognition speed and the recognition accuracy of the system. The application adopts the following technical scheme.
Firstly, in order to achieve the above aim, a multi-scene adaptive model fusion method is provided, comprising the following steps:
First step: exhaustively enumerate the combinations of face recognition models in a model library that correspond to different scenes.
Second step: screen out, among the combinations of the first step, those whose operation speed meets the requirement of the target platform, and record them as model combinations.
Third step: for each model combination obtained in the second step, calculate its precision and threshold in each scene, C{a1, t1}, C{a2, t2}, ..., C{an, tn}, and then normalize each precision and each threshold to obtain the normalized precision An and the normalized threshold Tn of the model combination in each scene; here n denotes the scene index, an the precision of the model in the nth scene, and tn the threshold of the model in the nth scene.
Fourth step: calculate the weighted sum of the normalized precisions of each model combination over the scenes, ACC = w1*A1 + w2*A2 + ... + wn*An, and the variance of its normalized thresholds, VAR = var(T1, T2, T3, ..., Tn); here wn denotes the weight assigned to the normalized precision of the model combination in the nth scene.
Fifth step: from the weighted sum ACC of the normalized precisions and the variance VAR of the normalized thresholds, calculate the evaluation value of each model combination, Eval = ACC + (1 - VAR).
Sixth step: screen out the model combination with the highest evaluation value Eval and construct a fusion model from it, which concatenates the feature vectors extracted by the face recognition models in the combination and performs face recognition on the concatenated feature vector group. A sketch of the scoring in the third to sixth steps follows.
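A minimal sketch of the third to sixth steps, assuming the per-scene raw precisions and thresholds have already been measured, and using min-max normalization across combinations (the patent does not fix a normalization formula, so that choice is an assumption):

```python
import numpy as np

def select_best_combination(a: np.ndarray, t: np.ndarray, w: np.ndarray) -> int:
    """a, t: (combinations, scenes) raw precisions/thresholds; w: (scenes,) weights.
    Returns the index of the combination with the highest Eval = ACC + (1 - VAR)."""
    # Min-max normalize per scene across combinations (assumed scheme).
    A = (a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0) + 1e-12)
    T = (t - t.min(axis=0)) / (t.max(axis=0) - t.min(axis=0) + 1e-12)
    acc = A @ w                  # ACC = w1*A1 + ... + wn*An, per combination
    var = T.var(axis=1)          # VAR = var(T1, ..., Tn), per combination
    return int(np.argmax(acc + (1.0 - var)))
```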
Optionally, in the third step, the precision an of a model combination in the nth scene is obtained by calculating the ROC curve of the model combination on the test set corresponding to the nth scene, locating on that curve the recall rate or false-detection rate that satisfies the false-detection-rate requirement, and computing from it the precision an of the model combination in the nth scene.
Optionally, in the third step of the multi-scene adaptive model fusion method, the threshold tn of a model combination in the nth scene is obtained by calculating the ROC curve of the model combination on the test set corresponding to the nth scene and locating on that curve the threshold tn that satisfies the false-detection-rate requirement.
Optionally, in the multi-scene adaptive model fusion method according to any one of the above, the step of calculating the ROC curve of a model combination on the test set corresponding to the nth scene includes: step r1, extracting, with each face recognition model contained in the model combination, the model feature vector corresponding to each face image in the test set; step r2, concatenating the model feature vectors extracted in step r1 into a multi-dimensional vector; and step r3, comparing the inter-vector distances between the concatenated multi-dimensional vectors and the recognition vectors corresponding to the face images, and obtaining, for each of a series of thresholds, the false-detection rate and the recall rate at that threshold. A sketch of steps r1 to r3 follows.
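A minimal sketch of steps r1 to r3, assuming each model exposes an `extract(image)` method returning a feature vector (an assumed interface) and that same-person and different-person pair distances have been separated beforehand:

```python
import numpy as np

def fused_feature(models, image) -> np.ndarray:
    # Steps r1 + r2: extract per-model features and concatenate them.
    return np.concatenate([m.extract(image) for m in models])

def roc_points(dist_same: np.ndarray, dist_diff: np.ndarray, thresholds: np.ndarray):
    # Step r3: at each threshold, the false-detection rate is the fraction of
    # different-person distances accepted, and recall is the fraction of
    # same-person distances accepted (distance below threshold = accepted).
    fdr = np.array([(dist_diff < th).mean() for th in thresholds])
    recall = np.array([(dist_same < th).mean() for th in thresholds])
    return fdr, recall
```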
In the multi-scene adaptive model fusion method, the fusion model performs recognition processing on a face image to be recognized according to the following steps: step S1, extracting, with each face recognition model contained in the fusion model, the model feature vector corresponding to the face image to be recognized; step S2, concatenating the model feature vectors extracted in step S1 into a multi-dimensional vector; and step S3, comparing the inter-vector distance between the concatenated multi-dimensional vector and the recognition vector corresponding to each recognition object, and, when the Euclidean distance between the two vectors is smaller than the threshold Tn, outputting as the recognition result the recognition object corresponding to that recognition vector.
Optionally, in the multi-scene adaptive model fusion method according to any one of the above, the recognition vector corresponding to each recognition object is stored in the storage unit in advance according to the following steps: first, extracting, with each face recognition model contained in the fusion model, the model feature vector corresponding to the recognition object; then combining the model feature vectors extracted by the face recognition models into a one-dimensional recognition vector; and finally storing the recognition vector in the storage unit and marking the correspondence between the recognition vector and the recognition object.
Meanwhile, in order to achieve the above purpose, the application also provides a face recognition system, comprising: an image acquisition module for acquiring the face image to be recognized; a first storage unit storing a model library, in which each face recognition model corresponds to a different scene; and a second storage unit storing an executable program which, when executed by a processor, causes the processor to construct a fusion model according to the method steps of any one of the above, record the recognition vector corresponding to each recognition object according to the constructed fusion model, and perform recognition processing on the face image to be recognized according to the constructed fusion model.
Optionally, in the face recognition system according to any one of the above, the specific steps of performing recognition processing on the face image to be recognized according to the obtained fusion model include: step S1, extracting, with each face recognition model contained in the fusion model, the model feature vector corresponding to the face image to be recognized; step S2, concatenating the model feature vectors extracted in step S1 into a multi-dimensional vector; and step S3, comparing the inter-vector distance between the concatenated multi-dimensional vector and the recognition vector corresponding to each recognition object, outputting as the recognition result the recognition object corresponding to that recognition vector when the Euclidean distance between the two vectors is smaller than the threshold Tn, and otherwise judging that recognition has failed.
Optionally, the face recognition system according to any one of the preceding claims further comprises an interactive interface for receiving, in the fourth step, the setting of the weight wn of the normalized precision of the model combination in the nth scene.
Optionally, the face recognition system according to any one of the preceding claims further comprises a recognition object storage unit for storing the recognition vector corresponding to each recognition object. The recognition vector is stored as follows: first, the model feature vector corresponding to the recognition object is extracted with each face recognition model contained in the fusion model; the model feature vectors extracted by the face recognition models are then concatenated into a multi-dimensional recognition vector, which is stored in the recognition object storage unit with the correspondence between it and the recognition object marked.
Advantageous effects
According to the application, face recognition models meeting the operation-speed requirement of a given target platform are screened out and assembled into candidate model combinations; the precision and threshold of each combination are then evaluated in different scenes, and the combination with high precision and strong threshold compatibility is selected to construct a fusion model. The fusion model obtained in this way covers multiple face recognition scenes with a single threshold, can be rapidly deployed in different recognition scenarios, and improves both the recognition speed and the recognition accuracy of the system.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
The accompanying drawings are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification; they illustrate the application together with its embodiments and do not limit the application. In the drawings:
FIG. 1 is a schematic diagram of the steps of a multi-scenario adaptive model fusion method of the present application;
fig. 2 is a schematic diagram of the principle of multi-scene adaptive model fusion in the present application.
Detailed Description
In order to make the purpose and technical solutions of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings of the embodiments of the present application. It will be apparent that the described embodiments are some, but not all, embodiments of the application. All other embodiments, which can be made by a person skilled in the art without creative efforts, based on the described embodiments of the present application fall within the protection scope of the present application.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
"And/or" in the present application means that either item exists alone or both exist together.
"Connected" as used herein means either a direct connection between components or an indirect connection between components via other components.
Fig. 1 is a schematic diagram of the multi-scene adaptive model fusion method according to the present application, which constructs a fusion model suitable for multiple application scenarios in the manner shown in fig. 2. The fusion model can be installed, via an installation program, in a face recognition system equipped with an image acquisition module, so that face images captured in different working environments are effectively recognized against a unified threshold through the steps of fig. 1.
The face recognition system may be configured to include:
an image acquisition module, which can be realized by a camera or an image sensor and is used to acquire the face image to be recognized;
a first storage unit, which can be built into the face recognition system or accessed in the cloud through a communication network; a model library is stored in the first storage unit, and each face recognition model in the model library corresponds to a different scene;
a second storage unit, which can optionally be located locally at the image acquisition module or provide recognition operations through cloud interaction; an executable program is stored in the second storage unit, and when the executable program is executed by a processor local to the image acquisition module or by a cloud processor, the corresponding processor constructs a fusion model according to the following method steps, records the recognition vector corresponding to each recognition object according to the constructed fusion model, and performs recognition processing on the face image to be recognized according to the constructed fusion model:
First step: exhaustively enumerate the combinations of face recognition models in the model library that correspond to different scenes;
Second step: screen out, among the combinations of the first step, those whose face recognition models have an operation speed meeting the target platform requirement, and record them as model combinations;
Third step: calculate the precision and threshold of each model combination in each scene. Taking one model combination as an example, its precision and threshold in the n scenes can be recorded as C{a1, t1}, C{a2, t2}, ..., C{an, tn}; each precision and each threshold is then normalized to obtain the normalized precision An and the normalized threshold Tn of the model combination in each scene, where n denotes the scene index, an the precision of the model in the nth scene, and tn the threshold of the model in the nth scene;
Fourth step: to unify the threshold of the fusion model across the multiple scenes, calculate the weighted sum of the normalized precisions of each model combination over the n scenes, ACC = w1*A1 + w2*A2 + ... + wn*An, and the variance of its normalized thresholds, VAR = var(T1, T2, T3, ..., Tn); the smaller the variance of the thresholds, the stronger the cross-scene compatibility of the models; here wn denotes the weight of the normalized precision of the currently evaluated model combination in the nth scene;
Fifth step: from the weighted sum ACC of the normalized precisions and the variance VAR of the normalized thresholds, calculate the evaluation value of each model combination, Eval = ACC + (1 - VAR), which combines the precision and threshold-variance indices; the higher this combined value, the stronger the scene adaptability of the fusion model;
Sixth step: screen out the model combination with the highest evaluation value Eval and construct a fusion model from it. The fusion model concatenates the feature vectors extracted by the face recognition models in the combination, compares the concatenated feature vector group against the recognition vector corresponding to each recognition object, and performs face recognition according to whether the inter-vector distance obtained from the comparison falls below the threshold Tn. For example, if a fusion model contains 3 face recognition models and each model outputs a 512-dimensional feature vector, the fusion model concatenates the three vectors into a single 3×512 = 1536-dimensional feature vector, which is compared with the recognition vector of each recognition object to determine whether the face belongs to that object. A concrete sketch of this concatenation follows.
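Concretely, the concatenation in the example above amounts to the following (a sketch with random stand-in vectors in place of real model outputs):

```python
import numpy as np

feats = [np.random.rand(512) for _ in range(3)]  # one 512-d vector per model
fused = np.concatenate(feats)                    # 3 x 512 = 1536 dimensions
assert fused.shape == (1536,)
```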
In the above process, the precision an of any one model combination in scene n is obtained by the following steps:
First, a face recognition test set is established. In general, each face recognition scene, such as an adult scene, a child scene, or a mask-wearing scene, needs its own test set. To evaluate the precision of a face recognition model in different scenes, an ROC curve must be computed on each test set to obtain the recall rate at a given false-detection rate, together with the corresponding thresholds; each test set must therefore contain base-library images of the same persons and snapshots taken in the corresponding scene;
Then, each face recognition model contained in the model combination extracts the model feature vectors for every base-library face and every snapshot face in the test set; the per-model feature vectors are concatenated into multi-dimensional vectors, and Euclidean distances are computed between the base-library and snapshot features;
Taking n base-library images and m snapshots as an example, the above calculation yields m×n Euclidean distances. The feature comparison results between base-library faces and snapshots can then be sorted from large to small, giving the inter-vector distance between each concatenated multi-dimensional vector and the recognition vector of each face image; the false-detection rate and recall rate at different thresholds are computed according to the scene's false-detection-rate requirement, which yields the ROC curve of the model combination on the test set of each scene (a vectorized sketch of the distance computation follows these steps);
and the ROC curve is searched for the recall rate or false-detection rate that meets the false-detection-rate requirement, from which the precision an of the model combination in the nth scene is computed.
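A minimal vectorized sketch of the m×n distance computation above, assuming the fused base-library and snapshot vectors are stacked row-wise into matrices:

```python
import numpy as np

def pairwise_distances(gallery: np.ndarray, probes: np.ndarray) -> np.ndarray:
    """gallery: (n, d) fused base-library vectors; probes: (m, d) fused
    snapshot vectors. Returns the (m, n) Euclidean distance matrix."""
    diff = probes[:, None, :] - gallery[None, :, :]   # broadcast to (m, n, d)
    return np.linalg.norm(diff, axis=-1)
```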
The ROC curve serves as the index for measuring a model: its abscissa is the false-detection rate and its ordinate is the recall rate, so each point on the curve gives the pass rate at a particular false-detection rate and corresponds to a particular threshold. To control the false-detection rate of face recognition, the recall rate at a fixed false-detection rate is generally chosen as the model evaluation index.
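A minimal sketch of reading off that operating point, assuming separated same-person and different-person distance arrays and an illustrative target false-detection rate:

```python
import numpy as np

def operating_point(dist_same: np.ndarray, dist_diff: np.ndarray,
                    target_fdr: float = 1e-3):
    """Largest distance threshold whose false-detection rate stays within
    target_fdr, and the recall achieved at that threshold."""
    th = np.quantile(dist_diff, target_fdr)  # accepts ~target_fdr of impostor pairs
    recall = (dist_same < th).mean()
    return th, recall
```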
Similarly, the threshold tn for any model combination in scene n is obtained by:
calculating ROC curves of model combinations under the nth scene according to the test set corresponding to the nth scene;
and searching a threshold tn meeting the false detection rate requirement in the ROC curve.
The fusion model obtained in the manner of fig. 2 performs recognition processing on the face image to be recognized according to the following steps:
the recognition vector corresponding to each recognition object is stored in advance in a storage unit according to the following steps a-c, serving as the reference for face recognition:
Step a, respectively extracting model feature vectors corresponding to the recognition objects according to each face recognition model contained in the fusion model,
Step b, combining the model feature vectors respectively extracted by the face recognition models into one-dimensional recognition vectors,
Step c, storing the identification vector in a storage unit and marking the corresponding relation between the identification vector and the identification object;
Then, the face image to be recognized is matched against the recognition objects in the storage unit according to steps S1 to S3:
Step S1, respectively extracting model feature vectors corresponding to face images to be recognized according to each face recognition model contained in the fusion model;
s2, splicing and combining model feature vectors respectively extracted by the face recognition models in the step S1 into multidimensional vectors;
and step S3, comparing the inter-vector distances between the concatenated multi-dimensional vector and the recognition vector corresponding to each recognition object; when the Euclidean distance between the two vectors is smaller than the threshold Tn, the recognition result is output as the recognition object corresponding to that recognition vector; if the inter-vector distance exceeds the threshold for every recognition object, recognition is judged to have failed. A sketch of steps S1 to S3 follows.
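A minimal sketch of steps S1 to S3 against the stored vectors, assuming the gallery is a mapping from object id to fused recognition vector (an assumed data layout) and that the probe vector has already been fused as in steps S1 and S2:

```python
import numpy as np

def identify(fused_probe: np.ndarray, gallery: dict, tn: float):
    """gallery maps object id -> stored fused recognition vector.
    Returns the closest id if its Euclidean distance is below Tn, else None."""
    best_id, best_d = None, np.inf
    for obj_id, vec in gallery.items():
        d = np.linalg.norm(fused_probe - vec)
        if d < best_d:
            best_id, best_d = obj_id, d
    return best_id if best_d < tn else None  # None = recognition failed
```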
Considering that different weights need to be set for specific scenes when constructing the fusion model, the application may preferably also add an interactive interface to the face recognition system, which receives, in the fourth step, the setting of the weight wn of the normalized precision of the model combination in the nth scene.
The application therefore provides a cross-scene adaptive face recognition model fusion method to solve the scene-adaptability problem of face recognition models. The method fuses several face recognition models into a single fusion model suitable for multiple scenes, so that face recognition in different scenes can be performed with a unified threshold. Model fusion thus resolves the single-scene overfitting problem of face recognition models and makes the same threshold compatible with multiple usage scenarios, while the speed constraint on the fusion model ensures that the most accurate model meeting the speed requirement is obtained.
The foregoing describes embodiments of the application specifically and in detail, but it is not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the application, and all of these fall within its protection scope.

Claims (8)

(Translated from Chinese)
1. A multi-scene adaptive model fusion method, characterized in that the steps include:
a first step of exhaustively enumerating the combinations of face recognition models in a model library that correspond to different scenes;
a second step of screening out, among the combinations of the first step, those whose operation speed meets the requirement of the target platform, and recording them as model combinations;
a third step of calculating, for each model combination obtained in the second step, its precision and threshold in each scene, C{a1,t1}, C{a2,t2}, ..., C{an,tn}, and then normalizing each precision and each threshold to obtain the normalized precision An and normalized threshold Tn of the model combination in each scene; where n is the scene index, an is the precision of the model in the nth scene, and tn is the threshold of the model in the nth scene;
a fourth step of calculating the weighted sum of the normalized precisions of each model combination over the scenes, ACC = w1*A1 + w2*A2 + ... + wn*An, and the variance of the normalized thresholds of each model combination over the scenes, VAR = var(T1, T2, T3, ..., Tn); where wn is the weight of the normalized precision of the model combination in the nth scene;
a fifth step of calculating the evaluation value of each model combination, Eval = ACC + (1 - VAR), from the weighted sum ACC of the normalized precisions and the variance VAR of the normalized thresholds;
a sixth step of screening out the model combination with the highest evaluation value Eval and constructing a fusion model from it, so as to concatenate the feature vectors extracted by the face recognition models in the combination and perform face recognition on the concatenated feature vector group;
wherein, in the third step, the precision an of a model combination in the nth scene is obtained by: calculating the ROC curve of the model combination on the test set corresponding to the nth scene; finding, on the ROC curve, the recall rate or false-detection rate that meets the false-detection-rate requirement, and computing the precision an of the model combination in the nth scene;
wherein the step of calculating the ROC curve of the model combination on the test set corresponding to the nth scene includes:
step r1, extracting, with each face recognition model contained in the model combination, the model feature vector corresponding to each face image in the test set;
step r2, concatenating the model feature vectors extracted in step r1 into a multi-dimensional vector;
step r3, comparing the inter-vector distances between the concatenated multi-dimensional vectors and the recognition vectors corresponding to the face images, and obtaining the false-detection rate and recall rate at each of a set of different thresholds.
2. The multi-scene adaptive model fusion method of claim 1, wherein, in the third step, the threshold tn of a model combination in the nth scene is obtained by: calculating the ROC curve of the model combination on the test set corresponding to the nth scene; finding, on the ROC curve, the threshold tn that meets the false-detection-rate requirement.
3. The multi-scene adaptive model fusion method of claim 2, wherein the fusion model performs recognition processing on a face image to be recognized according to the following steps: step S1, extracting, with each face recognition model contained in the fusion model, the model feature vector corresponding to the face image to be recognized; step S2, concatenating the model feature vectors extracted in step S1 into a multi-dimensional vector; step S3, comparing the inter-vector distance between the concatenated multi-dimensional vector and the recognition vector corresponding to each recognition object, and, when the Euclidean distance between the two vectors is smaller than the threshold Tn, outputting as the recognition result the recognition object corresponding to that recognition vector.
4. The multi-scene adaptive model fusion method of claim 3, wherein the recognition vector corresponding to each recognition object is stored in advance in a storage unit according to the following steps: first, extracting, with each face recognition model contained in the fusion model, the model feature vector corresponding to the recognition object; then, combining the model feature vectors extracted by the face recognition models into a one-dimensional recognition vector; storing the recognition vector in the storage unit and marking the correspondence between the recognition vector and the recognition object.
5. A face recognition system, characterized by comprising: an image acquisition module for acquiring the face image to be recognized; a first storage unit storing a model library, each face recognition model in the model library corresponding to a different scene; a second storage unit storing an executable program which, when executed by a processor, causes the processor to construct a fusion model according to the method steps of any one of claims 1 to 4, record the recognition vector corresponding to each recognition object according to the constructed fusion model, and perform recognition processing on the face image to be recognized according to the constructed fusion model.
6. The face recognition system of claim 5, wherein the specific steps of performing recognition processing on the face image to be recognized according to the obtained fusion model include: step S1, extracting, with each face recognition model contained in the fusion model, the model feature vector corresponding to the face image to be recognized; step S2, concatenating the model feature vectors extracted in step S1 into a multi-dimensional vector; step S3, comparing the inter-vector distance between the concatenated multi-dimensional vector and the recognition vector corresponding to each recognition object; when the Euclidean distance between the two vectors is smaller than the threshold Tn, outputting as the recognition result the recognition object corresponding to that recognition vector; otherwise, judging that recognition has failed.
7. The face recognition system of claim 6, further comprising an interactive interface for receiving, in the fourth step, the setting of the weight wn of the normalized precision of the model combination in the nth scene.
8. The face recognition system of claim 7, further comprising a recognition object storage unit for storing the recognition vector corresponding to each recognition object; the recognition vector is stored by the following steps: first, extracting, with each face recognition model contained in the fusion model, the model feature vector corresponding to the recognition object; then, concatenating the model feature vectors extracted by the face recognition models into a multi-dimensional recognition vector; storing the multi-dimensional recognition vector in the recognition object storage unit and marking the correspondence between it and the recognition object.
CN202110777419.8A | Priority/Filing date: 2021-07-09 | Multi-scene adaptive model fusion method and face recognition system | Active | CN113361488B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110777419.8A | 2021-07-09 | 2021-07-09 | Multi-scene adaptive model fusion method and face recognition system (CN113361488B, en)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110777419.8A | 2021-07-09 | 2021-07-09 | Multi-scene adaptive model fusion method and face recognition system (CN113361488B, en)

Publications (2)

Publication Number | Publication Date
CN113361488A (en) | 2021-09-07
CN113361488B (en) | 2025-05-06

Family

Family ID: 77538799

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110777419.8A | Multi-scene adaptive model fusion method and face recognition system (CN113361488B, Active) | 2021-07-09 | 2021-07-09

Country Status (1)

Country | Link
CN (1) | CN113361488B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114863359A (en)* | 2022-04-14 | 2022-08-05 | 创新奇智(成都)科技有限公司 | Multi-scene detection method, apparatus, electronic device, and computer-readable storage medium
CN115880753B (en)* | 2022-11-30 | 2025-08-22 | 中国工商银行股份有限公司 | Face recognition processing method and device
CN119049215B (en)* | 2024-10-29 | 2025-03-07 | 安徽省川佰科技有限公司 | Fire disaster identification alarm method based on AI image and fire disaster detector

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108197660A (en)* | 2018-01-17 | 2018-06-22 | 中国科学院上海高等研究院 | Multi-model feature fusion method/system, computer-readable storage medium, and equipment
CN110929644A (en)* | 2019-11-22 | 2020-03-27 | 南京甄视智能科技有限公司 | Heuristic-algorithm-based multi-model fusion face recognition method and device, computer system, and readable medium
CN111626303A (en)* | 2020-05-29 | 2020-09-04 | 南京甄视智能科技有限公司 | Sex and age identification method and device, storage medium, and server

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20110286628A1 (en)* | 2010-05-14 | 2011-11-24 | Goncalves Luis F | Systems and methods for object recognition using a large database
CN107330358B (en)* | 2017-05-17 | 2020-09-01 | 广州视源电子科技股份有限公司 | Backward search model integration method and device, storage equipment, and face recognition system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108197660A (en)* | 2018-01-17 | 2018-06-22 | 中国科学院上海高等研究院 | Multi-model feature fusion method/system, computer-readable storage medium, and equipment
CN110929644A (en)* | 2019-11-22 | 2020-03-27 | 南京甄视智能科技有限公司 | Heuristic-algorithm-based multi-model fusion face recognition method and device, computer system, and readable medium
CN111626303A (en)* | 2020-05-29 | 2020-09-04 | 南京甄视智能科技有限公司 | Sex and age identification method and device, storage medium, and server

Also Published As

Publication number | Publication date
CN113361488A (en) | 2021-09-07


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
CB02 | Change of applicant information
  Country or region after: China
  Address after: 210000 Longmian Avenue 568, High-tech Park, Jiangning District, Nanjing City, Jiangsu Province
  Applicant after: Xiaoshi Technology (Jiangsu) Co.,Ltd.
  Address before: 210000 Longmian Avenue 568, High-tech Park, Jiangning District, Nanjing City, Jiangsu Province
  Applicant before: NANJING ZHENSHI INTELLIGENT TECHNOLOGY Co.,Ltd.
  Country or region before: China
GR01 | Patent grant
