Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, embodiments of the technical solutions of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments described in the present application without inventive effort shall fall within the scope of the technical solutions of the present application.
Some of the concepts involved in the embodiments of the present application are described below.
Object to be detected: an object undergoing information verification; the object to be detected may refer to a user or an account of the user.
Object detector: object detectors are used for keypoint extraction, and include, but are not limited to, the Single Shot MultiBox Detector (SSD), YOLO-v3, and the like.
Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, sense the environment, acquire knowledge and use that knowledge to obtain optimal results. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is thus the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of sensing, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Artificial intelligence infrastructure technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Machine Learning (ML) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and the like. It specifically studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills, and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied throughout the various areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, teaching learning, and the like. For example, in the embodiments of the present application, a machine learning technique is adopted: after at least one sole image to be identified is acquired, a target detector is used to extract key points from the acquired at least one sole image to be identified.
It can be appreciated that the embodiments of the present application involve data related to an object to be detected, such as footprint images to be identified and candidate footprint templates. When the embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use and processing of the related data need to comply with the relevant laws, regulations and standards of the relevant countries and regions.
With the continuous development of computer technology, information verification methods based on biological characteristics have, owing to their convenience, gradually become the mainstream of information verification compared with traditional methods such as password verification and verification-code verification.
At present, information verification methods based on biological characteristics generally comprise fingerprint verification and face verification. However, when the face or the fingerprint cannot be identified, these methods cannot be used: for example, face verification is affected by face occlusion or an incomplete face, and fingerprint verification is affected by fingerprint abrasion or an incomplete fingerprint.
In the embodiments of the present application, a sole image to be identified corresponding to at least one sole of an object to be detected is acquired, a plurality of corresponding toe seam key points are extracted from the sole image to be identified, the region to be detected corresponding to the sole image to be identified is then determined according to the extracted toe seam key points, and whether the object to be detected passes identity verification is further determined based on the similarity between each region to be detected and preset candidate footprint templates.
Thus, on one hand, the footprint, as a biological feature, has the same biological uniqueness and distinguishability as features such as the human face, iris, fingerprint and palm print, so the information verification method based on sole images can ensure the accuracy of information verification, and, when facial verification or fingerprint verification cannot be performed, identity verification of the object to be detected can still be achieved according to its footprint. Compared with facial verification or fingerprint verification, the sole image is harder to steal, so verification security is improved.
On the other hand, extracting the region to be detected from the sole image to be identified improves information verification efficiency; in addition, because the region to be detected is determined according to the toe seam key points contained in the sole image to be identified, the accuracy of information verification is ensured.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are for illustration and explanation only, and are not intended to limit the present application, and the embodiments of the present application and the features of the embodiments may be combined with each other without conflict.
Fig. 1 is a schematic diagram of an application scenario provided in an embodiment of the present application. The application scenario includes at least a terminal device 101 and a server 102. The number of terminal devices 101 may be one or more, and the number of servers 102 may be one or more; the numbers of terminal devices 101 and servers 102 are not particularly limited in this application.
In this embodiment, the terminal device 101 is configured to collect, for an object to be detected, sole images to be identified corresponding to at least one sole respectively. The terminal device 101 may be a device having an image capturing function, for example, but not limited to, an internet-of-things device, a mobile phone, a computer, an intelligent home appliance, a vehicle-mounted terminal, and the like. The embodiments of the present application may be applied to various information verification scenarios, including, but not limited to, payment scenarios.
The server 102 may be a background server configured to perform identity verification on the object to be detected according to the acquired sole images to be identified corresponding to at least one sole. The server 102 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (Content Delivery Network, CDN), big data and artificial intelligence platforms. The terminal device 101 and the server 102 may be directly or indirectly connected through wired or wireless communication, which is not limited herein.
The information verification method provided in the embodiments of the present application may be applied to the terminal device 101, to the server 102, or to both the terminal device 101 and the server 102.
For the object to be detected, the terminal device 101 collects the sole image to be identified corresponding to each of at least one sole, and then sends the collected sole images to the server 102, and the server 102 performs identity verification on the object to be detected according to the received sole images to be identified. See below for the specific information verification procedure.
Referring to fig. 2, which is a flow chart of an information verification method provided in an embodiment of the present application, the method may be applied to a terminal device or a server, and the specific flow is as follows:
S201, acquiring sole images to be identified respectively corresponding to at least one sole collected for the object to be detected.
As a possible case, during image acquisition, a sole image to be identified corresponding to one sole of the object to be detected may be collected, for example, the sole image to be identified corresponding to the left foot or the sole image to be identified corresponding to the right foot. For convenience of description, hereinafter, the sole image to be identified corresponding to the left foot may also be referred to as the left foot image to be identified, and the sole image to be identified corresponding to the right foot may also be referred to as the right foot image to be identified. Correspondingly, when executing S201, the sole image to be identified corresponding to the sole on one side is acquired.
For example, referring to fig. 3, if the sole image to be identified corresponding to the collected sole is the left foot image to be identified of the object to be detected in the image collection process, then in executing S201, the left foot image to be identified collected for the object to be detected is obtained.
For another example, referring to fig. 3, if the sole image to be identified corresponding to the collected sole on one side is the right foot image to be identified of the object to be detected in the process of image collection of the object to be detected, then the right foot image to be identified collected for the object to be detected is obtained when S201 is executed.
As another possible case, in the process of image acquisition, the sole image to be identified corresponding to the left foot and the sole image to be identified corresponding to the right foot of the object to be identified may be acquired simultaneously. Accordingly, in executing S201, as a possible implementation manner, a sole image to be identified corresponding to the sole of the single side may be acquired, that is, a left foot image to be identified is acquired, or a right foot image to be identified is acquired. As another possible implementation manner, a left foot image to be identified and a right foot image to be identified are acquired.
S202, extracting key points from the acquired at least one sole image to be identified to obtain a plurality of toe seam key points corresponding to the at least one sole image to be identified, where each toe seam key point represents a connection point between one toe seam and the front sole.
Referring to fig. 4, each sole image to be identified has four toe seam key points: toe seam key points 1, 2, 3 and 4. Toe seam key point 1 is the connection point between the front sole and the toe seam between the big toe and the second toe, toe seam key point 2 is the connection point between the front sole and the toe seam between the second toe and the middle toe, toe seam key point 3 is the connection point between the front sole and the toe seam between the middle toe and the fourth toe, and toe seam key point 4 is the connection point between the front sole and the toe seam between the fourth toe and the little toe.
In the embodiments of the present application, the types and number of the extracted toe seam key points can be set according to the practical application, as long as the extracted toe seam key points allow the region to be detected to be located in the central area of the sole. Herein, in order to improve detection accuracy, the extracted toe seam key points are: toe seam key point 1, toe seam key point 2, and toe seam key point 3.
In order to improve detection efficiency, in the embodiments of the present application, a machine learning model may be used to extract the key points. Specifically, the acquired at least one sole image to be identified may be input to the target detector to obtain the plurality of toe seam key points corresponding to the at least one sole image to be identified. The target detector can be trained on sample sole images.
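By way of a minimal, non-limiting illustration of this interface, keypoint extraction could be invoked as sketched below. The class name, method name and stub return values are hypothetical placeholders; a real implementation would run inference with a trained detector (e.g., an SSD- or YOLO-v3-style model with a keypoint head).

```python
# Hypothetical sketch of the keypoint-extraction interface described above.
# The class/method names and fixed outputs are assumptions for illustration;
# real code would run trained-model inference on the sole image.
class TargetDetector:
    def extract_keypoints(self, sole_image):
        # A trained model would infer toe-seam keypoints from sole_image;
        # this stub returns three fixed (x, y) keypoints.
        return [(120, 80), (150, 70), (180, 75)]

detector = TargetDetector()
keypoints = detector.extract_keypoints(sole_image=None)
print(len(keypoints))  # three toe-seam keypoints, as described in S202
```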
If the obtained sole image to be identified is a sole image to be identified corresponding to the sole on one side, the sole image to be identified corresponding to the sole on one side can be input into the target detector, and a plurality of toe seam key points corresponding to the sole image to be identified on one side are output.
For example, referring to fig. 5, assume that the acquired sole image to be identified corresponding to the sole on one side is a right foot image to be identified collected for the object to be detected. The right foot image to be identified is input into the target detector to obtain toe seam key point A, toe seam key point B and toe seam key point C corresponding to the right foot image to be identified, where toe seam key point A is the connection point between the front sole and the toe seam between the big toe and the second toe, toe seam key point B is the connection point between the front sole and the toe seam between the second toe and the middle toe, and toe seam key point C is the connection point between the front sole and the toe seam between the middle toe and the fourth toe.
If the obtained sole image to be identified is a sole image to be identified corresponding to soles at two sides, as a possible implementation manner, the sole images to be identified corresponding to the soles at two sides can be respectively input into the target detector, and a plurality of toe seam key points corresponding to the two sole images to be identified are output.
As another possible implementation manner, the sole images to be identified corresponding to the soles at both sides can be used as one image, and simultaneously input into the target detector, so as to obtain a plurality of toe seam key points corresponding to the sole images to be identified.
For example, referring to fig. 6, assume that a left foot image to be identified and a right foot image to be identified are collected for the object to be detected. Key point extraction is performed on the acquired left foot image and right foot image to obtain toe seam key point A, toe seam key point B and toe seam key point C corresponding to the right foot image to be identified, and toe seam key point E, toe seam key point F and toe seam key point G corresponding to the left foot image to be identified, where toe seam key points A and E are the connection points between the front sole and the toe seam between the big toe and the second toe, toe seam key points B and F are the connection points between the front sole and the toe seam between the second toe and the middle toe, and toe seam key points C and G are the connection points between the front sole and the toe seam between the middle toe and the fourth toe.
S203, extracting corresponding areas to be detected from at least one sole image to be identified based on a plurality of toe seam key points corresponding to the obtained at least one sole image to be identified.
In some embodiments, in order to extract the footprint image of the sole center area and thereby improve verification efficiency and accuracy, referring to fig. 7, the following steps may be adopted when S203 is performed:
S2031, determining the sole center point corresponding to each sole image to be identified based on the obtained plurality of toe seam key points corresponding to each sole image to be identified.
Since the process of extracting the corresponding region to be detected from the left foot image to be identified is the same as that for the right foot image to be identified, the extraction of the region to be detected corresponding to the right foot image to be identified is taken as an example below.
First case: the number of extracted toe seam key points is greater than two.
In the first case, the plurality of toe seam key points corresponding to the right foot image to be identified comprise at least three key points, and any three of the at least three key points serve as a first key point, a second key point and a third key point respectively, where the second key point is located between the first key point and the third key point.
Referring to fig. 8, when S2031 is executed, the following steps may be adopted:
S20311, determining a second distance corresponding to each of the at least one sole image to be identified based on a first distance between the first key point and the third key point corresponding to each sole image to be identified and a preset first scaling, where the second distance represents the distance between the corresponding second key point and the sole center point.
In order to quickly locate the sole center point, in the embodiment of the present application, a rectangular coordinate system may be constructed based on the first key point, the second key point, and the third key point. Specifically, a first coordinate axis is determined based on the first key point and the third key point, and then a second coordinate axis perpendicular to the first coordinate axis is determined based on the second key point.
For example, referring to fig. 9, the first key point is a toe seam key point a, the second key point is a toe seam key point B, the third key point is a toe seam key point C, a first coordinate axis is constructed according to the toe seam key point a and the toe seam key point C, the first coordinate axis is an X axis, then, a second coordinate axis perpendicular to the X axis is determined according to the toe seam key point B, and the second coordinate axis is a Y axis.
Denoting the first scaling by α, the first distance between toe seam key point A and toe seam key point C by AC, and the second distance between toe seam key point B and the sole center point by BD, the value of BD can be calculated by the following formula: BD = α × AC.
Assuming that the AC has a value of 2 cm and α has a value of 1.5, BD has a value of 3 cm.
S20312, determining the sole center point in the sole center area corresponding to each sole image to be identified based on the second distance corresponding to each sole image to be identified.
For example, referring to fig. 9, the center point of the sole corresponding to the right foot image to be identified is point D.
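The construction above (X axis along AC, Y axis through B, and BD = α × AC) can be sketched with plain vector arithmetic. This is a minimal sketch, with one assumption made explicit: the perpendicular direction chosen below is taken to point from the toe line toward the heel.

```python
import math

def sole_center(A, B, C, alpha=1.5):
    """Locate the sole center point D from three toe-seam keypoints.

    A and C are the outer keypoints defining the X axis; B lies between
    them. D sits at distance BD = alpha * AC from B along the axis
    perpendicular to AC (assumed here to point toward the heel).
    """
    ax, ay = A
    bx, by = B
    cx, cy = C
    acx, acy = cx - ax, cy - ay
    ac = math.hypot(acx, acy)          # first distance AC
    ux, uy = acx / ac, acy / ac        # unit vector along the X axis
    nx, ny = -uy, ux                   # perpendicular (Y-axis) direction
    bd = alpha * ac                    # second distance BD = alpha * AC
    return (bx + bd * nx, by + bd * ny)

# Matching the numbers above: AC = 2, alpha = 1.5, hence BD = 3.
print(sole_center((0, 0), (1, 0), (2, 0)))  # (1.0, 3.0)
```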
Second case: the number of extracted toe seam key points is two.
In the second case, the two toe seam key points may be regarded as a first key point and a third key point in the first case, and the midpoint between the two toe seam key points may be regarded as a second key point, so that the sole center point may be determined by adopting the manner in the first case, which is not described herein.
S2032, extracting corresponding areas to be detected from at least one sole image to be identified based on the determined at least one sole center point.
Specifically, when S2032 is executed, either of the following two manners may be adopted, but the application is not limited thereto:
Mode one: based on a specified detection range, extract the corresponding region to be detected from each of the at least one sole image to be identified, with the sole center point corresponding to that sole image as the center.
For example, referring to fig. 10, the sole center point is point D, the specified detection range is a square area of a set side length, and the region to be detected 12 is extracted from the right foot image to be identified with point D as the center.
In this way, in the embodiment of the application, the area to be detected can be rapidly determined through the specified detection range, so that the detection speed is improved.
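A minimal sketch of mode one follows, assuming the image is a row-major 2-D array and the detection range is a square of fixed side length; boundary handling near image edges is omitted for brevity, and the function name is an illustrative assumption.

```python
def crop_square(image, center, side):
    """Extract a square region of the given side length centered at `center`.

    `image` is a row-major 2-D array (a list of rows); `center` is (x, y)
    in pixel coordinates. Edge clipping is omitted for brevity.
    """
    cx, cy = int(center[0]), int(center[1])
    half = side // 2
    return [row[cx - half:cx + half] for row in image[cy - half:cy + half]]

# 6x6 image whose pixel value encodes its (row, col) position.
img = [[(r, c) for c in range(6)] for r in range(6)]
patch = crop_square(img, center=(3, 3), side=4)
print(len(patch), len(patch[0]))  # 4 4
```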
Mode two: based on the first distance corresponding to each of the at least one sole image to be identified, the preset second scaling and the specified region shape, the corresponding region to be detected is extracted from the at least one sole image to be identified by taking the sole center point corresponding to each of the at least one sole image to be identified as the center.
It should be noted that, in the embodiments of the present application, there may be one or more second scaling parameters, and their number may be set according to the specified region shape.
Taking a rectangle as the specified region shape as an example, the second scaling may include two parameters: parameter 1 for indicating the length of the rectangle and parameter 2 for indicating the width of the rectangle.
For example, referring to fig. 11, the sole center point is point D and the value of AC is 2 cm. Denoting parameter 1 by β1 and parameter 2 by β2, the region to be detected 12 is extracted from the right foot image to be identified with point D as the center, the length of the region to be detected 12 being β1 × AC centimeters and its width being β2 × AC centimeters.
In this way, in the embodiments of the application, the region to be detected is determined through the distance between the toe seam key points in the sole image to be identified and can be accurately positioned, thereby improving detection accuracy.
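Mode two can be sketched the same way, with the rectangle's side lengths derived from the first distance AC and two second-scaling parameters. The parameter names `beta_len`/`beta_wid` stand in for parameter 1 and parameter 2, and their default values here are illustrative assumptions, not values from the application.

```python
def crop_rect(image, center, ac, beta_len=1.0, beta_wid=0.8):
    """Extract a rectangle of length beta_len * AC and width beta_wid * AC
    centered at `center` (mode two). beta_len and beta_wid stand in for
    parameter 1 and parameter 2; the values used here are illustrative.
    """
    cx, cy = int(center[0]), int(center[1])
    half_w = int(beta_len * ac) // 2   # half of the rectangle length
    half_h = int(beta_wid * ac) // 2   # half of the rectangle width
    return [row[cx - half_w:cx + half_w]
            for row in image[cy - half_h:cy + half_h]]

img = [[0] * 20 for _ in range(20)]
patch = crop_rect(img, center=(10, 10), ac=10, beta_len=1.0, beta_wid=0.8)
print(len(patch), len(patch[0]))  # 8 10
```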
S204, determining a matching result based on the similarity between each preset candidate footprint template and at least one extracted region to be detected.
The candidate footprint templates include candidate footprint template 1, candidate footprint template 2, candidate footprint template 3, …, and candidate footprint template n, where n is a positive integer.
The following description will be made with respect to the case of acquiring the sole image to be identified corresponding to the sole on one side and the case of acquiring the sole image to be identified corresponding to each sole on both sides, respectively.
Case 1: sole images to be identified corresponding to the soles on both sides are acquired, i.e., a left foot image to be identified and a right foot image to be identified are acquired simultaneously.
As a possible implementation manner, it may be determined in order, for candidate footprint template 1, candidate footprint template 2, candidate footprint template 3, …, candidate footprint template n, whether the candidate footprint template matches at least one of the two regions to be detected.
Specifically, referring to fig. 12, the following steps may be adopted:
s2041, obtaining a candidate footprint template i. The candidate footprint template i is the ith candidate footprint template in the candidate footprint templates.
It should be noted that, in the embodiment of the present application, each candidate footprint template may be a region to be detected corresponding to each sample footprint image of each candidate user. The extraction method of the region to be detected of the sample sole image is the same as the extraction method of the region to be detected corresponding to the sole image to be identified, and is not repeated here.
The candidate footprint template i may also be a feature vector corresponding to the region to be detected of a sample footprint image, where the feature vector may be extracted by a feature extraction model.
S2042, determining a first similarity between the candidate footprint template i and the region to be detected corresponding to one of the two acquired sole images to be identified.
In this embodiment of the present application, a feature extraction model may be used to extract features of a region to be detected corresponding to a sole image to be identified, and the first similarity is calculated according to the extracted feature vector. The first similarity may be expressed by, but is not limited to, cosine distance, euclidean distance, and the like.
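As a minimal sketch of the cosine-distance option mentioned above (feature extraction itself is model-dependent and omitted), similarity between two extracted feature vectors can be computed as:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors:
    1.0 means identical direction, 0.0 means orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```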
S2043, judging whether the first similarity is larger than a first similarity threshold, if so, executing S2044, otherwise, executing S2045.
S2044, the matching result represents that the candidate footprint template i is matched with at least one region to be detected.
S2045, judging whether the first similarity is larger than a second similarity threshold, if yes, executing S2046, otherwise, executing S2048. Wherein the first similarity threshold is greater than the second similarity threshold.
S2046, determining a second similarity between the candidate footprint template i and the region to be detected corresponding to the other sole image to be identified.
S2047, judging whether the second similarity is larger than a third similarity threshold, if so, executing S2044, otherwise, executing S2048. The third similarity threshold may be the same as the first similarity threshold or may be different from the first similarity threshold.
S2048, the matching result represents that the candidate footprint template i is not matched with the two areas to be detected.
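The branch logic of S2042–S2048 can be sketched as follows. The threshold values are illustrative placeholders, with the only constraint taken from the text being that the first threshold exceeds the second; the third may or may not equal the first.

```python
def match_template(similarity, template, region_a, region_b,
                   t1=0.90, t2=0.80, t3=0.90):
    """Two-stage matching of one candidate footprint template against the
    two regions to be detected (S2042-S2048). Requires t1 > t2; threshold
    values here are illustrative. `similarity` is any scoring function.
    """
    s1 = similarity(template, region_a)      # S2042
    if s1 > t1:                              # S2043
        return True                          # S2044: matched
    if s1 > t2:                              # S2045: borderline case
        s2 = similarity(template, region_b)  # S2046: check the other sole
        if s2 > t3:                          # S2047
            return True                      # S2044: matched
    return False                             # S2048: no match

# Toy similarity: pre-computed scores keyed by region.
scores = {"L": 0.85, "R": 0.95}
print(match_template(lambda t, r: scores[r], "T", "L", "R"))  # True
```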
Therefore, once a candidate footprint template matching at least one region to be detected is identified, the object to be detected is determined to pass identity verification, which improves information verification efficiency.
As another possible implementation manner, for all candidate footprint templates, after the first similarity between each candidate footprint template and the region to be detected corresponding to one sole image to be identified is determined, it is judged whether there is a candidate footprint template whose first similarity is greater than the first similarity threshold; if so, the matching result represents that there is a candidate footprint template matching at least one region to be detected.
Otherwise, for each candidate footprint template whose first similarity is greater than the second similarity threshold but not greater than the first similarity threshold, whether there is a candidate footprint template matching at least one region to be detected is judged according to the second similarity between that candidate footprint template and the region to be detected corresponding to the other sole image to be identified; the specific matching process is not repeated here.
Case 2: a sole image to be identified corresponding to the sole on one side is acquired, i.e., a left foot image to be identified or a right foot image to be identified is acquired.
In this case, the process of determining the matching result according to the similarity between each preset candidate footprint template and the region to be detected corresponding to the sole image to be identified of the single sole is similar to S2041–S2044, and is not described here again.
S205, if the matching result represents that there is a candidate footprint template matching at least one region to be detected, determining that the object to be detected passes identity verification.
For example, if candidate footprint template 1 matches the region to be detected corresponding to the right foot image to be identified, it is determined that the object to be detected passes identity verification; since candidate footprint template 1 is the candidate footprint template corresponding to user A, the object to be detected is user A.
In some embodiments, in order to improve the accuracy of information verification and information security, the matching result is determined based on the similarity between each preset candidate footprint template and the at least one extracted region to be detected only after liveness detection has been performed on the image to be detected and the image to be detected has been determined to be a live image.
Specifically, after the regions to be detected corresponding to the at least one sole image to be identified are extracted and before similarity matching is performed, liveness detection may be performed on the at least one sole image to be identified to obtain a liveness detection result; if the liveness detection result represents that any sole image to be identified in the at least one sole image to be identified is a live image, the object to be detected is determined to be a living body.
For example, the at least one sole image to be identified may be input into a preset liveness detection model to obtain the liveness detection result. The liveness detection model may employ, but is not limited to, an infrared-camera-based liveness detection algorithm.
For example, the left foot image to be identified and the right foot image to be identified are input into the preset liveness detection model to obtain a liveness detection result, and the liveness detection result represents that both the left foot image to be identified and the right foot image to be identified are live images.
In the following, a payment scenario is taken as an example, and the present application is described with reference to a specific embodiment.
Referring to fig. 13a, in the payment scenario, the terminal device is a footprint recognition device disposed on the ground; when a user is in a hotel room, the footprint recognition device can collect the user's sole images to be identified.
Referring to fig. 13b, the interaction procedure between the terminal device and the server is as follows:
s1301, the terminal equipment collects sole images to be identified, which correspond to soles on two sides of the object to be detected.
S1302, the terminal equipment sends the acquired two sole images to be identified to the server.
S1303, the server performs information verification on the object to be detected according to the two acquired sole images to be identified, and, upon determining that the object to be detected passes identity verification, takes the candidate footprint template matching at least one region to be detected as the target footprint template. For the process of performing information verification on the object to be detected according to the two acquired sole images to be identified, see S201–S205.
S1304, the server takes the candidate object corresponding to the target footprint template as the target object, and acquires the payment credential corresponding to the target object. The payment credential may be, for example, a two-dimensional code or a bar code.
S1305, the server sends the payment credential corresponding to the target object to the terminal device.
S1306, the terminal device completes payment according to the received payment credential.
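As a non-limiting sketch, the interaction of S1301-S1306 can be modeled with plain functions. All names, the placeholder matching logic, and the credential format are illustrative assumptions; the real matching process is the one described in S201-S205.

```python
# Server side (S1303-S1305): verify the object and return the target object's
# payment credential; `match` is a placeholder for the real similarity matching.

def match(region, template):
    return region == template  # stand-in for the process of S201-S205

def server_verify_and_issue(sole_images, templates, credentials):
    for candidate_object, template in templates.items():
        if all(match(image, template) for image in sole_images):
            return credentials[candidate_object]  # e.g. a two-dimensional code
    return None  # identity verification failed

templates = {"user_a": "sole_pattern_a"}
credentials = {"user_a": "QR:pay-user-a"}

# S1301-S1302: the terminal collects and sends two sole images to be identified.
credential = server_verify_and_issue(["sole_pattern_a", "sole_pattern_a"],
                                     templates, credentials)
print(credential)  # QR:pay-user-a -- S1306: the terminal pays with this credential
```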
Based on the same inventive concept, embodiments of the present application provide an information verification apparatus. Fig. 14 is a schematic structural diagram of the information verification apparatus 1400, which may include:
an acquiring unit 1401, configured to acquire the sole images to be identified corresponding to at least one sole, acquired for an object to be detected;
a key point extraction unit 1402, configured to extract key points of the acquired at least one sole image to be identified, to obtain a plurality of toe seam key points corresponding to the at least one sole image to be identified, where each toe seam key point is used to represent a connection point between a toe seam and a front sole;
an area extracting unit 1403, configured to extract corresponding areas to be detected from the at least one sole image to be identified, based on the obtained plurality of toe seam key points corresponding to the at least one sole image to be identified, respectively;
a matching unit 1404, configured to determine a matching result based on a similarity between each preset candidate footprint template and at least one extracted area to be detected;
a verification unit 1405, configured to determine that the object to be detected passes identity verification if the matching result indicates that there is a candidate footprint template matching the at least one area to be detected.
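The five units above can be read as one pipeline. The sketch below is an illustrative composition only; every callable passed in is a stand-in, and the interfaces are assumptions rather than the apparatus's actual implementation.

```python
# Illustrative composition of units 1401-1405 of apparatus 1400.

def verify_identity(sole_images, extract_keypoints, extract_region,
                    similarity, templates, threshold):
    regions = []
    for image in sole_images:                             # acquiring unit 1401
        keypoints = extract_keypoints(image)              # key point extraction unit 1402
        regions.append(extract_region(image, keypoints))  # area extracting unit 1403
    for template in templates:                            # matching unit 1404
        if any(similarity(region, template) > threshold for region in regions):
            return True                                   # verification unit 1405: passed
    return False

passed = verify_identity(
    ["sole"],
    extract_keypoints=lambda image: [(0, 0), (1, 0), (2, 0)],
    extract_region=lambda image, keypoints: image,
    similarity=lambda region, template: 1.0 if region == template else 0.0,
    templates=["sole"],
    threshold=0.8)
print(passed)  # True
```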
As a possible implementation manner, the area extracting unit 1403 is specifically configured to, when extracting the corresponding areas to be detected from the at least one sole image to be identified based on the obtained plurality of toe seam key points respectively corresponding to the at least one sole image to be identified:
determining sole center points corresponding to the at least one sole image to be identified based on the obtained plurality of toe seam key points corresponding to the at least one sole image to be identified;
and extracting corresponding areas to be detected from the at least one sole image to be identified based on the determined at least one sole center point.
As one possible implementation, the plurality of toe seam key points includes a first key point, a second key point, and a third key point, wherein the second key point is located between the first key point and the third key point;
The area extracting unit 1403 is specifically configured to, when determining the sole center point corresponding to the at least one sole image to be identified based on the plurality of toe seam key points corresponding to the at least one sole image to be identified, respectively:
determining a second distance corresponding to each of the at least one sole image to be identified based on a first distance between the first key point and the third key point corresponding to each of the at least one sole image to be identified and a preset first scaling, wherein the second distance is used for representing a distance between the corresponding second key point and a sole center point;
and determining a sole center point in the sole center region corresponding to the at least one sole image to be identified based on the second distance corresponding to the at least one sole image to be identified.
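As a sketch of the computation above: the first distance is measured between the first and third key points, the second distance is that distance multiplied by the preset first scaling, and the sole center point is then located at the second distance from the second key point. The application does not fix the offset direction at this point, so the sketch assumes an offset along the image's vertical axis; all names and that direction are illustrative assumptions.

```python
import math

def sole_center(first_kp, second_kp, third_kp, first_scaling):
    """Locate the sole center point from the three toe seam key points."""
    first_distance = math.dist(first_kp, third_kp)
    second_distance = first_distance * first_scaling  # distance from second key point
    # Assumed direction: straight down the image from the second key point.
    return (second_kp[0], second_kp[1] + second_distance)

center = sole_center((10, 20), (30, 18), (50, 20), first_scaling=2.0)
print(center)  # (30, 98.0): first distance 40.0, second distance 80.0
```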
As a possible implementation manner, the area extracting unit 1403 is specifically configured to, when extracting the corresponding areas to be detected from the at least one sole image to be identified based on the determined at least one sole center point, respectively:
based on the specified detection range, respectively extracting corresponding areas to be detected from the at least one sole image to be identified by taking the sole center point corresponding to each of the at least one sole image to be identified as the center; or,
based on the first distance respectively corresponding to the at least one sole image to be identified, a preset second scaling and a specified area shape, respectively extracting the corresponding areas to be detected from the at least one sole image to be identified by taking the sole center point corresponding to each of the at least one sole image to be identified as the center.
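The two extraction options above can be sketched as follows, using an axis-aligned square as the "specified area shape"; the shape choice and all names are assumptions for illustration only.

```python
def region_from_range(center, detection_range):
    """Option 1: a region of fixed size (the specified detection range),
    centered on the sole center point; returned as (x0, y0, x1, y1)."""
    cx, cy = center
    half = detection_range / 2
    return (cx - half, cy - half, cx + half, cy + half)

def region_from_scaling(center, first_distance, second_scaling):
    """Option 2: side length derived from the first distance and the preset
    second scaling, with the same specified area shape."""
    return region_from_range(center, first_distance * second_scaling)

print(region_from_range((30, 98), 40))         # (10.0, 78.0, 50.0, 118.0)
print(region_from_scaling((30, 98), 40, 1.5))  # (0.0, 68.0, 60.0, 128.0)
```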
As a possible implementation manner, the acquiring unit 1401 is specifically configured to, when acquiring the sole images to be identified corresponding to each sole of at least one side acquired for the object to be detected:
acquiring sole images to be identified, which are respectively corresponding to soles on two sides and are acquired aiming at an object to be detected;
the matching unit 1404 is specifically configured to, when determining the matching result based on the similarity between each preset candidate footprint template and the at least one extracted area to be detected:
determining a first similarity between each candidate footprint template and the area to be detected corresponding to one of the two acquired sole images to be identified;
if there is a candidate footprint template whose first similarity is greater than a first similarity threshold, the matching result indicates that there is a candidate footprint template matching the at least one area to be detected.
As a possible implementation manner, the matching unit 1404 is further configured to:
if there is, among the candidate footprint templates, at least one candidate footprint template whose first similarity is greater than a second similarity threshold and not greater than the first similarity threshold, determining a second similarity between the at least one candidate footprint template and the area to be detected corresponding to the other sole image to be identified; wherein the first similarity threshold is greater than the second similarity threshold;
and if there is, in the at least one candidate footprint template, a candidate footprint template whose second similarity is greater than a third similarity threshold, the matching result indicates that there is a candidate footprint template matching the at least one area to be detected.
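The two-stage matching above can be sketched as follows. A template whose first similarity exceeds the first threshold matches immediately; a borderline template (above the second threshold but not the first) is re-checked against the area of the other sole image using a third threshold. The similarity function and all names are stand-ins, not the application's actual matching algorithm.

```python
def match_templates(region_a, region_b, templates, similarity, t1, t2, t3):
    """Return the first candidate footprint template that matches, or None."""
    assert t1 > t2  # the first similarity threshold is greater than the second
    borderline = []
    for template in templates:
        s1 = similarity(region_a, template)
        if s1 > t1:
            return template              # matched on the first sole image alone
        if t2 < s1 <= t1:
            borderline.append(template)  # second chance on the other sole image
    for template in borderline:
        if similarity(region_b, template) > t3:
            return template              # confirmed on the other sole image
    return None

scores = {("A", "t1"): 0.65, ("B", "t1"): 0.9}  # borderline on A, strong on B
best = match_templates("A", "B", ["t1"], lambda r, t: scores.get((r, t), 0.0),
                       t1=0.8, t2=0.5, t3=0.7)
print(best)  # t1
```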
As a possible implementation manner, the area extracting unit 1403 is further configured to, after the corresponding areas to be detected are extracted from the at least one sole image to be identified based on the obtained plurality of toe seam key points respectively corresponding to the at least one sole image to be identified, and before the matching result is determined based on the similarity between each preset candidate footprint template and the extracted at least one area to be detected:
Performing living body detection on the at least one sole image to be identified to obtain a living body detection result;
and if the living body detection result indicates that each sole image to be identified in the at least one sole image to be identified is a living body image, determine that the object to be detected is a living body.
For convenience of description, the above parts are described as being functionally divided into modules (or units) respectively. Of course, the functions of each module (or unit) may be implemented in the same piece or pieces of software or hardware when implementing the present application.
The specific manner in which the respective units of the apparatus in the above embodiment perform operations has been described in detail in the embodiment of the method, and will not be described again here.
Those skilled in the art will appreciate that the various aspects of the present application may be implemented as a system, method, or program product. Accordingly, aspects of the present application may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
Having described the information verification method and apparatus of the exemplary embodiments of the present application, next, an electronic device according to another exemplary embodiment of the present application is described.
Fig. 15 is a block diagram of an electronic device 1500 according to an exemplary embodiment; the apparatus includes:
a processor 1510;
a memory 1520 for storing instructions executable by the processor 1510;
wherein the processor 1510 is configured to execute the instructions to implement the information verification method in the embodiments of the present application, such as the steps shown in fig. 2, 7, 8, or 12.
In an exemplary embodiment, a storage medium including instructions is also provided, such as the memory 1520 including instructions that can be executed by the processor 1510 of the electronic device 1500 to perform the methods described above. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, for example, a Read-Only Memory (ROM), a random access memory (Random Access Memory, RAM), a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
Based on the same inventive concept, the present application also provides a computer program product or a computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the information verification method provided in the various alternative implementations of the above embodiments.
In some possible embodiments, aspects of the information verification method provided herein may also be implemented in the form of a program product comprising a computer program for causing a computer device to perform the steps of the information verification method according to the various exemplary embodiments of the present application described herein above when the program product is run on the computer device, e.g. the computer device may perform the steps as shown in fig. 2, 7, 8 or 12.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of embodiments of the present application may take the form of a CD-ROM and include program code that can run on a computing device. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with a command execution system, apparatus, or device.
The readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with a command execution system, apparatus, or device.

While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.