US8380004B1 - Object image matching and applications thereof - Google Patents


Info

Publication number
US8380004B1
Authority
US
United States
Prior art keywords
object image
list
face
images
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US12/477,746
Inventor
Brian Moffat
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Priority to US12/477,746
Assigned to GOOGLE INC. Assignment of assignors interest (see document for details). Assignors: MOFFAT, BRIAN
Application granted
Publication of US8380004B1
Assigned to GOOGLE LLC. Change of name (see document for details). Assignors: GOOGLE INC.
Legal status: Expired - Fee Related
Adjusted expiration

Abstract

Embodiments of this invention relate to matching object images, such as face images. In an embodiment, a method matches object images from a set of object images to a root object image. A set of object image lists ordered according to the relative similarity of the object images is received. Each face in the set of object images is at the origin of one of the object image lists. On a computing device, at least one element from each of the object image lists is applied to an object extraction data structure. Also on a computing device, a range of object images in the object image list is determined according to elements flagged within the object extraction data structure having a particular pattern. The range of object images matches the root object image.

Description

BACKGROUND
1. Field of the Invention
This invention generally relates to image recognition.
2. Related Art
Widespread use of digital cameras has led to individuals amassing large quantities of digital photos. An individual may share digital photos via photo sharing web sites, such as the PICASAWEB site available from Google Inc. Some photo sharing sites enable a user to name an object, such as a face, that appears in one or more photos. Naming each face in a large photo collection can be tedious and time-consuming for a user.
Some photo sharing sites attempt to group similar looking faces together, enabling the user to name the group of faces. While these techniques have advantages, they tend not to be scalable. Often, a group of faces that can be matched has a maximum size, above which the accuracy of face matching diminishes.
Systems and methods are needed that match a large number of object images, such as face images.
BRIEF SUMMARY
Embodiments of this invention relate to matching object images, such as face images. In a first embodiment, a method matches object images from a set of object images to a root object image. A set of object image lists, ordered according to the relative similarity of the object images, is determined. Each face in the set of object images is at the origin of one of the object image lists. On a computing device, at least one element from each of the object image lists is applied to an object extraction data structure. Also on a computing device, a range of object images in the object image list is determined according to elements flagged within the object extraction data structure having a particular pattern. The range of object images matches the root object image.
In a second embodiment, a system matches object images from a set of object images to a root object image. The system includes a processor and a memory, coupled to the processor, that stores an object extraction module, a data structure generator module, and a shape detector module. The object extraction module receives a set of object image lists ordered according to the relative similarity of the object images. Each face in the set of object images is at the origin of one of the object image lists. The data structure generator module applies at least one element from each of the object image lists to an object extraction data structure. Finally, the shape detector module determines a range of object images in the object image list according to elements flagged within the object extraction data structure having a particular pattern. The range of object images matches the root object image.
In a third embodiment, a method determines the relative similarity of object images. On a computing device, an object similarity matrix with a first axis and a second axis is determined. In the object similarity matrix, a value at a position (x, y) represents a degree of similarity between an object image x and an object image y. On a computing device, a score is determined for each unsorted object image on the first axis. Also on a computing device, values on both axes of the similarity matrix corresponding to the object image with the highest score are swapped with values corresponding to an unsorted object image, sorting the previously unsorted object image. Finally, the scoring and swapping are repeated to sort each object image in the object similarity matrix.
In a fourth embodiment, a system determines the relative similarity of object images. The system includes a processor and a memory, coupled to the processor, that stores an object similarity module, a score evaluator module, a swap module, and an object ordering module. The object similarity module determines an object similarity matrix with a first axis and a second axis. In the object similarity matrix, a value at a position (x, y) represents a degree of similarity between an object image x and an object image y. The score evaluator module determines a score for each unsorted object image on the first axis. The swap module swaps values on both axes of the similarity matrix corresponding to the object image with the highest score determined by the score evaluator module with an unsorted object image to sort the previously unsorted object image. Finally, the object ordering module signals the score evaluator module and the swap module to repeatedly score and swap, sorting each object image in the object similarity matrix.
Further embodiments, features, and advantages of the invention, as well as the structure and operation of the various embodiments of the invention, are described in detail below with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES
Embodiments of the invention are described with reference to the accompanying drawings.
FIG. 1 is a diagram illustrating a system for matching face images according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a face ordering method, which may be used in operation of the system in FIG. 1.
FIG. 3 is a flowchart illustrating a face extraction method, which may be used in operation of the system in FIG. 1.
FIGS. 4A-F illustrate an example operation of the method in FIG. 2.
FIGS. 5A-F illustrate another example operation of the method in FIG. 2.
FIGS. 6A-E illustrate an example operation of the method in FIG. 3.
FIG. 7 illustrates an example user interface, which may be used by the system in FIG. 1.
In the drawings, like reference numbers may indicate identical or functionally similar elements.
DETAILED DESCRIPTION OF DRAWINGS
The present invention relates to matching object images, including face images. From a face with a known identity, embodiments can identify other faces having the same identity. As an advantage, some embodiments can identify any number of faces from a group of faces having any size. Thus, embodiments of the present invention offer a scalable way to match object images.
While the present invention is described herein with reference to illustrative embodiments for particular applications, it should be understood that the invention is not limited thereto. Those skilled in the art with access to the teachings provided herein will recognize additional modifications, applications, and embodiments within the scope thereof and additional fields in which the invention would be of significant utility.
Although embodiments are described with respect to face image matching for clarity, a person of skill in the art would recognize that embodiments can match images of other objects as well.
FIG. 1 illustrates a system 100 for matching face images according to an embodiment of the present invention. System 100 includes a face database 110, a face processing server 102, a matched face database 150, and a face web server 152. Face processing server 102 is coupled between face database 110 and matched face database 150. Face web server 152 is coupled to matched face database 150. A user client 170 may be coupled to face web server 152 via one or more networks 160, such as the Internet.
In general, system 100 operates as follows. Face processing server 102 receives a root face 190 and a set of unknown faces 182. Face processing server 102 may receive the set of unknown faces 182 from face database 110. Root face 190 may, for example, be a face image having a known identity. For example, a user may have entered a name corresponding to root face 190. Face processing server 102 determines one or more matched faces 188 that are a set of face images from the set of unknown faces 182 that match root face 190. For example, face processing server 102 may determine matched faces 188 that have the same identity as root face 190. Face processing server 102 stores the matched faces 188 to matched face database 150. In response to a request from user client 170, face web server 152 may recall matched faces 188 and send them to user client 170 for display. In this way, system 100 determines faces matching a root face and displays them to a user. The operation of system 100 and its various components is described in more detail below.
Face processing server 102 identifies faces from the set of unknown faces 182 that match root face 190 and returns them as matched faces 188. Face processing server 102 includes a face similarity module 104, a face ordering module 120, and a face extraction module 130. The operation of face processing server 102 is described generally below with respect to FIG. 1 and in more detail with respect to FIGS. 2 and 3.
Face processing server 102 may, for example, receive the set of unknown faces 182 or root face 190 using a SQL select statement. Alternatively, the set of unknown faces 182 or root face 190 may be pushed to face processing server 102 via, for example, a web service. These examples are merely illustrative.
Once face processing server 102 receives root face 190 and the set of unknown faces 182, face processing server 102 generally operates as follows. Face similarity module 104 determines a face similarity matrix 184 describing the relative similarity of every face image (including root face 190 and unknown faces 182) to every other face image. The term "matrix" as used herein generally describes any data structure that includes elements addressable with at least two indexes. Other data structures may be used, as would be recognized by a person of skill in the art. Row zero and column zero in the matrix may correspond to the root face, and each additional row and column may correspond to one of the unknown faces 182. Each value at position (x, y) in face similarity matrix 184 may represent a degree of similarity between a face image x and a face image y. Face similarity module 104 may determine the degree of similarity by applying any one of a number of well-known similarity algorithms. In one illustrative example, a similarity algorithm may operate by identifying a feature vector corresponding to each image and determining a Euclidean distance between the feature vectors.
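The patent leaves the similarity algorithm open. As an illustration only, the following Python sketch builds such a matrix from precomputed feature vectors, mapping Euclidean distance into a similarity value with 1/(1+d); the function name and the distance-to-similarity mapping are assumptions, not taken from the patent.

    import numpy as np

    def similarity_matrix(feature_vectors):
        """Build a face similarity matrix from per-face feature vectors.

        feature_vectors[0] is taken to be the root face, so row zero and
        column zero compare each face with the root face. The diagonal
        holds the maximum similarity value (1.0), since each face is
        compared with itself.
        """
        n = len(feature_vectors)
        sim = np.zeros((n, n))
        for x in range(n):
            for y in range(n):
                # Euclidean distance between feature vectors, mapped into
                # (0, 1]; the 1/(1+d) mapping is an illustrative choice.
                dist = np.linalg.norm(feature_vectors[x] - feature_vectors[y])
                sim[x, y] = 1.0 / (1.0 + dist)
        return sim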
Using face similarity matrix 184, face ordering module 120 determines face lists 186 ordered according to the relative similarity of the various faces. Face ordering module 120 includes a score evaluator module 122 and a swap module 124. Score evaluator module 122 determines a score based on face similarity matrix 184. Swap module 124 reorders the values in face similarity matrix 184 based on the score determined by score evaluator module 122. Face ordering module 120 controls score evaluator module 122 and swap module 124 and signals the modules to repeatedly score and swap data values to order the values in face similarity matrix 184. How the faces may be ordered is described in more detail below with respect to FIG. 2.
At certain points during the face ordering, face lists are captured and sent to face extraction module 130 as face lists 186. Each face list is an ordered list of face images. Each face list in face lists 186 may have a different face at its origin. For example, each face list may have an unknown face or the root face at its origin. Further, each face list may include every face, ordered according to the faces' relative similarity. For example, suppose R is a root face and A-F are unknown faces. There may be face lists starting with each of A-F and R. Further, each face list includes A-F and R and is ordered according to the similarity of the face at the origin to each of the other faces. Example face lists are illustrated in FIG. 6A and described further below.
Using face lists 186, face extraction module 130 identifies matched faces 188 that match root face 190. To identify matched faces 188, face extraction module 130 uses a matrix generator module 132 and a square detector module 154. Matrix generator module 132 may create and initialize a face extraction matrix. As mentioned earlier, use of a matrix is merely illustrative; other data structures may be used, as would be recognized by a person skilled in the art given this description. Matrix generator module 132 may repeatedly apply an element from each of the face lists 186 to the face extraction matrix by, for example, flagging cells in the matrix.
Square detector module 154 evaluates the cells flagged in the face extraction matrix. When the cells flagged in the face extraction matrix resemble a square positioned toward the origin of the face extraction matrix, face extraction module 130 returns the corresponding faces as matched faces 188. Other shapes or patterns may be detected aside from a square. Matched faces 188 may be, for example, the faces corresponding to the columns or rows occupied by the square detected by square detector module 154.
In an embodiment, square detector module 154 may detect a pattern by determining a range value. The portions of the face lists running to the range value may result in a maximum similarity value when applied to a similarity function. In that case, the range of face images in the face image list beginning with the root object image and running to the range value may be the matching face images.
Face processing server 102 may store matched faces 188 in matched face database 150, or face processing server 102 may send matched faces 188 directly to face web server 152 to be formatted for display on user client 170.
Each of face similarity module 104, face ordering module 120, score evaluator module 122, swap module 124, face extraction module 130, matrix generator module 132, and square detector module 154 may be implemented in hardware, software, firmware, or any combination thereof.
Face processing server 102, face web server 152, and user client 170 may be implemented on any type of computing device. Such a computing device can include, but is not limited to, a personal computer, a mobile device such as a mobile phone, a workstation, an embedded system, a game console, a television, a set-top box, or any other computing device. Further, a computing device can include, but is not limited to, a device having a processor and memory for executing and storing instructions. Software may include one or more applications and an operating system. Hardware can include, but is not limited to, a processor, memory, and a graphical user interface display. The computing device may also have multiple processors and multiple shared or separate memory components. For example, the computing device may be a clustered computing environment or server farm.
Face web server 152 may be, for example, a photo sharing site, such as a PICASAWEB site. Face web server 152 may include a web server that responds to a hypertext transfer protocol (HTTP) request with an HTTP reply. As illustrative examples, the web server may be, without limitation, Apache HTTP Server, Apache Tomcat, MICROSOFT Internet Information Server, JBoss Application Server, WEBLOGIC Application Server, or SUN Java System Web Server. The web server may serve content such as hypertext markup language (HTML), extensible markup language (XML), documents, videos, images, multimedia features, or any combination thereof. These examples are strictly illustrative and do not limit the present invention.
Face web server 152 may send content to user client 170. User client 170 may include a browser, such as a web browser, that communicates with face web server 152 to display content to a user. User client 170 may also accept input from a user and send data to face web server 152.
By communicating with face web server 152, user client 170 may display matched faces, such as matched faces 188, to a user. In this way, system 100 identifies faces in the set of unknown faces 182 that match root face 190 and displays the matched faces to the user. To identify the matched faces, system 100 executes a face ordering method and a face extraction method. A face ordering method is described with respect to FIG. 2, and a face extraction method is described with respect to FIG. 3.
Face Ordering
FIG. 2 is a flowchart illustrating a face ordering method 200. For clarity, method 200 is described with respect to illustrative examples in FIGS. 4A-F and 5A-F. However, the method should not be limited thereto. A person skilled in the art given this description would recognize other applications of face ordering method 200.
Method 200 begins by determining a face similarity matrix at step 202. An example face similarity matrix is illustrated in example 400 in FIG. 4A. FIG. 4A shows an initial face similarity matrix 402 that illustrates a degree of similarity between each of faces A-F and R. Face R may be a root face, and faces A-F may be unknown faces. In face similarity matrix 402, each cell (x, y) has a value between zero and one denoting a degree of similarity between a face x and a face y. As discussed above, the degree of similarity may be determined using known similarity functions. As shown in face similarity matrix 402, the degree of similarity between face A and face R is 0.35. As another example, the degree of similarity between face A and face D is 0.78. Also, note that the diagonal of the matrix is the maximum similarity value (in this case one) because for those cells faces are compared with themselves. As mentioned earlier, the degree of similarity may be determined by applying any one of a number of well-known similarity algorithms. In one illustrative example, a similarity algorithm may operate by identifying a feature vector corresponding to each image and determining a Euclidean distance between the feature vectors.
Similarity values for the root face are at the origin of each axis. In other words, each value having a row or column zero compares a face with the root face. As mentioned earlier, the root face may be a face image having a known identity. In the case where multiple face images have the known identity (for example multiple faces that the user has identified as being “John Doe”), the root face may be a composite face. Alternatively, the similarity values for the root face may be an aggregate (such as an average or median) of the similarity values for several known face images. For example, suppose the user had identified two face images as being “John Doe”. When determining the similarity value for face B, the face similarity function may be applied to the first known face and face B yielding a value of 0.57, and the face similarity function may be applied to the second known face and face B yielding a value of 0.67. The values 0.57 and 0.67 may be averaged to determine the similarity value between face B and the root face R. Thus, the similarity value may be 0.62.
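Where several known face images share the identity, the aggregation described above can be rendered as a small helper. This is an illustrative sketch reusing the distance-to-similarity mapping assumed earlier; root_similarity is a hypothetical name, not the patent's.

    import numpy as np

    def root_similarity(known_vectors, face_vector):
        """Aggregate similarity of one face against several known root images.

        Mirrors the example in the text, where similarities of 0.57 and
        0.67 to two "John Doe" images average to 0.62. The mean is one
        permitted aggregate; the text also allows a median.
        """
        sims = [1.0 / (1.0 + np.linalg.norm(k - face_vector))
                for k in known_vectors]
        return float(np.mean(sims))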
After the initial face similarity matrix is determined at step 202, method 200 enters into an outer and inner loop. A loop index "c" for the inner loop is set to one at step 204. At step 206, a score is determined according to a function f(0 . . . c-1, r) for each row r starting at c. Although other functions may be used, in the examples herein f(0 . . . c-1, r) for a given r is simply the sum of all the values in the columns from column 0 to column c-1 for row r. At step 208, the row and column with the maximum score are swapped with row and column c. At step 210, the value c is incremented. As illustrated by decision block 212, the inner loop may be repeated until c is greater than or equal to the number of columns. An example operation of the inner loop is illustrated in FIGS. 4B-F.
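Rendered in Python, steps 204-212 might look like the sketch below, using the example scoring function (the sum of row r's values over columns 0 through c-1). This is a minimal sketch under those assumptions, not the patent's implementation; names are illustrative.

    import numpy as np

    def sort_similarity_matrix(sim, labels):
        """Inner loop of method 200: sort faces by similarity to the origin.

        sim is a square similarity matrix whose row and column zero hold
        the origin face; labels gives the face order along both axes.
        Returns the sorted matrix and the captured face list (step 214).
        """
        sim = sim.copy()
        labels = list(labels)
        n = sim.shape[0]
        for c in range(1, n):
            scores = sim[c:, :c].sum(axis=1)       # step 206: f(0...c-1, r)
            best = c + int(np.argmax(scores))      # row with the maximum score
            sim[[c, best], :] = sim[[best, c], :]  # step 208: swap rows...
            sim[:, [c, best]] = sim[:, [best, c]]  # ...and matching columns
            labels[c], labels[best] = labels[best], labels[c]
        return sim, labels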
The first iteration of the inner loop is illustrated by a diagram 410 in FIG. 4B. At the first iteration, the loop index c is set to 1. To determine the score for each row r, all the values in the columns from column 0 to column c-1 for the row r are summed. With c being one, the sum runs from column 0 to column 0. In other words, for the first iteration, no sum is required. The score for each row is simply the value of the row at column 0. Looking to face similarity matrix 402 in FIG. 4A, the scores for the rows corresponding to faces A, B, C, D, E, and F are 0.35, 0.62, 0.43, 0.52, 0.12, and 0.17, respectively. The row corresponding to face B has the highest score. Thus, the row and column corresponding to face B are swapped with row and column 1. The resulting matrix is illustrated by a face similarity matrix 412 in FIG. 4B.
For the second iteration of the inner loop, the loop index c is incremented to 2. The second iteration is illustrated in diagrams 420 and 430 in FIGS. 4C and 4D. First, the score for each row starting at row 2 is determined as at step 206. To determine the score for each row, the sum of the values for columns 0-1 is taken for that row. For example, in row 2, the values at columns 0 and 1 are 0.35 and 0.92. The sum of 0.35 and 0.92 is 1.27. Thus, the score for row 2 is 1.27. The scores for each row are illustrated in diagram 420 at column 424.
Once the scores are determined, a face image corresponding to the row with the maximum score is determined. The face's corresponding row and column are swapped with row and column 2 at step 208. In diagram 420, row 2 has the highest score. Since step 208 would effectively swap row and column 2 with itself, no swap is necessary. This is illustrated in diagram 430 in FIG. 4D. In diagram 430, a face similarity matrix 432 has not undergone a swap.
FIGS. 4E-F illustrate a third iteration of the inner loop in diagrams 440 and 450. For the third iteration, the loop index c is incremented to 3. In diagram 440, the scores are calculated at column 444. Row 4 has the highest score. So, row and column 4 are swapped with row and column 3, as illustrated in diagram 450.
Referring back to FIG. 2, decision block 212 repeats the inner loop at steps 206-210 until the loop index c is equal to the number of columns. At that point, the faces in the matrix are sorted generally according to the degree of similarity to the root face R. So, the row and column of values corresponding to the root face R would be at row 0 and column 0 in the face similarity matrix. At step 214, the order of the faces in the matrix is captured. The face list begins with root face R. For example, the captured face list may be "R, B, C, A, D, E, F".
After capturing the face list, steps 204-214 are repeated for different face similarity matrixes having different faces at their origin. At step 218, a new face similarity matrix is determined with values corresponding to an unknown face at row and column zero. Decision block 216 repeats the process until face lists are captured having every face at an origin. An example of how steps 204-214 are repeated is illustrated in FIGS. 5A-F.
FIG. 5A shows a diagram 500 with an example face similarity matrix 502. Face similarity matrix 502 has values corresponding to an unknown face A at row and column zero.
FIGS. 5B-F illustrate iterations of the inner loop to sort the face similarity matrix. The inner loop operates in much the same way as described with respect to FIGS. 4B-F. In FIG. 5B, a row from rows 1-6 is determined having the maximum score. The corresponding row and column are swapped with row and column 1 of face similarity matrix 502. In FIG. 5C, the score f(column 0 . . . 1, row n) is evaluated for columns 0 . . . 1 and each row n, where n goes from 2-6. In FIG. 5D, the row and column with the maximum score are swapped with row and column 2. In FIG. 5E, the score f(column 0 . . . 2, row n) is evaluated for columns 0 . . . 2 and each row n, where n goes from 3-6. In FIG. 5F, the row and column with the maximum score are swapped with row and column 3. Once the face similarity matrix is sorted, the order of the faces in the matrix is captured. The captured face list begins with unknown face A. For example, the captured face list may be "A, B, E, D, R, C, F". The process is repeated for each face.
At the completion of method 200 in FIG. 2, a matrix has been sorted for each and every face. As a result, each face is at the origin of one captured face list. Example face lists are illustrated in FIG. 6A. How the captured face lists may be used to determine matching faces is illustrated in FIG. 3.
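The outer loop of steps 216-218 can then be sketched as a driver that moves each face to row and column zero of a fresh copy of the matrix and re-sorts. Again, this is an illustrative construction built on the sketch above, not code from the patent.

    def capture_face_lists(sim, labels):
        """Outer loop of method 200: capture one face list per origin face."""
        face_lists = {}
        for i in range(len(labels)):
            m = sim.copy()
            order = list(labels)
            m[[0, i], :] = m[[i, 0], :]  # step 218: put face i at row zero...
            m[:, [0, i]] = m[:, [i, 0]]  # ...and column zero
            order[0], order[i] = order[i], order[0]
            _, captured = sort_similarity_matrix(m, order)
            face_lists[order[0]] = captured
        return face_lists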
Face Extraction
FIG. 3 is a flowchart illustrating a method 300 for determining matching faces using the face lists, such as those captured in method 200 in FIG. 2. For clarity, method 300 is described with respect to examples in FIGS. 6A-E. However, method 300 should not be limited thereto.
Method 300 begins by receiving a set of face lists. Each face, including the root face and each unknown face, is at the origin of one face list in the set of face lists. An example set of face lists is illustrated in diagram 608 in FIG. 6A. Diagram 608 shows a set of face lists 602. The set of face lists 602 is reordered at 604 according to the order of faces in the root face list. This reordering step is optional and is shown here to make the remaining steps of method 300 easier to follow.
Once the face lists are received, a face extraction matrix indexed according to the root face list is initialized at step 304. FIG. 6B shows an example face extraction matrix 610 indexed according to the root face list. Each row and column corresponds to a face and is ordered according to the face list that starts with the root face: "R, B, C, A, D, E, F".
When the face extraction matrix is initialized, method 300 enters a loop. The loop index n is set to one at step 306. At step 308, the nth element of each face list is applied to the face extraction matrix. The nth element of each face list may be applied by flagging the value of the matrix that has a column corresponding to the first element of the face list and a row corresponding to the nth face of the face list.
At step 310, the flagged elements are examined to see if they resemble a particular shape. For example, the shape may be a square positioned toward the origin of the face extraction matrix. The square may have elements missing. Further, the square may have to be of a requisite size. For example, the square may have to be larger than 2 by 2. Or, in an alternative embodiment, a square smaller than or equal to the requisite size may be acceptable only if no larger square is detected.
If the shape is detected at decision block 312, then faces corresponding to the shape are returned at step 202. Otherwise, the loop index is incremented at step 314, and another iteration of steps 308 and 310 occurs. The loop repeats until either the shape is detected or the face extraction matrix fills up. If the face extraction matrix fills up prior to the shape being detected, then no face matches may be returned. In an alternative, method 300 may be repeated to look for a different shape. An example operation of method 300 is illustrated in FIGS. 6B-E.
FIG. 6B shows face extraction matrix 610 where the first element of each face list has been applied. As mentioned above, the nth element of each face list is applied by flagging the value of the matrix that has a column corresponding to the first element of the face list and a row corresponding to the nth face of the face list. So, the first element of each face list is applied by flagging the value of the matrix that has a column corresponding to the first element of the face list and a row corresponding to the first face of the face list. In other words, for each face x, the cell (x, x) is flagged. For this reason, the first iteration results in a diagonal across face extraction matrix 610. At this point, no square is detected in face extraction matrix 610, so the loop repeats for a second iteration.
The second iteration is illustrated in FIG. 6C. FIG. 6C shows face extraction matrix 620 where the second element of each face list has been applied. In face extraction matrix 620, the second element of each face list is applied by flagging the value of the matrix that has a column corresponding to the first element of the face list and a row corresponding to the second face of the face list. For example, the face list for root face R has as its second element unknown face B. Thus, the element of face extraction matrix 620 at column zero corresponding to face R and row one corresponding to face B is flagged. In another example, the face list for unknown face D has as its second element unknown face E. Thus, the element of face extraction matrix 620 at column four corresponding to unknown face D and row five corresponding to face E is flagged. After applying the second element of each face list to face extraction matrix 620, the elements flagged in face extraction matrix 620 still do not resemble a square positioned toward the matrix's origin. Thus, the loop repeats.
A third iteration of the loop is illustrated with a face extraction matrix 640 in FIG. 6D. The third element of each face list has been applied to face extraction matrix 640 in much the same way as described for previous iterations. The elements flagged in face extraction matrix 640 still do not resemble a square positioned at the matrix's origin. As a result, the loop repeats for a fourth iteration, as illustrated at a face extraction matrix 650 in FIG. 6E. In face extraction matrix 650, a shape 652 is detected. Shape 652 need not be a perfect square. In this example, shape 652 only resembles a square, as it has an element 654 that is not flagged. In other examples, an entire row or column of the shape may not be flagged.
With shape 652 detected, faces corresponding to shape 652 are returned as the faces matching root face R. In this example, shape 652 occupies columns 0-3, corresponding to faces R, B, C, and A. Thus, faces A, B, and C match root face R.
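The whole of method 300 can be sketched compactly as follows. The square test here tolerates a fixed number of missing elements and requires the square to be larger than 2 by 2, matching the example in the text; min_size and max_missing are assumed thresholds, and the function is an illustration rather than the patent's implementation.

    import numpy as np

    def extract_matches(face_lists, root_list, min_size=3, max_missing=1):
        """Flag a face extraction matrix and look for a near-square of
        flagged cells positioned toward the origin.

        face_lists maps each face to its captured face list; root_list is
        the list whose origin is the root face, and it fixes the row and
        column order of the extraction matrix.
        """
        n = len(root_list)
        index = {face: i for i, face in enumerate(root_list)}
        flags = np.zeros((n, n), dtype=bool)
        for step in range(n):
            # Step 308: apply the step-th element of each face list. The
            # column is the list's origin face; the row is its step-th face.
            for origin, lst in face_lists.items():
                flags[index[lst[step]], index[origin]] = True
            # Step 310: check for a near-square at the matrix origin,
            # largest candidate first.
            for k in range(n, min_size - 1, -1):
                missing = k * k - int(flags[:k, :k].sum())
                if missing <= max_missing:
                    return root_list[:k]  # includes the root face itself
        return []  # matrix filled without the pattern appearing

On the face lists of FIG. 6A, the fourth pass would flag a square over columns R, B, C, and A with one element missing, so the call would return those four faces, in line with the walkthrough above.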
As mentioned above, the face matching process described for FIGS. 2-3 is scalable. The matrix may be expanded to search any number of unknown faces with any number of matching faces. In contrast to some other face matching methods, the face matching process described for FIGS. 2-3 may not diminish in accuracy as the number of unknown and matching faces grows. In fact, the accuracy of the face matching process may actually improve as the pool of faces grows. This effect may be due to the way the process aggregates similarity values in scoring and sorting faces.
Example User Interface
FIG. 7 is a diagram 700 illustrating an example user interface, which may be used by system 100 in FIG. 1. The example user interface may be, for example, a user interface of a photo sharing website. In diagram 700, screenshot 702 may be a main page for the photo sharing website. By clicking on a link 712, the user may navigate to a face naming interface at a display view 704. Display view 704 includes a listing of unnamed face images. The face images may be, for example, detected and parsed out of photographs uploaded by a user.
When a user selects an unnamed face image in display view 704, the face matching algorithm described above with respect to FIGS. 2-3 may be executed to determine other matching face images. In executing the algorithm, the root face may be the face selected by the user, and the unknown faces may be the other unnamed face images that are displayed in display view 704. The matching face images may be displayed in a region 716 in a display view 706. The root face image is also displayed at a region 714. In the case where the matched face images are not accurate, the user can manually prune or grow the matched faces using another display view 708. These display views are illustrative, and other display views or user interface elements may be used, as would be apparent to a person skilled in the art given this description.
The present invention has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.
The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt for various applications such specific embodiments, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.
The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (14)

1. A method for matching object images to a root object image, comprising:
(a) determining a list of object image lists, wherein: (i) each object image in the list of object image lists is at an origin of one of the object image lists, (ii) each object image list is ordered according to the similarity of each object image in the object image list to the object image at the origin of the object image list, and (iii) an object image list ordered according to the similarity of each image to the root object image is at the origin of the list of object image lists;
(b) initializing an object extraction matrix with a column and a row corresponding to the object image list at the origin of the list of object image lists;
repeatedly for respective object image lists in the list of object image lists, until the object extraction matrix is determined to follow a particular pattern in step (d):
(c) repeatedly selecting, on a computing device, a previously unselected object image from the object image list such that the object images are selected from the object image list in an order based on a degree of similarity to the origin object image, for each selected object image:
(i) comparing, on a computing device, the selected object image to the object image at the origin of the object image list, and
(ii) flagging, on a computing device, a position (x, y) in the object extraction matrix, wherein position x corresponds to each object image from the origin of the list of object image lists and position y corresponds to the selected object image in step (c);
(d) determining whether the flagged positions in the object extraction matrix follows the particular pattern; and
(e) when the object extraction matrix is determined to follow the particular pattern, determining, on a computing device, a range of object images in the object image list at the origin of the list of object image lists, the range following the particular pattern, wherein each object image in the range of object images matches the root object image.
4. A system for matching object images from a list of object images to a root object image, comprising:
an object extraction module that:
determines a list of object image lists, wherein: (i) each object image in the list of object image lists is at an origin of one of the object image lists, (ii) each object image list is ordered according to the similarity of each object image in the object image list to the object image at the origin of the object image list, and (iii) an object image list ordered according to the similarity of each image to the root object image is at the origin of the list of object image lists, and
initializes an object extraction matrix with a column and a row corresponding to the object image list at the origin of the list of object image lists;
a data structure generator module that:
repeatedly for respective object image list in the list of object image lists, until the object extraction matrix is determined to follow a particular pattern:
(a) repeatedly selects a previously unselected object image from the object image list such that the object images are selected from the object image list in an order based on a degree of similarity to the origin object image,
for each selected object image:
(b) compares the selected object image to the object image at the origin of the object image list, and
(c) flags a position (x, y) in the object extraction matrix, wherein position x corresponds to each object image from the origin of the list of object image lists and position y corresponds to the selected object image in step (a); and
a shape detector module that:
determines whether the flagged positions in the object extraction matrix follows the particular pattern, and
when the object extraction data matrix is determined to follow the particular pattern, determines a range of object images in the object image list at the origin of the list of object image lists, the range following the particular pattern, wherein each object image in the range of object images matches the root object image,
wherein the object extraction module, data structure generator module, and shape detector module are implemented on at least one processor and memory.
7. A method for determining which object images, from a plurality of object images, are similar to a root object image from the plurality of object images, comprising:
(a) determining, on a computing device, an object similarity matrix with a first axis and a second axis such that each position x along the first axis corresponds to an object image from the plurality of object images and each position y along the second axis corresponds to an object image from the plurality of object images, wherein:
(i) an order of object images along the first axis is the same of an order of object images along the second axis,
(ii) the root object image corresponds to position 0 on the first axis and position 0 on the second axis,
(iii) a value at a position (x, y) in the object similarity matrix represents a degree of similarity between an object image x on the first axis and an object image y on the second axis;
(b) sorting the plurality of object images using the object similarity matrix, the sorting comprising:
(i) for each unsorted object image on the first axis, determining, on a computing device, a score that is calculated using the value at each position (x, y), on the second axis, that corresponds to each sorted object image on the first axis,
(ii) determining which of the unsorted object images has the highest score determined in (i),
(iii) on both the first and second axis of the object similarity matrix, swapping, on a computing device, values corresponding to the object image determined in (ii) to have the highest score with values corresponding to each sorted object image, whereby the swapping sorts the previously unsorted object image, and
(iv) repeating steps (i)-(iv) to sort each remaining unsorted object image in the object similarity matrix;
(d) determining which of the plurality of object images are similar to the root object image based on an order of the sorted plurality of object images determined in (b).
11. A system for determining the relative similarity of object images, comprising:
an object similarity module that determines an object similarity matrix with a first axis and a second axis, wherein a value at a position (x, y) in the object similarity matrix represents a degree of similarity between an object image x on the first axis and an object image y on the second axis;
a score evaluator module that determines for each unsorted object image on the first axis a score that is calculated using the value at each position (x, y), on the second axis, that corresponds to each sorted object image on the first axis, wherein each unsorted object image is determined as not having a highest score;
a swap module that swaps the previously unsorted image on the first axis corresponding to the value at each position (x, y) on the second axis with the highest score determined by the score evaluation module with each sorted object image, wherein the swapped image is a sorted image; and
an object ordering module that signals the score evaluator module and the swap module to repeatedly score and swap, sorting each remaining unsorted object image in the object similarity matrix,
wherein the object similarity module, the score evaluator module, the swap module, and the object ordering module are implemented on at least one processor and memory.
US12/477,746 | 2009-06-03 | 2009-06-03 | Object image matching and applications thereof | Expired - Fee Related | US8380004B1 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US12/477,746 | US8380004B1 (en) | 2009-06-03 | 2009-06-03 | Object image matching and applications thereof

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
US12/477,746 | US8380004B1 (en) | 2009-06-03 | 2009-06-03 | Object image matching and applications thereof

Publications (1)

Publication Number | Publication Date
US8380004B1 (en) | 2013-02-19

Family

ID=47682886

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
US12/477,746 | Expired - Fee Related | US8380004B1 (en) | 2009-06-03 | 2009-06-03 | Object image matching and applications thereof

Country Status (1)

Country | Link
US (1) | US8380004B1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5261007A (en)* | 1990-11-09 | 1993-11-09 | Visidyne, Inc. | Frequency division, energy comparison signal processing system
US7120278B2 (en)* | 2001-08-24 | 2006-10-10 | Kabushiki Kaisha Toshiba | Person recognition apparatus
US20040037474A1 (en)* | 2002-06-03 | 2004-02-26 | Omnigon Technologies Ltd. | Method of detecting, interpreting, recognizing, identifying and comparing n-dimensional shapes, partial shapes, embedded shapes and shape collages using multidimensional attractor tokens
US7274822B2 (en) | 2003-06-30 | 2007-09-25 | Microsoft Corporation | Face annotation for photo management
US20050271297A1 (en)* | 2004-05-04 | 2005-12-08 | Zbilut Joseph P | Methods using recurrence quantification analysis to analyze and generate images
US20070286531A1 (en)* | 2006-06-08 | 2007-12-13 | Hsin Chia Fu | Object-based image search system and method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Sorting [online], www.dfstermole.net, 2007 [retrieved on Jun. 21, 2012]. Retrieved from the Internet: <URL: http://web.archive.org/web/20070206174635/http://www.dfstermole.net/Pascal/hsorting.html>, pp. 1-6. *
Zhang, L., et al., "Efficient Propagation for Face Annotation in Family Albums" Proceedings of the 12th ACM International Conference on Multimedia, Oct. 10-16, 2004, 8 pages.

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US10990811B2 (en)* | 2005-09-28 | 2021-04-27 | Avigilon Patent Holding 1 Corporation | Image classification and information retrieval over wireless digital networks and the internet
US20140123040A1 (en)* | 2012-10-25 | 2014-05-01 | Sony Corporation | Information processing apparatus, method, and program
US20150023601A1 (en)* | 2013-07-19 | 2015-01-22 | Omnivision Technologies, Inc. | Robust analysis for deformable object classification and recognition by image sensors
US9444999B2 (en) | 2014-08-05 | 2016-09-13 | Omnivision Technologies, Inc. | Feature detection in image capture

Similar Documents

Publication | Title
US20220222918A1 | Image retrieval method and apparatus, storage medium, and device
US10210179B2 | Dynamic feature weighting
US9552511B2 | Identifying images using face recognition
CN110059807B | Image processing method, device and storage medium
CN106547744B | Image retrieval method and system
US8880566B2 | Assembler and method thereof for generating a complex signature of an input multimedia data element
US9031999B2 | System and methods for generation of a concept based database
JP5282658B2 | Image learning, automatic annotation, search method and apparatus
CN108334644B | Image-recognizing method and device
CN110162665B | Video searching method, computer device and storage medium
CN112101437A | Fine-grained classification model processing method based on image detection and related equipment thereof
CN106202362A | Image recommendation method and image recommendation device
CN111680183B | Object retrieval method and device, storage medium and electronic equipment
WO2022134104A1 | Systems and methods for image-to-video re-identification
CN114299128B | Multi-view positioning detection method and device
CN112258254A | Internet advertisement risk monitoring method and system based on big data architecture
US8380004B1 (en) | Object image matching and applications thereof
CN110059212A | Image search method, device, equipment and computer readable storage medium
CN112348008A | Identification method, device, terminal device and storage medium of certificate information
CN117036897B | A few-shot object detection method based on Meta RCNN
CN116628263A | Video retrieval method and device based on multiple modes, electronic equipment and storage medium
Guehairia et al. | Deep random forest for facial age estimation based on face images
CN115115825A | Method and device for detecting object in image, computer equipment and storage medium
CN112632315B | Method and device for retrieving remote sensing image
CN114267030A | End-to-end license plate detection and key point detection method and device

Legal Events

Date | Code | Title | Description

AS: Assignment
Owner name: GOOGLE INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: MOFFAT, BRIAN; REEL/FRAME: 029614/0260
Effective date: 20050801

STCF: Information on status: patent grant
Free format text: PATENTED CASE

FPAY: Fee payment
Year of fee payment: 4

AS: Assignment
Owner name: GOOGLE LLC, CALIFORNIA
Free format text: CHANGE OF NAME; ASSIGNOR: GOOGLE INC.; REEL/FRAME: 044101/0299
Effective date: 20170929

FEPP: Fee payment procedure
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS: Lapse for failure to pay maintenance fees
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH: Information on status: patent discontinuation
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP: Lapsed due to failure to pay maintenance fee
Effective date: 20210219

