Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Example one
Fig. 1 is a flowchart of an image matching method according to an embodiment of the present invention. The embodiment is applicable to matching an image based on feature points and shapes, and the method may be executed by an image matching apparatus according to an embodiment of the present invention, where the apparatus may be implemented in software and/or hardware. As shown in fig. 1, the method specifically includes the following steps:
S110, acquiring a template feature point set of a template image and a target feature point set of an image to be searched, where the feature point sets include salient corner points and edge feature points.
The template image is an image sample used for matching with other images, the template feature point set is a set of feature points with key information in the template image, such as edge information, corner information and gray information, and the template feature point set of the template image includes: template salient corner points and template edge feature points.
The image to be searched may be any image in an image library, or an image of a specified type in the image library. The target feature point set of the image to be searched includes: target salient corner points and target edge feature points.
For example, the template feature point set of the template image may be obtained by extracting template candidate corners of the template image with a corner detection algorithm, removing pseudo feature points from the template candidate corners to obtain template salient corners, extracting template edge feature points in the neighborhoods of the template salient corners with an edge detection algorithm, and determining the template feature point set of the template image from the template salient corners and the template edge feature points. Similarly, the target feature point set of the image to be searched may be obtained by extracting target candidate corners of the image to be searched with a corner detection algorithm, removing pseudo feature points from the target candidate corners to obtain target salient corners, extracting target edge feature points in the neighborhoods of the target salient corners with an edge detection algorithm, and determining the target feature point set of the image to be searched from the target salient corners and the target edge feature points. The corner detection algorithm may be a Harris corner detection algorithm, and the edge detection algorithm may be an edge detection algorithm based on the Sobel operator, which is not limited in this embodiment of the present invention.
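As a concrete illustration of the extraction just described, the following is a minimal NumPy sketch that computes Sobel gradients, a Harris-style response map, and then collects salient corners plus edge feature points in their neighborhoods. All function names, thresholds, and window sizes are illustrative assumptions, not the patent's exact implementation.

```python
import numpy as np

def sobel_gradients(img):
    """x/y gradients with 3x3 Sobel kernels (edge-replicated borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = padded[r:r + 3, c:c + 3]
            gx[r, c] = np.sum(win * kx)
            gy[r, c] = np.sum(win * ky)
    return gx, gy

def harris_response(img, k=0.04):
    """H = det(M) - k*trace(M)^2 per pixel, with the gradient covariance
    smoothed by a 3x3 box window (a stand-in for the Gaussian window)."""
    gx, gy = sobel_gradients(img)
    A, B, C = gx * gx, gy * gy, gx * gy
    box = np.ones((3, 3)) / 9.0
    def smooth(a):
        padded = np.pad(a, 1, mode="edge")
        out = np.zeros_like(a)
        for r in range(a.shape[0]):
            for c in range(a.shape[1]):
                out[r, c] = np.sum(padded[r:r + 3, c:c + 3] * box)
        return out
    A, B, C = smooth(A), smooth(B), smooth(C)
    return (A * B - C * C) - k * (A + B) ** 2

def feature_point_set(img, response_thresh, edge_thresh, radius=3):
    """Salient corners = Harris responses above a threshold; edge feature
    points = strong Sobel magnitudes inside each corner's neighborhood."""
    H = harris_response(img)
    gx, gy = sobel_gradients(img)
    mag = np.hypot(gx, gy)
    corners = [tuple(p) for p in np.argwhere(H > response_thresh)]
    edges = set()
    for (r, c) in corners:
        r0, r1 = max(0, r - radius), min(img.shape[0], r + radius + 1)
        c0, c1 = max(0, c - radius), min(img.shape[1], c + radius + 1)
        for p in np.argwhere(mag[r0:r1, c0:c1] > edge_thresh):
            edges.add((r0 + p[0], c0 + p[1]))
    return corners, sorted(edges)
```

On a synthetic image containing one bright square, the strongest responses land at the square's corners, and the edge set collects border pixels near those corners.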
Optionally, before obtaining the template feature point set of the template image and the target feature point set of the image to be searched, the method further includes:
and denoising the template image and the image to be searched.
Illustratively, a separable accelerated bilateral filter with adaptive parameter estimation is used to denoise the template image and the image to be searched. This filter is a nonlinear filter designed on the basis of the classical Gaussian filtering algorithm, and has the characteristics of being non-iterative, local, and simple.
The specific process of denoising the template image and the image to be searched may be as follows: first, a locally weighted average bilateral filtering method is used to obtain the pixel values of the denoised image, with the formula:

f(x, y) = Σ_{(i,j)∈S_{x,y}} ω(i, j)·g(i, j) / Σ_{(i,j)∈S_{x,y}} ω(i, j);

where f(x, y) is the denoised image, S_{x,y} denotes the neighborhood of the pixel (x, y), g(i, j) is each pixel in the neighborhood, and ω(i, j) is the weighting coefficient.
The weighting coefficient ω(i, j) is the product of a spatial proximity factor ω_s(i, j) and a luminance similarity factor ω_r(i, j), i.e., ω(i, j) = ω_s(i, j)·ω_r(i, j). Through the interaction of these two weighting factors, the bilateral filter both smooths the image and preserves image edges.
It should be noted that, as a further improvement of the luminance similarity factor, one-dimensional weighting factors in the horizontal and vertical directions are used in place of the weighting factor ω_r over the two-dimensional neighborhood, which effectively reduces the amount of computation without degrading performance.
The luminance similarity factor in the horizontal direction is:

ω_r^H(i) = exp(−(g(i, y) − g(x, y))² / (2σ_r²));

and the luminance similarity factor in the vertical direction is:

ω_r^V(j) = exp(−(g(x, j) − g(x, y))² / (2σ_r²));
where σ_r is a filtering parameter that has a great influence on the filtering effect and can be obtained by adaptive calculation according to the image size, with the formula as follows:
where HD = (1, −2, 1) is a high-pass filter related to the Laplacian filter, and the associated operator denotes convolution followed by a 1/2 downsampling calculation; H and W represent the height and width, respectively, of the image concerned (the template image or the image to be searched).
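The separable filtering described above can be sketched as two one-dimensional bilateral passes, horizontal then vertical. The adaptive σ_r estimate below is a hypothetical stand-in (a robust noise estimate over the HD = (1, −2, 1) response with 1/2 downsampling), since the patent's exact adaptive formula is not reproduced here.

```python
import numpy as np

def adaptive_sigma_r(img):
    """Hypothetical adaptive sigma_r: apply HD = (1, -2, 1) along rows,
    downsample by 2, and take a robust (median-based) noise estimate."""
    hd = np.array([1.0, -2.0, 1.0])
    resp = np.apply_along_axis(
        lambda row: np.convolve(row, hd, mode="valid"), 1, img.astype(float))
    resp = resp[:, ::2]  # 1/2 downsampling, as described in the text
    return max(np.median(np.abs(resp)) / 0.6745, 1.0)

def separable_bilateral(img, sigma_s=2.0, sigma_r=None, radius=3):
    """Two 1-D bilateral passes instead of one 2-D pass, mirroring the
    one-dimensional horizontal/vertical luminance factors above."""
    if sigma_r is None:
        sigma_r = adaptive_sigma_r(img)
    offs = np.arange(-radius, radius + 1)
    ws = np.exp(-(offs ** 2) / (2 * sigma_s ** 2))  # spatial proximity

    def pass_1d(a):
        out = np.empty_like(a, dtype=float)
        padded = np.pad(a.astype(float), ((0, 0), (radius, radius)),
                        mode="edge")
        for r in range(a.shape[0]):
            for c in range(a.shape[1]):
                win = padded[r, c:c + 2 * radius + 1]
                # 1-D luminance similarity factor
                wr = np.exp(-((win - a[r, c]) ** 2) / (2 * sigma_r ** 2))
                w = ws * wr
                out[r, c] = np.sum(w * win) / np.sum(w)
        return out

    h = pass_1d(img)       # horizontal pass
    return pass_1d(h.T).T  # vertical pass on the transposed result
```

Because the output is a weighted average of neighborhood pixels, a constant image passes through unchanged, which is a quick sanity check on the weights.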
S120, traversing each pixel point of the image to be searched according to the template image, and calculating the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area.
The target similarity measure may be a maximum value of the similarity measure, or may be a similarity measure greater than a preset threshold; the target similarity measure may be set according to actual requirements.
For example, traversing each pixel point of the image to be searched according to the template image and calculating the target similarity measure of the template feature point set and the target feature point set corresponding to the template image region may be performed by traversing the template image over the image to be searched starting from its upper-left corner and calculating the similarity measure pixel by pixel. Alternatively, the template image and the image to be searched may each be downsampled into pyramid layers to obtain hierarchical template images and hierarchical images to be searched, and the similarity measure of each hierarchical template image and the image to be searched at the corresponding level is calculated in turn from top to bottom, from coarse to fine, until the target similarity measure of the bottommost image of the template image and the bottommost image of the image to be searched (that is, of the original template image and the original image to be searched) is obtained.
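The pixel-by-pixel traversal can be sketched as follows. As the similarity measure it uses the average normalized dot product between template feature-point direction vectors and the gradients at the corresponding positions in the search image; this is a common shape-based measure and an assumption here, since this section has not yet fixed the exact formula.

```python
import numpy as np

def direction_similarity(tpl_pts, tpl_dirs, search_grad, top_left):
    """Average normalized dot product between template direction vectors
    and search-image gradients at the template's feature-point offsets."""
    r0, c0 = top_left
    total = 0.0
    for (r, c), (t, u) in zip(tpl_pts, tpl_dirs):
        tp, up = search_grad[r0 + r, c0 + c]
        denom = np.hypot(t, u) * np.hypot(tp, up)
        if denom > 1e-12:
            total += (t * tp + u * up) / denom
    return total / len(tpl_pts)

def traverse(search_grad, tpl_shape, tpl_pts, tpl_dirs):
    """Slide the template over every valid position; keep the best score."""
    H, W = search_grad.shape[:2]
    th, tw = tpl_shape
    best, best_pos = -np.inf, None
    for r0 in range(H - th + 1):
        for c0 in range(W - tw + 1):
            s = direction_similarity(tpl_pts, tpl_dirs, search_grad, (r0, c0))
            if s > best:
                best, best_pos = s, (r0, c0)
    return best_pos, best
```

A perfect match scores 1.0 (all direction vectors aligned); positions with no gradient support contribute nothing, so the true placement of the template stands out.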
S130, determining the position of the matched feature point according to the target similarity measurement, and displaying the matched feature point in the image to be searched.
Illustratively, if the similarity measure obtained by calculating the template feature points of the template image and the target feature points of the image to be searched is the target similarity measure, determining the target feature points as matching feature points, acquiring the position coordinates of the matching feature points, and displaying the matching feature points in the image to be searched.
According to the technical scheme of this embodiment, a template feature point set of a template image and a target feature point set of an image to be searched are acquired, where the feature point sets include salient corner points and edge feature points; each pixel point of the image to be searched is traversed according to the template image, and the target similarity measure of the template feature point set and the target feature point set corresponding to the template image region is calculated; and the positions of the matching feature points are determined according to the target similarity measure and displayed in the image to be searched. Image matching can thus be performed based on the corner information and edge feature information of the image, which solves the problem that traditional feature point matching methods have low matching accuracy for images with few feature points and a certain shape, and improves the matching accuracy for images with a certain shape.
Example two
Fig. 2a is a flowchart of an image matching method in a second embodiment of the present invention. This embodiment is optimized on the basis of the above embodiment. In this embodiment, acquiring a template feature point set of a template image includes: acquiring template feature points and a rotation angle of the template image; if the number of the template feature points is greater than a first preset number, performing pyramid downsampling layering on the template image to obtain hierarchical template images; and changing the angle of the template feature points of each hierarchical template image according to the rotation angle to obtain the template feature point set of the template image. Acquiring the target feature point set of the image to be searched includes: acquiring the image to be searched and the number of layers of the hierarchical template images corresponding to the template image; performing pyramid downsampling layering on the image to be searched according to the number of layers to obtain hierarchical images to be searched; extracting target feature points of the image to be searched at each level; and determining the target feature point set of the image to be searched according to the target feature points.
As shown in fig. 2a, the method of this embodiment specifically includes the following steps:
s210, acquiring template characteristic points and rotation angles of the template image.
The template feature points of the template image comprise template salient corner points and template edge feature points.
The rotation angle may step through 360 degrees in units of a preset angle, which may be, for example, 1 degree, 2 degrees, or 4 degrees. The rotation angle is set so that the angle of the template image can be brought into agreement with the angle of the image to be searched.
Specifically, the template feature points of the template image may be obtained by extracting template candidate corners of the template image, removing pseudo feature points to obtain template salient corners, extracting template edge feature points in the neighborhoods of the template salient corners, and determining the template feature points of the template image from the template salient corners and the template edge feature points.
Optionally, obtaining template feature points of the template image includes:
extracting template candidate angular points of the template image;
acquiring the neighborhood of the template candidate corner points;
if the template candidate corner is a maximum value point in the neighborhood, determining the template candidate corner as a template salient corner;
extracting template edge points in the neighborhood of the template salient corner points;
sampling the template edge points to obtain template edge feature points;
and determining the template feature points of the template image according to the template salient corner points and the template edge feature points.
The size of the template image may be (2m + 1) × (2m + 1), where m = 1, 2, 3, …; if the template size is 3 × 3, the neighborhood is the 8 pixels surrounding the pixel.
Illustratively, template candidate corners of a template image are extracted through a Harris corner detection algorithm, pseudo feature points are removed from the template candidate corners to obtain template salient corners, template edge points are extracted in the neighborhoods of the template salient corners through an edge detection algorithm based on the Sobel operator, and the template edge points are sampled to obtain template edge feature points; the sampling mode may be equal-interval sampling or unequal-interval random sampling. The template feature points of the template image are formed from the template salient corners and the template edge feature points. These template feature points integrate the corner features and edge features of the image, and can fully embody its corner information and edge feature information.
Illustratively, the specific steps of extracting the template candidate corners of the template image by the Harris corner detection algorithm are as follows: calculate the gradient I_x of the template image I(x, y) in the x direction and the gradient I_y in the y direction; the self-similarity of the template image after a translation (Δx, Δy) at point (x, y) can be calculated by the autocorrelation function:
where β(u, v) is a window function centered at the point (u, v), generally a Gaussian weighting function; w(x, y) is a pixel point of the template image; and M(x, y) is the gradient covariance matrix of the corner point. In terms of the gradients, M(x, y) = Σ_{(u,v)} β(u, v)·[ I_x², I_x·I_y ; I_x·I_y, I_y² ], whose smoothed entries are denoted A, B, and C below.
Let λ_1 and λ_2 be the two eigenvalues of the matrix M(x, y); the positions of flat regions, edges, and corner points in the image can be judged from the magnitudes of the eigenvalues. When actually detecting candidate corner points, the feature-point responsivity is calculated, and points whose responsivity is greater than a preset responsivity threshold are candidate corner points. The calculation formula is:
H = det M − k·(trace M)²;
det M = λ_1·λ_2 = AB − C²;
trace M = λ_1 + λ_2 = A + B;
h is the responsivity of the characteristic point, detM is a determinant of the matrix, and traceM is a trace of the matrix; k is a constant weight coefficient, generally 0.04-0.06.
Optionally, if the template candidate corner is a maximum point in a neighborhood, determining the template candidate corner as a template salient corner includes:
acquiring a first gradient of the candidate corner points of the template;
acquiring a second gradient of the template candidate corner in the target gradient direction in the neighborhood;
and if the first gradient is larger than the second gradient, the template candidate corner is a maximum value point in the neighborhood, and the template candidate corner is determined as a template salient corner.
The target gradient directions may include the horizontal, vertical, −45°, and 45° gradient directions within the neighborhood of the template candidate corner point, or may include other gradient directions.
Exemplarily, for a 3 × 3 neighborhood, acquire the first gradient of a template candidate corner and the second gradients of the template candidate corner in the horizontal, vertical, −45°, and 45° gradient directions of the neighborhood. If the gradient value of the template candidate corner is greater than the gradient values of the two pixel points along each gradient direction in the neighborhood, the template candidate corner is a maximum point in its 8-pixel neighborhood and is determined to be a salient feature point; if the gradient value of the template candidate corner is less than or equal to the gradient values of two pixel points along a gradient direction in the neighborhood, the template candidate corner is removed from the candidate corners.
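The directional maximum test can be sketched as follows. Whether the comparison is meant over all four direction pairs or only the dominant gradient direction is ambiguous in the text, so this sketch conservatively requires the candidate to beat both neighbors along every one of the four directions.

```python
def is_salient(grad_mag, r, c):
    """Keep a candidate corner only if its gradient magnitude strictly
    exceeds both neighbors along each of the four directions (horizontal,
    vertical, -45 and +45 degrees) of its 3x3 neighborhood."""
    g = grad_mag[r][c]
    directions = [((0, -1), (0, 1)),    # horizontal
                  ((-1, 0), (1, 0)),    # vertical
                  ((-1, 1), (1, -1)),   # -45 degrees
                  ((-1, -1), (1, 1))]   # +45 degrees
    for (dr1, dc1), (dr2, dc2) in directions:
        if g <= grad_mag[r + dr1][c + dc1] or g <= grad_mag[r + dr2][c + dc2]:
            return False  # not a maximum along this direction: pseudo point
    return True
```

A candidate that merely ties one of its neighbors is rejected, matching the "less than or equal" removal rule above.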
S220, if the number of the template feature points is larger than a first preset number, pyramid downsampling layering is carried out on the template image, and a hierarchical template image is obtained.
The first preset number may be set according to actual requirements, for example according to the size of the template image, and is not limited in this embodiment of the present invention.
Specifically, if the number of the template feature points is greater than a first preset number, pyramid adaptive downsampling layering is performed on the template image until the number of the template feature points is less than or equal to the first preset number, a hierarchical template image is obtained, and the number of pyramid layers at the moment, namely the number of layers of the template image, is recorded.
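A minimal sketch of the adaptive layering loop described above, using 2 × 2 mean pooling as a stand-in for the pyramid downsampling kernel; count_features, max_features, and max_layers are illustrative parameters.

```python
import numpy as np

def adaptive_pyramid(img, count_features, max_features, max_layers=6):
    """Repeatedly halve the image (2x2 mean pooling) until the feature
    count drops to max_features; return all layers, finest first."""
    layers = [img]
    while count_features(layers[-1]) > max_features and len(layers) < max_layers:
        a = layers[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        pooled = a[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        layers.append(pooled)
    return layers  # len(layers) is the recorded number of pyramid layers
```

Using the pixel count as a stand-in feature counter, a 16 × 16 image with a limit of 20 "features" yields three layers (16 × 16, 8 × 8, 4 × 4).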
S230, changing the angle of the template feature points of each hierarchical template image according to the rotation angle to obtain a template feature point set of the template image.
Specifically, the feature points of each hierarchical template image are rotated according to the rotation angle to obtain hierarchical template feature points at different angles, and these hierarchical template feature points at the different angles form the template feature point set. If the number of rotation angles is E and the number of layers of the template image is F, the template feature point set contains E × F corresponding sets of template-image feature points.
For example, the angle of the template feature points of each hierarchical template image may be changed according to the rotation angle as follows:

x′ = l·cos(α + θ), y′ = l·sin(α + θ), with l = √(x² + y²) and α = arctan(y/x);

where (x, y) are the pixel coordinates in the hierarchical template image before the angle change, (x′, y′) are the pixel coordinates after the angle change, l is the length of the introduced intermediate variable vector, α is the horizontal included angle of the intermediate variable vector, and θ is the rotation angle.
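The polar-form rotation above can be written directly as a small helper (it is equivalent to applying the standard 2-D rotation matrix):

```python
import math

def rotate_point(x, y, theta_deg):
    """Rotate a feature point about the origin by theta degrees using the
    intermediate polar form: l = sqrt(x^2 + y^2), alpha = atan2(y, x)."""
    l = math.hypot(x, y)
    alpha = math.atan2(y, x)  # atan2 handles all quadrants, unlike atan(y/x)
    theta = math.radians(theta_deg)
    return l * math.cos(alpha + theta), l * math.sin(alpha + theta)
```

For instance, rotating (1, 0) by 90° gives (0, 1), and rotating (1, 1) by 45° gives (0, √2).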
It should be noted that the construction of the template image library can be completed by storing the information of the template feature point sets in a multiply nested, efficient data structure. This data structure is the bridge between the constructed template images and the subsequent matching process. It is built by nesting unordered-map containers: the outermost layer is the pyramid level, the next layer is the rotation angle, and the innermost layer is the feature point set; the overall structure is shown in fig. 2b. Designing the data structure in this way further improves data access speed during algorithm execution and improves algorithm efficiency.
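In Python, the nested unordered maps can be sketched with plain dictionaries (pyramid level → rotation angle → feature point set); extract_points stands in for whatever feature extraction fills the innermost level.

```python
def build_template_library(num_layers, angles, extract_points):
    """Nested lookup structure: lib[layer][angle] -> feature point set.
    Python dicts play the role of the unordered-map containers."""
    lib = {}
    for layer in range(num_layers):
        lib[layer] = {angle: extract_points(layer, angle) for angle in angles}
    return lib

# During matching, access is a pair of hash lookups: lib[layer][angle].
```

Average-constant-time lookups by level and angle are what make this layout faster than re-deriving rotated feature sets during matching.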
S240, acquiring a target feature point set of the image to be searched.
Optionally, obtaining a target feature point set of an image to be searched includes:
acquiring an image to be searched and the number of layers of a hierarchical template image corresponding to the template image;
carrying out pyramid downsampling layering on the image to be searched according to the number of layers to obtain a hierarchical image to be searched;
extracting target characteristic points of the image to be searched of each level;
and determining a target characteristic point set of the image to be searched according to the target characteristic points.
Exemplarily, conducting pyramid downsampling layering on the image to be searched according to the number of layers of the hierarchical template image corresponding to the template image to obtain the hierarchical image to be searched, obtaining target feature points of the image to be searched of each hierarchy, and forming a target feature point set according to the target feature points of the images to be searched of different hierarchies. The manner of obtaining the target feature point of the image to be searched at each level is the same as the manner of obtaining the template feature point of the template image, which is not described in detail herein.
Optionally, the extracting the target feature point of the image to be searched at each level includes:
extracting target candidate corner points of the image to be searched in the hierarchy;
if the target candidate corner point is a maximum value point in the neighborhood, determining the target candidate corner point as a target salient corner point;
extracting target edge points of the target salient corner points in the neighborhood;
sampling the target edge points to obtain target edge feature points;
and determining the target characteristic points of the image to be searched according to the target salient corner points and the target edge characteristic points.
Illustratively, a Harris corner detection algorithm is used to extract target candidate corners of the image to be searched, pseudo feature points are removed from the target candidate corners to obtain target salient corners, target edge points are extracted in the neighborhoods of the target salient corners through an edge detection algorithm based on the Sobel operator, and the target edge points are sampled to obtain target edge feature points; the sampling mode may be equal-interval sampling or unequal-interval random sampling. The target feature points of the image to be searched are formed from the target salient corners and the target edge feature points. These target feature points integrate the corner features and edge features of the image, and can fully embody its corner information and edge feature information.
S250, traversing each pixel point of the image to be searched according to the template image, and calculating the target similarity measurement of the template feature point set and the target feature point set corresponding to the template image area.
The target similarity measurement is the maximum similarity measurement of the template feature point set of the image at the lowest level of the template image and the target feature point set of the image at the lowest level of the image to be searched.
Specifically, traversing each pixel point of the image to be searched according to the template image may be traversing each pixel point of a corresponding hierarchical image of the image to be searched according to each hierarchical image of the template image, calculating a target similarity measure of the template feature point set and the target feature point set corresponding to the template image region, implementing a process from coarse matching to fine matching, and finally determining a maximum similarity measure of the template feature point set of the lowest hierarchical image of the template image and the target feature point set of the lowest hierarchical image of the image to be searched.
Optionally, traversing each pixel point of the image to be searched according to the template image, and calculating the target similarity metric of the target feature point set corresponding to the template feature point set and the template image region, including:
traversing each pixel point of the highest-level image of the image to be searched according to the highest-level image of the template image;
calculating a template feature point set of a highest-level image of the template images and a first similarity measure of the target feature point set corresponding to the highest-level template image region, and determining a matching point position corresponding to the maximum value of the first similarity measure;
sequentially determining the areas to be matched of the next-level images of the images to be searched according to the positions of the matching points until the areas to be matched of the images at the lowest level of the images to be searched are determined;
and calculating a template feature point set of the lowest-level image of the template images and a second similarity measure of the target feature point set corresponding to the region to be matched, and determining the maximum value of the second similarity measure as the target similarity measure.
Illustratively, traverse each pixel point of the highest-level image of the image to be searched according to the highest-level image of the template image, calculate the first similarity measure of the template feature point set of the highest-level template image and the target feature point set corresponding to the template image region, and determine the matching point position corresponding to the maximum value of the first similarity measure. This matching point position is mapped into the next (finer) level of the image to be searched according to a preset mapping strategy, yielding the region to be matched at that level. Then traverse each pixel point of that level of the image to be searched according to the corresponding level of the template image, calculate the similarity measure of the template feature point set and the target feature point set corresponding to the template image region, and determine the next region to be matched from the maximum value of the similarity measure; and so on, until the region to be matched of the lowest-level image of the image to be searched is determined. Finally, calculate the second similarity measure of the template feature point set of the lowest-level template image and the target feature point set corresponding to the region to be matched, and determine the maximum value of the second similarity measure as the target similarity measure.
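The coarse-to-fine descent can be sketched as follows, assuming 2× downsampling between levels and using a negative sum-of-squared-differences score as a stand-in for the feature-point similarity measure; ssd_best and the margin parameter are illustrative.

```python
import numpy as np

def ssd_best(tpl, search, region):
    """Best (row, col, score) for the template inside `region` of the
    search image, scored by negative sum of squared differences."""
    r0, c0, r1, c1 = region
    th, tw = tpl.shape
    best = (0, 0, -np.inf)
    for r in range(max(0, r0), min(r1, search.shape[0] - th + 1)):
        for c in range(max(0, c0), min(c1, search.shape[1] - tw + 1)):
            s = -np.sum((search[r:r + th, c:c + tw] - tpl) ** 2)
            if s > best[2]:
                best = (r, c, s)
    return best

def coarse_to_fine(levels_tpl, levels_search, score_fn=ssd_best, margin=2):
    """levels_* run from the highest (coarsest) level down to the lowest
    (original) level, each assumed to be a 2x downsampling of the next.
    The match at one level seeds a small window at the next level."""
    region = (0, 0, levels_search[0].shape[0], levels_search[0].shape[1])
    best = None
    for tpl, search in zip(levels_tpl, levels_search):
        r, c, s = score_fn(tpl, search, region)
        best = (r, c, s)
        r2, c2 = 2 * r, 2 * c  # map the match position one level down
        region = (r2 - margin, c2 - margin,
                  r2 + tpl.shape[0] + margin, c2 + tpl.shape[1] + margin)
    return best
```

Only the top level is searched exhaustively; every finer level searches a window of a few pixels around the mapped position, which is where the speedup comes from.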
For example, the mapping policy for sequentially determining the areas to be matched of the next-level images of the images to be searched according to the positions of the matching points may be:
where (x_1, y_1) is the matching point position at the current level; (x′_2, y′_2) and (x″_2, y″_2) are the upper-left corner coordinates of the region to be matched in the next-level image of the image to be searched; img_next.cols is the number of column pixels of the lower-level template image; and img_next.rows is the number of row pixels of the lower-level template image.
Optionally, if the similarity measure of the template feature points and the target feature points corresponding to the hierarchical template image region meets a first termination condition, terminating the similarity measure calculation of the current pixel point;
the first termination condition includes:
s_j < s_min − 1 + j/n;

where s_j is the similarity measure over the first j pairs of template feature points and the target feature points corresponding to the hierarchical template image region; n is the total number of template feature points and target feature points participating in the calculation; d′_i is the direction vector of the i-th target feature point of the hierarchical image to be searched, and d_i is the direction vector of the template feature point corresponding to the i-th target feature point; s_min is a preset threshold; t′_i is the x-direction gradient of the target feature point of the hierarchical image to be searched, and t_i is the x-direction gradient of the template feature point of the hierarchical template image; u′_i is the y-direction gradient of the target feature point of the hierarchical image to be searched, and u_i is the y-direction gradient of the template feature point of the hierarchical template image.
In the image matching process, in order to quickly locate the actual matching position between the template image and the image to be searched, the similarity measure at positions that cannot contain the target need not be computed in full; if the termination condition is met, the similarity measure calculation for the current pixel point is ended early, which speeds up matching.
Optionally, if the similarity measure between the first feature point in the hierarchical template image region and the second feature point of the image to be searched corresponding to the hierarchical template image region satisfies a second termination condition, terminating the similarity measure calculation of the current pixel point:
the second termination condition includes:
s_j < min(s_min − 1 + f·j/n, s_min·j/n);

where f = (1 − g·s_min)/(1 − s_min); when the greedy coefficient g = 1, all points use the strict threshold to judge the termination condition; s_min is a preset threshold.
The advantage of this arrangement is that, for images to be searched containing occlusion, termination is judged with different thresholds: by presetting the greedy coefficient g, the first n − j terms use a strict threshold to judge the termination condition, and the last j terms use a loose threshold. Preferably, the greedy coefficient g is set to 0.9.
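A sketch of the termination test inside the running-sum loop. The per-point terms in dots are assumed to be the normalized dot products from the similarity measure, and s_min < 1 is required for f to be defined; both assumptions are illustrative.

```python
def partial_score_with_termination(dots, s_min=0.8, g=0.9):
    """Accumulate the per-point similarity terms, abandoning the pixel as
    soon as the partial score s_j falls below
    min(s_min - 1 + f*j/n, s_min*j/n), with f = (1 - g*s_min)/(1 - s_min)."""
    n = len(dots)
    f = (1 - g * s_min) / (1 - s_min)  # requires s_min < 1
    s = 0.0
    for j, d in enumerate(dots, start=1):
        s += d / n
        if s < min(s_min - 1 + f * j / n, s_min * j / n):
            return None  # cannot reach the threshold: terminate early
    return s
```

With g = 0.9 and s_min = 0.8, a position whose first few terms contribute nothing is abandoned after only two terms, while a true match accumulates to its full score.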
S260, determining the position of the matched feature point according to the target similarity measurement, and displaying the matched feature point in the image to be searched.
As shown in fig. 2c, the specific steps of the technical solution of this embodiment are as follows. First, extract the feature points of the template image: after filtering and denoising the template image, extract candidate corners of the template image with the Harris corner detection algorithm and, by a non-maximum suppression method over local regions of the candidate corners, retain the candidate corners with the maximum responsivity as salient corners; then extract edge feature points in the neighborhoods of the salient corners with an edge detection algorithm, screen the edge feature points with an equal-interval sampling strategy, and determine the feature point set of the template image from the salient corners and the edge feature points.
And secondly, carrying out scale change on the template image to obtain a feature point set of the template image and form a template library. And layering and angle rotating the corresponding template image by a pyramid self-adaptive layering method to obtain a multi-scale and multi-angle template image feature point set.
And thirdly, extracting a characteristic point set of the image to be searched. And layering the images to be searched according to the number of layers of the template images, acquiring a feature point set of each layer of the images to be searched, calculating the similarity measurement of the template feature point set of each layer of the template images and the feature point set of the images to be searched corresponding to the template image area of the corresponding hierarchy, and if a termination condition is met, finishing the similarity measurement calculation of the current pixel point in advance.
And fourthly, determining the matching position according to the mapping strategy. And gradually determining the matching position points of the bottom template image and the bottom image to be searched according to the mapping strategy from rough matching to fine matching, and displaying the positions of the matching points in the image to be searched.
According to the technical scheme of this embodiment, a template feature point set of a template image and a target feature point set of an image to be searched are acquired, where the feature point sets include salient corner points and edge feature points; each pixel point of the image to be searched is traversed according to the template image, and the target similarity measure of the template feature point set and the target feature point set corresponding to the template image region is calculated; and the positions of the matching feature points are determined according to the target similarity measure and displayed in the image to be searched. Image matching can thus be performed based on the corner information and edge feature information of the image, which solves the problem that traditional feature point matching methods have low matching accuracy for images with few feature points and a certain shape, improves the matching accuracy for images with a certain shape, and, by further extracting salient corners from the corner information, improves matching efficiency.
Example three
Fig. 3 is a schematic structural diagram of an image matching apparatus according to a third embodiment of the present invention. This embodiment is applicable to the case of matching an image based on feature points and shapes. The apparatus may be implemented in software and/or hardware, and may be integrated in any device providing the function of image matching. As shown in fig. 3, the image matching apparatus specifically includes: an acquisition module 310, a calculation module 320, and a determination module 330.
The acquisition module 310 is configured to acquire a template feature point set of a template image and a target feature point set of an image to be searched, where the feature point sets include salient corner points and edge feature points;
the calculation module 320 is configured to traverse each pixel point of the image to be searched according to the template image, and calculate a target similarity measure between the template feature point set and the target feature point set corresponding to the template image region;
the determination module 330 is configured to determine, according to the target similarity measure, the positions of the matching feature points, and display the matching feature points in the image to be searched.
Optionally, the obtaining module includes:
a first acquisition unit, configured to acquire the template feature points and the rotation angle of the template image;
a first layering unit, configured to perform pyramid downsampling on the template image to obtain a hierarchical template image if the number of template feature points is larger than a first preset number;
and a rotation unit, configured to rotate the template feature points of the template image at each level by the rotation angle to obtain the template feature point set of the template image.
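As a minimal sketch of the rotation unit, the template feature points can be rotated about a template centre for each candidate angle; the function name and the `center` parameter are illustrative assumptions:

```python
import math

def rotate_points(points, angle_deg, center=(0.0, 0.0)):
    """Rotate (x, y) feature points about `center` by `angle_deg` degrees.

    Rotated copies of the template feature point set allow a rotated
    instance of the template to be matched directly in the image to be
    searched.
    """
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    cx, cy = center
    rotated = []
    for x, y in points:
        dx, dy = x - cx, y - cy  # offset from the rotation centre
        rotated.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return rotated
```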
Optionally, the first obtaining unit includes:
a first extraction subunit, configured to extract the template candidate corner points of the template image;
a first determining subunit, configured to determine the template candidate corner as a template salient corner if the template candidate corner is a maximum point in a neighborhood;
the second extraction subunit is used for extracting template edge points of the template salient corner points in the neighborhood;
the first sampling subunit is used for sampling the template edge points to obtain template edge feature points;
and the second determining subunit is used for determining the template feature points of the template image according to the template salient corner points and the template edge feature points.
Optionally, the first determining subunit is specifically configured to:
acquiring a first gradient of the template candidate corner;
acquiring a second gradient of the template candidate corner in the target gradient direction in the neighborhood;
and if the first gradient is larger than the second gradient, the template candidate corner is a maximum value point in the neighborhood and is determined as a template salient corner.
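A minimal sketch of this comparison, assuming precomputed gradient magnitude and direction arrays (the array names and the one-pixel step along the gradient direction are illustrative assumptions): a candidate whose gradient magnitude exceeds its neighbours' magnitudes along the gradient direction is kept as a salient corner.

```python
import numpy as np

def is_salient_corner(grad_mag, grad_dir, y, x):
    """Decide whether pixel (y, x) is a salient corner.

    grad_mag: array of gradient magnitudes (the "first gradient" is the
    value at (y, x)).  grad_dir: array of gradient directions in radians.
    The "second gradient" is the magnitude of the neighbours reached by
    stepping one pixel along the gradient direction.
    """
    dy = int(round(np.sin(grad_dir[y, x])))
    dx = int(round(np.cos(grad_dir[y, x])))
    first = grad_mag[y, x]
    neighbours = (grad_mag[y + dy, x + dx], grad_mag[y - dy, x - dx])
    # A maximum point in the neighbourhood is kept as a salient corner.
    return first > max(neighbours)
```

This is the same non-maximum-suppression idea used in edge thinning: only the locally strongest response along the gradient direction survives, which reduces the corner set and thus the matching cost.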
Optionally, the obtaining module includes:
a second acquisition unit, configured to acquire the image to be searched and the number of layers of the hierarchical template image corresponding to the template image;
the second layering unit is used for conducting pyramid downsampling layering on the image to be searched according to the layer number to obtain a hierarchical image to be searched;
the extraction unit is used for extracting target characteristic points of the image to be searched of each hierarchy;
and the determining unit is used for determining the target characteristic point set of the image to be searched according to the target characteristic points.
Optionally, the extracting unit is specifically configured to:
extracting target candidate corner points of the image to be searched in the hierarchy;
if the target candidate corner point is a maximum value point in the neighborhood, determining the target candidate corner point as a target salient corner point;
extracting target edge points of the target salient corner points in the neighborhood;
sampling the target edge points to obtain target edge feature points;
and determining the target characteristic points of the image to be searched according to the target salient corner points and the target edge characteristic points.
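The edge-point sampling performed in both extraction pipelines can be sketched as a uniform subsampling of the edge points; the function name and the `max_points` parameter are assumptions for illustration:

```python
def sample_edge_points(edge_points, max_points):
    """Uniformly subsample edge points to at most `max_points`.

    Keeping only every k-th edge point reduces the size of the feature
    point set while preserving the outline of the shape, which keeps the
    later similarity computation cheap.
    """
    if len(edge_points) <= max_points:
        return list(edge_points)
    step = len(edge_points) / max_points
    return [edge_points[int(i * step)] for i in range(max_points)]
```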
Optionally, the calculation module is specifically configured to:
traversing each pixel point of the highest-level image of the image to be searched according to the highest-level image of the template image;
calculating a first similarity measure of a template feature point set of a highest-level image of the template image and the target feature point set corresponding to the template image region, and determining a matching point position corresponding to the maximum value of the first similarity measure;
sequentially determining the areas to be matched of the next-level images of the images to be searched according to the positions of the matching points until the areas to be matched of the images at the lowest level of the images to be searched are determined;
and calculating a template feature point set of the lowest-level image of the template images and a second similarity measure of the target feature point set corresponding to the region to be matched, and determining the maximum value of the second similarity measure as the target similarity measure.
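The similarity measure with early termination (the termination condition of step three above) can be sketched as a normalized sum of gradient-direction dot products; the unit-vector representation, the `min_score` threshold, and the dictionary lookup are illustrative assumptions:

```python
def similarity_at(template_pts, template_dirs, search_dirs, oy, ox,
                  min_score=0.8):
    """Similarity of the template placed at offset (oy, ox).

    template_pts: (row, col) offsets of the template feature points.
    template_dirs / search_dirs: unit gradient vectors (dy, dx); the
    search image's vectors are looked up by absolute pixel position.
    """
    n = len(template_pts)
    score = 0.0
    for k, ((py, px), (ty, tx)) in enumerate(zip(template_pts, template_dirs)):
        sy, sx = search_dirs.get((oy + py, ox + px), (0.0, 0.0))
        score += ty * sy + tx * sx  # cosine of the direction difference
        # Early termination: even if every remaining point matched
        # perfectly, the final score could not reach min_score.
        if (score + (n - 1 - k)) / n < min_score:
            return None
    return score / n
```

Abandoning hopeless candidate positions early is what makes the per-pixel traversal of the image to be searched tractable.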
The above product can execute the method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
Example four
Fig. 4 is a schematic structural diagram of a computer device in the fourth embodiment of the present invention. Fig. 4 illustrates a block diagram of an exemplary computer device 12 suitable for implementing embodiments of the present invention. The computer device 12 shown in fig. 4 is only an example and should not impose any limitation on the functionality or scope of use of embodiments of the present invention.
As shown in fig. 4, the computer device 12 is in the form of a general purpose computing device. The components of the computer device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components, including the system memory 28, to the processing unit 16.
The bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The computer device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by the computer device 12, and include both volatile and nonvolatile media, and removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The computer device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, the storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in fig. 4, and commonly referred to as a "hard drive"). Although not shown in fig. 4, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 18 by one or more data media interfaces. The memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in the memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may comprise an implementation of a network environment. The program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
The computer device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the computer device 12, and/or with any devices (e.g., a network card, a modem, etc.) that enable the computer device 12 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 22. In the computer device 12 of the present embodiment, the display 24 is not provided as a separate body but is embedded in the mirror surface; when the display surface of the display 24 is not displaying, the display surface of the display 24 and the mirror surface are visually integrated. Also, the computer device 12 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 20. As shown, the network adapter 20 communicates with the other modules of the computer device 12 via the bus 18. It should be understood that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the computer device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example, implementing the image matching method provided by an embodiment of the present invention: acquiring a template feature point set of a template image and a target feature point set of an image to be searched, where the feature point sets include salient corner points and edge feature points; traversing each pixel point of the image to be searched according to the template image, and calculating a target similarity measure between the template feature point set and the target feature point set corresponding to the template image region; and determining the positions of the matching feature points according to the target similarity measure, and displaying the matching feature points in the image to be searched.
Example five
The fifth embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the image matching method provided by all of the embodiments of the present invention: acquiring a template feature point set of a template image and a target feature point set of an image to be searched, where the feature point sets include salient corner points and edge feature points; traversing each pixel point of the image to be searched according to the template image, and calculating a target similarity measure between the template feature point set and the target feature point set corresponding to the template image region; and determining the positions of the matching feature points according to the target similarity measure, and displaying the matching feature points in the image to be searched.
Any combination of one or more computer-readable media may be employed. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.