Disclosure of Invention
One or more embodiments of the present specification describe a two-dimensional code correction method, apparatus, and device, which can correct a two-dimensional code with distortion.
In a first aspect, a two-dimensional code correction method is provided, including:
acquiring multiple frames of images containing the same two-dimensional code;
extracting feature points from any first image of the multiple frames of images, and tracking the feature points in the other images of the multiple frames of images;
determining three-dimensional coordinates of the feature points in a three-dimensional space based on the coordinates of the feature points in the first image, the coordinates of the feature points in the other images, and a three-dimensional reconstruction algorithm;
acquiring a two-dimensional code template corresponding to the version of the two-dimensional code based on at least one of the multiple frames of images;
determining feature positions corresponding to the feature points from the two-dimensional code template;
transforming the two-dimensional code template based on the three-dimensional coordinates of the feature points, the coordinates of the feature positions in the two-dimensional code template, and an elastic registration algorithm, so that the two-dimensional code template undergoes the same deformation as the two-dimensional code in the first image;
projecting the transformed two-dimensional code template onto the first image or projecting the first image onto the transformed two-dimensional code template;
and sampling the projected two-dimensional code template to obtain a corrected two-dimensional code.
In a second aspect, a two-dimensional code correction device is provided, including:
an acquiring unit, configured to acquire multiple frames of images containing the same two-dimensional code;
an extracting unit, configured to extract feature points from any first image of the multiple frames of images, and track the feature points in the other images of the multiple frames of images;
a determining unit, configured to determine three-dimensional coordinates of the feature points in a three-dimensional space based on the coordinates of the feature points in the first image, the coordinates of the feature points in the other images, and a three-dimensional reconstruction algorithm;
the acquiring unit is further configured to acquire a two-dimensional code template corresponding to the version of the two-dimensional code based on at least one of the multiple frames of images;
the determining unit is further configured to determine feature positions corresponding to the feature points from the two-dimensional code template acquired by the acquiring unit;
a transformation unit, configured to transform the two-dimensional code template based on the three-dimensional coordinates of the feature points determined by the determining unit, the coordinates of the feature positions in the two-dimensional code template, and an elastic registration algorithm, so that the two-dimensional code template undergoes the same deformation as the two-dimensional code in the first image;
a projection unit, configured to project the two-dimensional code template transformed by the transformation unit onto the first image, or project the first image onto the transformed two-dimensional code template;
and a sampling unit, configured to sample the projected two-dimensional code template to obtain the corrected two-dimensional code.
In a third aspect, a two-dimensional code correction device is provided, including:
a memory;
one or more processors; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the programs, when executed by the processors, implement the steps of:
acquiring multiple frames of images containing the same two-dimensional code;
extracting feature points from any first image of the multiple frames of images, and tracking the feature points in the other images of the multiple frames of images;
determining three-dimensional coordinates of the feature points in a three-dimensional space based on the coordinates of the feature points in the first image, the coordinates of the feature points in the other images, and a three-dimensional reconstruction algorithm;
acquiring a two-dimensional code template corresponding to the version of the two-dimensional code based on at least one of the multiple frames of images;
determining feature positions corresponding to the feature points from the two-dimensional code template;
transforming the two-dimensional code template based on the three-dimensional coordinates of the feature points, the coordinates of the feature positions in the two-dimensional code template, and an elastic registration algorithm, so that the two-dimensional code template undergoes the same deformation as the two-dimensional code in the first image;
projecting the transformed two-dimensional code template onto the first image or projecting the first image onto the transformed two-dimensional code template;
and sampling the projected two-dimensional code template to obtain a corrected two-dimensional code.
The two-dimensional code correction method, apparatus, and device provided by one or more embodiments of the present specification acquire multiple frames of images containing the same two-dimensional code, extract feature points from any first image of the multiple frames of images, and track the feature points in the other images. Three-dimensional coordinates of the feature points in a three-dimensional space are determined based on the coordinates of the feature points in the first image, the coordinates of the feature points in the other images, and a three-dimensional reconstruction algorithm. A two-dimensional code template corresponding to the version of the two-dimensional code is acquired based on at least one of the multiple frames of images, and the feature positions corresponding to the feature points are determined from the template. The template is then transformed based on the three-dimensional coordinates of the feature points, the coordinates of the feature positions in the template, and an elastic registration algorithm, so that the template undergoes the same deformation as the two-dimensional code in the first image. Finally, the transformed template is projected onto the first image (or the first image is projected onto the transformed template), and the projected template is sampled to obtain a corrected two-dimensional code.
That is, in the scheme provided in this specification, first, the deformation generated by the two-dimensional code is determined by restoring the three-dimensional structure of the two-dimensional code, then, the deformation is applied to the two-dimensional code template by the elastic registration algorithm, and finally, the corrected two-dimensional code is obtained by projecting and sampling the deformed two-dimensional code template. Therefore, the two-dimensional codes with different deformations can be accurately corrected, and the method has good universality.
Detailed Description
The scheme provided by the specification is described below with reference to the accompanying drawings.
Before describing the solution provided in the present specification, its inventive concept is first explained.
As described in the background art, as the application forms of the two-dimensional code diversify, the two-dimensional code itself may exhibit deformation and distortion. To correct a deformed and distorted two-dimensional code, the implementation idea may be as follows:
First, the three-dimensional structure of the deformed two-dimensional code is restored. Specifically, multiple frames of images containing the same deformed two-dimensional code can be acquired. In one example, the deformed two-dimensional code can be as shown in FIG. 1. From any one of the multiple frames of images, a plurality of feature points can be extracted. Then, for each feature point, its position in the other frame images can be tracked, and its position in three-dimensional space can be determined based on its positions in the above one frame image and the other frame images and a three-dimensional reconstruction algorithm. The three-dimensional structure of the deformed two-dimensional code is then restored based on the determined positions of the plurality of feature points in three-dimensional space.
It can be understood that after the three-dimensional structure of the deformed two-dimensional code is restored, the deformation and distortion of the two-dimensional code are actually determined.
Second, the deformation is applied to the corresponding two-dimensional code template through an elastic registration algorithm. The two-dimensional code template described in the present specification may include several feature positions with known coordinates. The feature positions may correspond to modules in the two-dimensional code template. A module can be understood as a black or white module in the two-dimensional code; that is, a module can contain a plurality of pixel points, but the pixel values of those pixel points are not yet determined, so the color value of the module is null. Specifically, for each extracted feature point, the corresponding feature position is determined from the two-dimensional code template. A first transformation relation between the feature points and the feature positions is determined based on the positions of the feature points in three-dimensional space and the corresponding feature positions in the two-dimensional code template, and a first transformation is performed on the two-dimensional code template based on the first transformation relation. The first transformation here may refer to a similarity transformation, which may include, but is not limited to, translation, rotation, scaling, and the like. It can be understood that after the first transformation is performed on the two-dimensional code template, the template and the two-dimensional code to be reconstructed are located in the same coordinate system, so that a preliminary registration of the two is achieved.
It should be noted that before the preliminary registration is performed, the feature points and the feature positions in the two-dimensional code template are not in the same coordinate system, so the corresponding feature positions determined by the above steps are usually not accurate enough. Therefore, after the preliminary registration is completed, the feature positions corresponding to the feature points in the two-dimensional code template after the first transformation may be re-determined through an elastic registration algorithm, and a second transformation relation between the feature points and the re-determined corresponding feature positions may be determined. A second transformation is then performed on the two-dimensional code template based on the second transformation relation. The second transformation may refer to a non-rigid transformation, which may include, but is not limited to, rotation, translation, scaling, warping, and the like. As a result, the two-dimensional code template undergoes the same deformation as the deformed two-dimensional code.
Finally, the deformed two-dimensional code template is projected and sampled. It should be noted that, through the above steps, the two-dimensional code template merely undergoes the same deformation as the deformed two-dimensional code; since the color value of the module corresponding to each feature position in the template is null, the deformed template cannot itself serve as the reconstructed two-dimensional code. To obtain the reconstructed two-dimensional code, the following steps can further be executed: project the two-dimensional code template after the second transformation onto any one of the multiple frames of images, or project any one of the multiple frames of images onto the two-dimensional code template after the second transformation, thereby obtaining the reconstructed two-dimensional code. It is understood that the reconstructed two-dimensional code is still a deformed two-dimensional code. Therefore, the projected two-dimensional code template can be sampled, that is, the color value corresponding to each projected module is filled into the corresponding undeformed two-dimensional code template, so that the two-dimensional code with the deformation removed is obtained; this two-dimensional code is also called the corrected two-dimensional code.
The above is the inventive concept provided in the present specification, based on which the present solution can be obtained. The solution is explained in detail below.
Fig. 2 is a schematic diagram of a two-dimensional code decoding system provided in this specification. As shown in fig. 2, the system may include: an acquisition module 202, a correction module 204, and a decoding module 206.
The obtaining module 202 is configured to obtain multiple frames of images containing the same deformed two-dimensional code.
The correcting module 204 is configured to restore the three-dimensional structure of the two-dimensional code based on the multiple frames of images acquired by the obtaining module 202, determine the deformation of the two-dimensional code based on the restored three-dimensional structure, apply the deformation to the two-dimensional code template through an elastic registration algorithm, and project and sample the deformed two-dimensional code template to obtain a corrected two-dimensional code. The specific correction process is described later.
And a decoding module 206, configured to decode the corrected two-dimensional code.
Fig. 3 is a flowchart of a two-dimensional code correction method according to an embodiment of the present disclosure. The method may be executed by any device with processing capability: a server, a system, or a module, such as the correction module 204 in fig. 2. As shown in fig. 3, the method may specifically include:
step 302, acquiring a plurality of frames of images containing the same two-dimensional code.
The two-dimensional code may be a two-dimensional code that has undergone deformation (referred to as a deformed two-dimensional code). The multiple frames of images here may refer to a plurality of frames of images, continuous or discontinuous in time, captured by the user.
Step 304, extracting a feature point from any first image in the multi-frame images, and tracking the feature point in other images in the multi-frame images.
For example, several feature points may be extracted from the first image by a corner detection algorithm. The corner detection algorithm here may include, but is not limited to, the Harris algorithm, the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded Up Robust Features (SURF) algorithm, the Oriented FAST and Rotated BRIEF (ORB) algorithm, and the like.
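As an illustration of the corner extraction step, the following is a minimal numpy sketch of the Harris response (one of the detectors listed above); the 3x3 smoothing window, the constant k, and the synthetic test image are illustrative choices rather than part of the described scheme.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel,
    where M is the 2x2 structure tensor of the image gradients."""
    # Central-difference gradients along rows (Iy) and columns (Ix).
    Iy, Ix = np.gradient(img.astype(float))

    def box(a):
        # Simple 3x3 box smoothing of the structure-tensor entries.
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A white square on black: corners should score positive, edges negative.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

On this synthetic image the response at a square corner such as (5, 5) is positive, while at an edge midpoint such as (5, 10) it is negative, which is the property corner detectors exploit.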
The feature points extracted in the present specification may include two types: first-class feature points and second-class feature points. The first-class feature points may also be referred to as key feature points, which correspond to corner points of a target pattern. The target pattern may include, but is not limited to, a detection pattern, a correction pattern, a positioning pattern, and the like. The second-class feature points may also be referred to as other feature points, which are feature points with corner features extracted from any position of the two-dimensional code.
After extracting a plurality of feature points from the first image, for each feature point, the process of tracking the feature point in the other images except the first image in the plurality of frames of images may be:
and selecting a corresponding area in the first image based on the coordinates of the feature points in the first image. The corresponding region of the region is determined from the other image, e.g. the second image. And searching for characteristic points matched with the characteristic points extracted from the first image in the corresponding area. The matched feature point is determined as a corresponding feature point in the second image. Since the change in image content is usually small between temporally consecutive multi-frame images, this method can usually accurately and efficiently track the feature points.
Of course, in practical applications, the feature point tracking process may also be: extract several feature points from the other images; perform feature matching between each feature point extracted from the first image and each feature point extracted from the other images based on the feature descriptors of the feature points; and, for any feature point extracted from the first image, determine the matching feature point in the other images as its corresponding feature point.
And step 306, determining the three-dimensional coordinates of the feature points in the three-dimensional space based on the coordinates of the feature points in the first image, the coordinates of the feature points in other images and the three-dimensional reconstruction algorithm.
The three-dimensional reconstruction algorithm may be, for example, a Structure From Motion (SFM) algorithm. The basic principle of the SFM algorithm is: based on the imaging principle, the three-dimensional coordinates of the feature points are computed from two or more views, and the optimal estimate of each feature point's position in three-dimensional space is then obtained by minimizing the reprojection error.
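The core of such a reconstruction is triangulation: given a feature point's coordinates in two views with known camera matrices, its three-dimensional coordinates can be recovered. Below is a minimal linear (DLT) triangulation sketch; the two synthetic camera matrices stand in for the poses an SFM pipeline would estimate, and the full pipeline (pose estimation, bundle adjustment) is not shown.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points.
    Solves A @ X = 0 for the homogeneous 3D point via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic cameras: identity pose, and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free projections the estimate matches the true point; with real measurements, minimizing the reprojection error refines such linear estimates.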
And 308, acquiring a two-dimensional code template corresponding to the version of the two-dimensional code based on at least one frame of image in the multi-frame images.
In the present specification, the version of the two-dimensional code can be determined in the following two ways.
In one implementation, three detection patterns and their center points may be detected in at least one of the multiple frames of images. Two detection patterns located on the same side are selected from the three, the distance between them is calculated based on their center points, and the version of the two-dimensional code is determined from that distance. It should be noted that this method of determining the version applies to two-dimensional codes of any version, and therefore has good universality.
The detection pattern described in this specification can be viewed as consisting of three overlapping concentric squares. In a binarized image, the black and white run lengths of a line segment crossing the detection pattern satisfy the ratio 1:1:3:1:1. The detection patterns and their center points can therefore be detected in the first image based on this feature. Since detecting the detection patterns and their center points is a conventional technique, it is not described in detail here.
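A run-length check for the 1:1:3:1:1 ratio can be sketched as follows; the tolerance value is an illustrative choice, not one prescribed by the specification.

```python
def matches_finder_ratio(runs, tol=0.5):
    """Check whether five consecutive run lengths
    (dark/light/dark/light/dark) fit the 1:1:3:1:1 detection-pattern
    ratio within a relative tolerance."""
    if len(runs) != 5 or any(r <= 0 for r in runs):
        return False
    unit = sum(runs) / 7.0  # the five runs together span 7 modules
    expected = (1, 1, 3, 1, 1)
    return all(abs(r - e * unit) < e * unit * tol
               for r, e in zip(runs, expected))

ok = matches_finder_ratio([4, 4, 12, 4, 4])   # clean 1:1:3:1:1 at 4 px/module
bad = matches_finder_ratio([4, 4, 4, 4, 4])   # uniform runs, not a finder
```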
In one example, the version determination formula of the two-dimensional code may be: version = (distance(f1, f2) / (moduleSize * cos(theta)) - 10) / 4, where f1 and f2 are the center points of the two detection patterns located on the same side, moduleSize is the side length of a module, and theta is the included angle between the scan line and the line connecting the center points of the two detection patterns in the transverse or longitudinal direction. A module here is a black or white module constituting the two-dimensional code, generally composed of a plurality of pixel points; the module size refers to the side length of one such black or white module.
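The formula above follows from the geometry of the symbol: the same-side detection pattern centers are (4 * version + 10) modules apart. It can be sketched directly; the helper name and the rounding to the nearest integer are illustrative assumptions.

```python
import math

def qr_version(dist, module_size, theta):
    """Version from the distance between two same-side detection-pattern
    centers: the centers are (4 * version + 10) modules apart, so
    version = (dist / (module_size * cos(theta)) - 10) / 4."""
    return round((dist / (module_size * math.cos(theta)) - 10) / 4)

# Version-1 symbol, 4 px modules, axis-aligned scan (theta = 0):
# centers are 14 modules = 56 px apart.
v = qr_version(56.0, 4.0, 0.0)
```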
In another implementation, the version of the two-dimensional code may also be read from a designated area of the two-dimensional code. For example, for a QR code whose version number is greater than or equal to 7, the version information is included in its function area, so the version can be read directly from the function area.
After determining the version of the two-dimensional code, the corresponding two-dimensional code template can be obtained based on the version.
The two-dimensional code template in this specification may include a number of feature positions with known coordinates, and the number of feature positions may be determined according to the size of the template. In one example, the size of the two-dimensional code template may be (17 + 4n) x (17 + 4n), where n is the version number. Taking version number 1 as an example, the size of the template is 21 x 21, so the template contains 21 x 21 feature positions.
It should be noted that the feature positions may correspond to modules in the two-dimensional code template. For example, in fig. 4, one module in the template represents one feature position. A module can be understood as a black or white module in the two-dimensional code; that is, a module can contain a plurality of pixel points, but the pixel values of those pixel points are not yet determined, so the color value of the module is null. In addition, the size of the two-dimensional code template in fig. 4 may be 21 x 21, that is, the template may include 21 x 21 feature positions, whose coordinates are: (1,1), (2,1), …, (21,1), …, (1,2), (2,2), …, (21,21).
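Generating the feature positions of a standard template from the version number, per the (17 + 4n) sizing and the coordinate ordering above, can be sketched as:

```python
def template_positions(version):
    """Feature positions of a standard (undeformed) template: a
    (17 + 4 * version) x (17 + 4 * version) grid of module coordinates,
    ordered (1,1), (2,1), ..., as in the example above."""
    size = 17 + 4 * version
    return [(col, row) for row in range(1, size + 1)
                       for col in range(1, size + 1)]

pos = template_positions(1)  # version 1 -> 21 x 21 = 441 feature positions
```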
It should be noted that the two-dimensional code template obtained in this step is a standard template and does not have any deformation.
And 310, determining a characteristic position corresponding to the extracted characteristic point from the two-dimensional code template.
Specifically, for a first-class feature point, the position of the corner point of the corresponding target pattern is determined, and the position with those coordinates in the two-dimensional code template is determined as the corresponding feature position. For example, suppose a feature point corresponds to the corner at the lower right of the detection pattern in the upper left of the QR code. Since the position of that corner in the two-dimensional code is (7,7), the position with coordinates (7,7) in the two-dimensional code template is determined as the feature position corresponding to that feature point.
For the second type of feature points, the distance between the second type of feature points and each feature position in the two-dimensional code template can be calculated, and the feature position corresponding to the minimum distance is determined as the corresponding feature position. Specifically, the distance may be calculated based on the coordinates of the second-class feature points and the known coordinates of the respective feature positions in the two-dimensional code template.
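The minimum-distance matching for second-class feature points can be sketched as a nearest-neighbor lookup over the template's known coordinates (squared Euclidean distance suffices for finding the minimum):

```python
def nearest_feature_position(point, positions):
    """Match a second-class feature point to the template feature
    position at minimum Euclidean distance."""
    return min(positions,
               key=lambda p: (p[0] - point[0]) ** 2 + (p[1] - point[1]) ** 2)

# A 21 x 21 grid of template coordinates, as for a version-1 template.
grid = [(c, r) for r in range(1, 22) for c in range(1, 22)]
match = nearest_feature_position((7.3, 6.8), grid)
```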
And step 312, transforming the two-dimensional code template based on the three-dimensional coordinates of the feature points, the coordinates of the feature positions in the two-dimensional code template and the elastic registration algorithm to obtain the two-dimensional code template which generates the same deformation as the two-dimensional code in the first image.
The transformation process may specifically be:
First, a first transformation relation between the feature points and the feature positions is determined based on the three-dimensional coordinates of the feature points and the coordinates of the feature positions in the two-dimensional code template, and a first transformation is performed on the two-dimensional code template based on that relation.
It should be noted that, since the coordinates of the feature positions in the two-dimensional code template are two-dimensional, the coordinates of the feature positions may be preprocessed before the first transformation relation is determined. The preprocessing may specifically expand the two-dimensional coordinates of a feature position into three-dimensional coordinates. Taking a feature position with coordinates (1,1) as an example, the expanded three-dimensional coordinates may be (1,1,Z), where Z can take the value 0 or any positive integer.
After preprocessing the coordinates of the feature position, the step of determining the first transformation relationship may specifically be determining the first transformation relationship between the feature point and the feature position by minimizing an error between the three-dimensional coordinates of the feature point and the three-dimensional coordinates of the feature position.
After the first transformation relation is determined, a first transformation may be performed on the two-dimensional code template. Specifically, the first transformation may be performed on each module (or feature position) in the two-dimensional code template. The first transformation herein may refer to similarity transformation, which may include, but is not limited to, translation, rotation, scaling, and the like. It can be understood that after the first transformation is performed on the two-dimensional code template, the two-dimensional code template and the two-dimensional code to be reconstructed can be located in the same coordinate system, so that the preliminary registration of the two-dimensional code template and the two-dimensional code to be reconstructed is realized.
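One common way to compute such a least-squares similarity transformation (scale, rotation, translation) between two point sets is an SVD-based (Umeyama-style) fit. The specification does not mandate a particular estimator, so the sketch below is only one illustrative possibility.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform with s * R @ x + t mapping
    src points onto dst points (SVD / Umeyama-style estimate)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)   # cross-covariance (unnormalized)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.sum(A ** 2)
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic check: recover a known scale-2 rotation about z plus translation.
rng = np.random.default_rng(1)
src = rng.random((10, 3))
ang = 0.3
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
dst = 2.0 * src @ R_true.T + np.array([1.0, -2.0, 0.5])
s, R, t = similarity_transform(src, dst)
```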
It should be noted that before the preliminary registration, the feature points and the feature positions in the two-dimensional code template are not in the same coordinate system, so the feature positions determined by the above steps are usually not accurate enough. Therefore, after the preliminary registration is completed, the feature positions corresponding to the feature points are re-determined, through an elastic registration algorithm, in the two-dimensional code template after the first transformation, and a second transformation relation between the feature points and the re-determined feature positions is determined. A second transformation is then performed, according to the second transformation relation, on the two-dimensional code template after the first transformation.
It should be noted that, since the first-class feature points correspond to corner points of the target pattern and thus have definite positions, the feature positions corresponding to the first-class feature points are usually determined accurately. The elastic registration algorithm may thus re-determine corresponding feature positions only for the second-class feature points, and determine a second transformation relation between those feature points and the re-determined feature positions. The second transformation is then performed, based on the second transformation relation, on each module (or feature position) that has undergone the first transformation.
The elastic registration algorithm may include, but is not limited to, an Iterative Closest Point (ICP) algorithm, a Radial Basis Function (RBF), and the like. The second transformation may refer to a non-rigid transformation, and may include, but is not limited to, rotation, translation, scaling, and warping. After the two-dimensional code template performs the second transformation, the two-dimensional code template generates the same deformation as the deformed two-dimensional code.
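As a toy illustration of the RBF family of methods mentioned above, the following sketch builds a non-rigid warp from control-point correspondences using Gaussian radial basis functions. Real elastic registration (e.g., ICP-based) is considerably more involved; the kernel width and the square of control points are illustrative assumptions.

```python
import numpy as np

def rbf_warp(ctrl_src, ctrl_dst, sigma=1.0):
    """Non-rigid warp from control correspondences with Gaussian RBFs:
    warp(x) = sum_i w_i * exp(-||x - c_i||^2 / sigma^2). Solving
    K @ W = ctrl_dst makes the warp exact at the control points."""
    d2 = np.sum((ctrl_src[:, None, :] - ctrl_src[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / sigma ** 2)        # kernel matrix between controls
    W = np.linalg.solve(K, ctrl_dst)    # per-control weight vectors

    def warp(pts):
        d2 = np.sum((pts[:, None, :] - ctrl_src[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / sigma ** 2) @ W
    return warp

# Four corners of a unit square, each displaced differently (non-rigid).
ctrl_src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
ctrl_dst = ctrl_src + np.array([[0.1, 0.0], [0.0, 0.1],
                                [-0.1, 0.0], [0.0, -0.1]])
warp = rbf_warp(ctrl_src, ctrl_dst)
moved = warp(ctrl_src)  # interpolates the control displacements exactly
```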
It should be noted that this scheme first obtains a two-dimensional code template without any deformation, and then adaptively applies the deformation of the current two-dimensional code to the template based on the restored three-dimensional structure of that code, so deformed two-dimensional codes of different forms can all be corrected.
And step 314, projecting the transformed two-dimensional code template onto the first image or projecting the first image onto the transformed two-dimensional code template.
Through the above steps, the two-dimensional code template merely undergoes the same deformation as the deformed two-dimensional code; since the color value of each module in the template is undetermined, the deformed template cannot itself serve as the reconstructed two-dimensional code. To obtain the reconstructed two-dimensional code, the following steps can further be executed: project the two-dimensional code template after the second transformation onto any one of the multiple frames of images, or project any one of the multiple frames of images onto the template after the second transformation, thereby obtaining the reconstructed two-dimensional code. It is understood that the reconstructed two-dimensional code is still a deformed two-dimensional code.
And step 316, sampling the projected two-dimensional code template to obtain a corrected two-dimensional code.
Here, each module (or feature position) in the projected two-dimensional code template may be sampled to obtain the pixel values of the pixel points contained in each projected module, and the color value of each module is then determined from the sampled pixel values. For example, if a module includes 7 pixel points, 6 of which are black and 1 of which is white, the color value of the module is black.
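The per-module decision can be sketched as a majority vote over the module's sampled pixel values; the grayscale threshold is an illustrative assumption.

```python
def module_color(pixels, threshold=128):
    """Decide a module's color by majority vote over its sampled pixels
    (grayscale values; values below threshold count as black)."""
    black = sum(1 for p in pixels if p < threshold)
    return 'black' if black > len(pixels) / 2 else 'white'

# As in the example above: 6 dark pixels, 1 light pixel -> black module.
c = module_color([0, 10, 5, 0, 20, 0, 200])
```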
After the color value of each module is determined, the color values are filled into the undeformed two-dimensional code template, thereby obtaining the two-dimensional code with the deformation removed, which is also called the corrected two-dimensional code. The undeformed two-dimensional code template here may refer to the two-dimensional code template used in step 310.
In summary, the two-dimensional code correction method provided in the embodiments of the present specification does not preset a deformation model (that is, it assumes no global deformation model), so two-dimensional codes with different forms of deformation can be corrected. In addition, the scheme obtains deformation feature points by extracting image corner points rather than relying solely on the correction patterns for deformation correction. Owing to the coding characteristics of the two-dimensional code, corner points are abundant in it, so using corner points as feature points is more general. Finally, when correction patterns are available, the scheme can remove deformation more efficiently, but the method remains effective when no correction pattern is present.
Corresponding to the two-dimensional code correction method, an embodiment of the present specification further provides a two-dimensional code correction apparatus, as shown in fig. 5, the apparatus may include:
An acquiring unit 502, configured to acquire multiple frames of images including the same two-dimensional code.
An extracting unit 504, configured to extract a feature point from an arbitrary first image in the multiple frames of images acquired by the acquiring unit 502, and track the feature point in other images in the multiple frames of images.
The extracting unit 504 may specifically be configured to:
and selecting a corresponding area in the first image based on the coordinates of the feature points in the first image.
And determining the area corresponding to the selected area from the other images.
And searching the corresponding area for the characteristic points matched with the characteristic points in the first image.
And determining the matched feature points as corresponding feature points of the feature points in the first image in other images.
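The region-based matching performed by the extracting unit 504 can be sketched as a local search that minimizes the sum of squared differences between patches. The window sizes and function names below are assumptions of this example, not part of the embodiments; a production tracker would typically use a more robust method such as optical flow.

```python
import numpy as np


def track_feature(first_img, other_img, pt, patch=3, search=5):
    """Find, in other_img, the point matching the feature at `pt` in
    first_img by minimizing the sum of squared differences (SSD) of
    local patches over a small search window around `pt`."""
    y, x = pt
    p = first_img[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(float)
    best, best_pt = float("inf"), pt
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            q = other_img[yy - patch:yy + patch + 1,
                          xx - patch:xx + patch + 1].astype(float)
            if q.shape != p.shape:
                continue  # candidate window falls outside the image
            ssd = float(np.sum((p - q) ** 2))
            if ssd < best:
                best, best_pt = ssd, (yy, xx)
    return best_pt
```

The returned point is then taken as the corresponding feature point of the first-image feature point in the other image.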
A determining unit 506, configured to determine three-dimensional coordinates of the feature point in the three-dimensional space based on the coordinates of the feature point in the first image, the coordinates of the feature point in the other images, and the three-dimensional reconstruction algorithm.
The acquiring unit 502 is further configured to acquire a two-dimensional code template corresponding to the version of the two-dimensional code based on at least one frame of image of the multiple frames of images.
The acquiring unit 502 is specifically configured to:
Three detection patterns and their center points are detected in at least one frame of image in the multiple frames of images.
Two detection patterns located on the same side are selected from the three detection patterns.
The distance between the two detection patterns is calculated based on the center points of the two detection patterns.
And determining the version of the two-dimensional code according to the distance.
And acquiring a two-dimensional code template corresponding to the version of the two-dimensional code.
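The version determination from the detection-pattern distance can be sketched as follows. Per the QR code specification, a version-V symbol is (17 + 4V) modules wide and the detection (finder) pattern centers sit 3.5 modules in from the symbol edges, so the center-to-center distance between two same-side detection patterns is (10 + 4V) modules. The function below is an illustrative sketch assuming the module size in pixels has already been estimated.

```python
def estimate_version(center_dist_px, module_size_px):
    """Estimate the two-dimensional code version from the pixel distance
    between the center points of two same-side detection patterns.

    A version-V symbol is (17 + 4V) modules wide, and the detection
    pattern centers are 3.5 modules from the edges, so the center
    distance equals (10 + 4V) modules.
    """
    dist_modules = center_dist_px / module_size_px
    version = round((dist_modules - 10) / 4)
    return max(1, min(40, version))  # clamp to the valid version range


# Version 2 symbol (25 modules wide) at 4 px per module:
# center distance = (10 + 4*2) * 4 = 72 px.
print(estimate_version(72, 4))  # 2
```

The estimated version then selects the corresponding two-dimensional code template.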
The determining unit 506 is further configured to determine a feature position corresponding to the feature point from the two-dimensional code template acquired by the acquiring unit 502.
A transformation unit 508, configured to transform the two-dimensional code template based on the three-dimensional coordinates of the feature points determined by the determining unit 506, the coordinates of the feature positions in the two-dimensional code template, and the elastic registration algorithm, so that the two-dimensional code template generates the same deformation as the two-dimensional code in the first image.
The transformation unit 508 may specifically be configured to:
and determining a first transformation relation between the characteristic points and the characteristic positions based on the three-dimensional coordinates of the characteristic points and the coordinates of the characteristic positions in the two-dimensional code template.
And performing first transformation on the two-dimensional code template based on the first transformation relation.
And through an elastic registration algorithm, re-determining the feature positions corresponding to the feature points in the two-dimensional code template after the first transformation is performed, and determining a second transformation relation between the feature points and the re-determined feature positions.
And according to the second transformation relation, performing second transformation on the two-dimensional code template subjected to the first transformation.
The first transformation herein may refer to similarity transformation, including translation, rotation, and scaling. The second transformation may refer to a non-rigid transformation, including translation, rotation, scaling, and warping, among others.
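The specification does not pin down a concrete elastic registration algorithm for the second, non-rigid transformation; one common choice is the thin-plate spline (TPS), which interpolates the control points exactly while warping smoothly in between. The following is a minimal 2-D sketch under that assumption; the function names are illustrative only.

```python
import numpy as np


def tps_fit(src, dst):
    """Fit a 2-D thin-plate spline mapping src control points onto dst."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = src.shape[0]
    d = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    K = np.zeros_like(d)
    m = d > 0
    K[m] = d[m] ** 2 * np.log(d[m])         # TPS kernel U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), src])   # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    coeffs = np.linalg.solve(A, b)          # kernel weights + affine terms
    return src, coeffs


def tps_apply(model, pts):
    """Warp query points with a fitted thin-plate spline."""
    src, coeffs = model
    pts = np.asarray(pts, float)
    d = np.linalg.norm(pts[:, None] - src[None, :], axis=-1)
    K = np.zeros_like(d)
    m = d > 0
    K[m] = d[m] ** 2 * np.log(d[m])
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ coeffs[:len(src)] + P @ coeffs[len(src):]
```

In the scheme above, the feature points would serve as the control-point targets and the template feature positions as the sources, so that applying the fitted warp to the template reproduces the deformation of the imaged two-dimensional code.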
A projection unit 510, configured to project the two-dimensional code template transformed by the transformation unit 508 onto the first image, or project the first image onto the transformed two-dimensional code template.
A sampling unit 512, configured to sample the two-dimensional code template projected by the projection unit 510 to obtain a corrected two-dimensional code.
Optionally, the feature points may include a first type of feature point and a second type of feature point. The first type of feature points correspond to corner points of a target graph in the two-dimensional code, and the target graph includes one or more of the following: a detection pattern, a correction pattern, and a positioning pattern.
The determining unit 506 may specifically be configured to:
For the first type of feature points, the positions of the corner points of the corresponding target graph in the two-dimensional code template are determined, and these positions are determined as the corresponding feature positions.
For the second type of feature points, the distances between each such feature point and the feature positions in the two-dimensional code template are calculated, and the feature position corresponding to the minimum distance is determined as the corresponding feature position.
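The minimum-distance matching for the second type of feature points amounts to a nearest-neighbor lookup. A minimal sketch, with illustrative names and 2-D template coordinates assumed:

```python
import math


def nearest_feature_position(point, feature_positions):
    """Return the template feature position closest to `point`
    under the Euclidean distance."""
    return min(feature_positions, key=lambda fp: math.dist(point, fp))


positions = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
print(nearest_feature_position((3.2, 0.5), positions))  # (4.0, 0.0)
```

For large templates a spatial index (e.g. a k-d tree) would avoid the linear scan, but the linear form suffices to illustrate the step.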
The transformation unit 508 may be further specifically configured to:
and expanding the coordinates of the characteristic position in the two-dimensional code template into three-dimensional coordinates.
A first transformation relationship between the feature points and the feature positions is determined by minimizing the error between the three-dimensional coordinates of the feature points and the three-dimensional coordinates of the feature positions.
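Since the first transformation is a similarity transform (translation, rotation, and scaling), the minimization above can be sketched with a least-squares similarity fit such as Umeyama's SVD-based method. The specification does not name a solver; this sketch is an assumption of the example and requires point sets in general (non-degenerate) position.

```python
import numpy as np


def fit_similarity(src, dst):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) minimizing sum ||s * R @ src_i + t - dst_i||^2."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n, dim = src.shape
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / n                      # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(dim)
    D[-1, -1] = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    R = U @ D @ Vt                           # proper rotation (no reflection)
    var_s = (sc ** 2).sum() / n
    s = (S * np.diag(D)).sum() / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Here `dst` would hold the three-dimensional feature-point coordinates and `src` the template feature positions after expansion to three dimensions; applying the recovered (s, R, t) to the template realizes the first transformation.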
The functions of each functional module of the device in the above embodiments of the present description may be implemented through each step of the above method embodiments, and therefore, a specific working process of the device provided in one embodiment of the present description is not repeated herein.
In the two-dimensional code correction apparatus provided in an embodiment of the present specification, the acquiring unit 502 acquires multiple frames of images including the same two-dimensional code. The extracting unit 504 extracts a feature point from an arbitrary first image among the multiple frames of images, and tracks the feature point in other images among the multiple frames of images. The determining unit 506 determines the three-dimensional coordinates of the feature point in the three-dimensional space based on the coordinates of the feature point in the first image, the coordinates of the feature point in the other images, and the three-dimensional reconstruction algorithm. The acquiring unit 502 acquires a two-dimensional code template corresponding to the version of the two-dimensional code based on at least one of the multiple frames of images. The determining unit 506 determines the feature position corresponding to the feature point from the two-dimensional code template. The transformation unit 508 transforms the two-dimensional code template based on the three-dimensional coordinates of the feature points, the coordinates of the feature positions in the two-dimensional code template, and the elastic registration algorithm, so that the two-dimensional code template generates the same deformation as the two-dimensional code in the first image. The projection unit 510 projects the transformed two-dimensional code template onto the first image, or projects the first image onto the transformed two-dimensional code template. The sampling unit 512 samples the projected two-dimensional code template to obtain a corrected two-dimensional code. Therefore, two-dimensional codes with different deformations can be accurately corrected, and the apparatus has good universality.
The two-dimensional code correction device provided in an embodiment of the present specification may be a sub-module or a sub-unit of the correction module 204 in fig. 2.
Corresponding to the two-dimensional code correction method, an embodiment of the present specification further provides a two-dimensional code correction device, as shown in fig. 6. The device may include: a memory 602, one or more processors 604, and one or more programs, wherein the one or more programs are stored in the memory 602 and configured to be executed by the one or more processors 604, and when executed by the processors 604, the programs implement the following steps:
and acquiring multiple frames of images containing the same two-dimensional code.
And extracting the characteristic points from any first image in the multi-frame images, and tracking the characteristic points in other images in the multi-frame images.
And determining the three-dimensional coordinates of the feature points in the three-dimensional space based on the coordinates of the feature points in the first image, the coordinates of the feature points in other images and a three-dimensional reconstruction algorithm.
And acquiring a two-dimensional code template corresponding to the version of the two-dimensional code based on at least one frame of image in the multi-frame of images.
And determining the characteristic position corresponding to the characteristic point from the two-dimensional code template.
And transforming the two-dimensional code template based on the three-dimensional coordinates of the feature points, the coordinates of the feature positions in the two-dimensional code template and an elastic registration algorithm, so that the two-dimensional code template generates the same deformation as the two-dimensional code in the first image.
And projecting the transformed two-dimensional code template onto the first image or projecting the first image onto the transformed two-dimensional code template.
And sampling the projected two-dimensional code template to obtain a corrected two-dimensional code.
The two-dimensional code correction device provided in an embodiment of the present specification can accurately correct two-dimensional codes with different deformations, and therefore has good universality.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules that may be stored in RAM, flash memory, ROM, EPROM, EEPROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a server. Of course, the processor and the storage medium may also reside as discrete components in a server.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in this invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The foregoing further describes the objects, technical solutions, and advantages of the present specification in detail. It should be understood that the above are only specific embodiments of the present specification and are not intended to limit its scope; any modification, equivalent substitution, improvement, and the like made on the basis of the technical solutions of the present specification shall fall within the scope of the present specification.