Specific Embodiments
The application is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are intended only to explain the related invention and do not limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the accompanying drawings.
It should be noted that, in the absence of conflict, the embodiments in the application and the features in the embodiments may be combined with each other. The application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for generating information or of the apparatus for generating information of the application may be applied.
As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104 and a server 105. The network 104 serves as a medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages. Various communication client applications, such as web browser applications, shopping applications, search applications, instant messaging tools, email clients and social platform software, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers and the like.
The server 105 may be a server providing various services, for example, a backend web server providing support for the web pages displayed on the terminal devices 101, 102, 103. The backend web server may perform processing such as analysis on the received data and feed the processing result back to the terminal devices.
The server 105 may also be a backend information processing server that processes the images displayed on the terminal devices 101, 102, 103. The backend information processing server may perform processing such as corresponding-point matching on images received or acquired from different terminal devices, and feed the processing result (for example, matching result information) back to the terminal devices.
It should be noted that, in practice, the method for generating information provided by the embodiments of the application generally needs to be performed by an electronic device with relatively high performance, and the apparatus for generating information generally needs to be implemented in such an electronic device. Compared with terminal devices, servers usually have higher performance. Therefore, in general, the method for generating information provided by the embodiments of the application is performed by the server 105, and accordingly the apparatus for generating information is provided in the server 105. However, when the performance of a terminal device can satisfy the execution conditions of the method or the setting conditions of the apparatus, the method for generating information provided by the embodiments of the application may also be performed by the terminal devices 101, 102, 103, and the apparatus for generating information may also be provided in the terminal devices 101, 102, 103.
It should be understood that the numbers of terminal devices, networks and servers in Fig. 1 are merely illustrative. There may be any number of terminal devices, networks and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating information according to the application is shown. The method for generating information includes the following steps:
Step 201: match each first feature point in a first feature point set with each second feature point in a second feature point set to obtain a set of matching feature point pairs.
In this embodiment, the electronic device on which the method for generating information runs (for example, the server or terminal device shown in Fig. 1) may match each first feature point in the first feature point set with each second feature point in the second feature point set to obtain a set of matching feature point pairs. Here, a first feature point is a feature point of a target image region included in a first image, the target image region includes a target pixel point, and a second feature point is a feature point of a second image. The shape and size of the target image region may be preset. Illustratively, the target image region may be circular, for example a circle centered on the target pixel point (or on another pixel point in the first image) whose radius is 0.5 times (or another multiple of) the width of the first image; the target image region may also be rectangular, for example a square centered on the target pixel point (or on another pixel point in the first image) whose side length is 0.5 times (or another multiple of) the width of the first image. A feature point is a pixel point in an image that can be used to characterize the color and texture information of the image.
In practice, the electronic device may perform this step as follows:
First, the electronic device may extract, on the first image, SURF (Speeded-Up Robust Features) feature points of the target image region including the target pixel point to obtain a first SURF feature point set, and compute the feature vector of each first SURF feature point in the first SURF feature point set to obtain a first feature vector set.
Then, the electronic device may extract the SURF feature points of the second image to obtain a second SURF feature point set, and compute the feature vector of each second SURF feature point in the second SURF feature point set to obtain a second feature vector set.
Next, for each first SURF feature point (denoted as point A) in the first SURF feature point set, the electronic device may determine, in the second SURF feature point set, the second SURF feature point whose distance (for example, Euclidean distance or Manhattan distance) to point A is the smallest (denoted as point B1), and the second SURF feature point whose distance to point A is the second smallest (denoted as point B2). The distance between point A and point B1 is denoted as L1, and the distance between point A and point B2 is denoted as L2.
Then, the electronic device may compute the ratio of L1 to L2. If the ratio is less than a preset threshold, point B1 is determined as the matching feature point of point A, and the combination of point A and point B1 is determined as a matching feature point pair. Here, the threshold may be used to characterize the similarity between point A and point B1.
Finally, the electronic device may determine the matching feature point of each first SURF feature point, thereby obtaining the set of matching feature point pairs.
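The nearest-neighbor ratio test described in the steps above can be sketched as follows. This is a minimal pure-Python illustration operating on toy descriptor vectors with Euclidean distance; a real implementation would compute SURF descriptors with an image library, and the function name and default threshold are illustrative only.

```python
import math

def ratio_test_match(first_vecs, second_vecs, ratio_threshold=0.7):
    """For each first feature vector, find its nearest (L1) and second-
    nearest (L2) neighbors among the second feature vectors; keep the
    pair only if L1/L2 is below the preset threshold."""
    pairs = []
    for i, a in enumerate(first_vecs):
        dists = sorted(
            (math.dist(a, b), j) for j, b in enumerate(second_vecs)
        )
        (l1, j1), (l2, _) = dists[0], dists[1]
        if l2 > 0 and l1 / l2 < ratio_threshold:
            pairs.append((i, j1))  # (index of point A, index of point B1)
    return pairs
```

A small ratio threshold rejects ambiguous matches: if the best and second-best candidates are almost equally close, the match is discarded.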
As an example, refer to Fig. 3A' and Fig. 3A''. In Fig. 3A', the first image 301 includes a target image region 3010, and the target image region 3010 includes a target pixel point 3013. The server (i.e., the above electronic device) matches each first feature point (for example, the first feature points 3011, 3012, 3014) in the first feature point set (i.e., the set of feature points included in the target image region 3010) with each second feature point in the second feature point set. Here, the second feature points may be the feature points included in the second image 302 in Fig. 3A''. According to the above steps, it is determined that feature point 3021 is the matching feature point of feature point 3011, feature point 3022 is the matching feature point of feature point 3012, and feature point 3024 is the matching feature point of feature point 3014. On this basis, the electronic device obtains the set of matching feature point pairs.
Optionally, the electronic device may also directly determine the second SURF feature point with the greatest similarity to a first SURF feature point as the matching feature point of that first SURF feature point (without comparing the ratio of the smallest distance to the second smallest distance against a preset threshold). Here, the similarity may be characterized by, for example, standardized Euclidean distance or Hamming distance.
Step 202: determine the density with which the second feature points included in the set of matching feature point pairs are distributed in image regions of the second image, and determine the image region corresponding to the greatest density among the determined densities as a matching-feature-point dense region.
In this embodiment, based on the set of matching feature point pairs obtained in step 201, the electronic device may determine the density with which the second feature points included in the set are distributed in the image regions of the second image, and determine the image region corresponding to the greatest determined density as the matching-feature-point dense region. Here, the shape and size of the image regions of the second image may be preset. The density may be characterized by the number of second feature points included per unit area of an image region.
As an example, please continue to refer to Fig. 3A' and Fig. 3A''. The server moves a rectangular frame of the same size as the target image region 3010, as a target frame, over the second image 302, and determines the number of second feature points included in the image region of the second image framed by the target frame at the initial position and after each movement. Finally, the server determines the image region 3020 that includes the largest number of second feature points as the matching-feature-point dense region.
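The sliding-frame search in this example can be sketched as follows, under the assumption that feature points are given as (x, y) coordinates and the frame moves with a fixed step; the function and parameter names are illustrative.

```python
def densest_window(points, image_w, image_h, win_w, win_h, step=1):
    """Slide a win_w x win_h frame over the image and return the
    top-left corner of the window containing the most feature points,
    together with that count."""
    best_corner, best_count = (0, 0), -1
    for y in range(0, image_h - win_h + 1, step):
        for x in range(0, image_w - win_w + 1, step):
            count = sum(
                1 for (px, py) in points
                if x <= px < x + win_w and y <= py < y + win_h
            )
            if count > best_count:
                best_corner, best_count = (x, y), count
    return best_corner, best_count
```

Since the window area is fixed, maximizing the point count is equivalent to maximizing the density described in step 202.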
Step 203: determine the second feature points included in the matching-feature-point dense region.
In this embodiment, based on the matching-feature-point dense region determined in step 202, the electronic device may determine the second feature points included in the matching-feature-point dense region.
As an example, please refer to Fig. 3A' and Fig. 3A''. The server determines the second feature points 3022, 3023, 3024 included in the matching-feature-point dense region 3020.
Step 204: determine the set of those matching feature point pairs, among the set of matching feature point pairs, that include the determined second feature points, as a revised set of matching feature point pairs.
In this embodiment, the electronic device may determine, among the set of matching feature point pairs, the set of those matching feature point pairs that include the determined second feature points as the revised set of matching feature point pairs.
As an example, please refer to Fig. 3A' and Fig. 3A''. The server determines the set of those matching feature point pairs that include the second feature points 3022, 3023, 3024 as the revised set of matching feature point pairs.
Step 205: generate a first matching result of the target pixel point based on the revised set of matching feature point pairs and the target pixel point.
In this embodiment, based on the revised set of matching feature point pairs determined in step 204 and the target pixel point, the electronic device may generate the first matching result of the target pixel point. Here, the first matching result may be information on the position, in the second image, of the corresponding point (i.e., the matching point) of the target pixel point; it may also be information characterizing whether the second image includes the corresponding point of the target pixel point.
As an example, this step may be performed as follows:
First, the electronic device may determine the position, in the first image, of each first feature point in the revised set of matching feature point pairs (denoted as position set A), and the position, in the second image, of each second feature point in the revised set of matching feature point pairs (denoted as position set B).
Then, the electronic device may determine the position of the midpoint of the positions in position set A (hereinafter denoted as midpoint position A). For example, the abscissa of the midpoint position may be the average of the abscissas of the positions in position set A, and the ordinate of the midpoint position may be the average of the ordinates of the positions in position set A. Similarly, the electronic device may determine the position of the midpoint of the positions in position set B (hereinafter denoted as midpoint position B).
Next, the electronic device may determine the position of the target pixel point.
Finally, according to the relative position of the target pixel point with respect to midpoint position A, the electronic device may determine the pixel point on the second image having the same relative position with respect to midpoint position B as the corresponding point (i.e., the matching point) of the target pixel point. The electronic device may thereby generate the first matching result of the target pixel point. Here, the first matching result may be the position information of the corresponding point of the target pixel point.
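The midpoint-based localization above can be sketched as follows (a minimal illustration assuming positions are (x, y) coordinate pairs; the function name is illustrative):

```python
def corresponding_point(first_positions, second_positions, target):
    """Locate the match of `target` in the second image by preserving
    its offset from midpoint position A (centroid of the first feature
    points) relative to midpoint position B (centroid of the second
    feature points)."""
    ax = sum(x for x, _ in first_positions) / len(first_positions)
    ay = sum(y for _, y in first_positions) / len(first_positions)
    bx = sum(x for x, _ in second_positions) / len(second_positions)
    by = sum(y for _, y in second_positions) / len(second_positions)
    # Apply the target's offset from midpoint A at midpoint B.
    return (bx + (target[0] - ax), by + (target[1] - ay))
```

This assumes the two images are related by translation only; the optional feature-region variant below handles scale differences.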
Optionally, the electronic device may also determine the set of first feature points included in the revised set of matching feature point pairs as set A, and the set of second feature points included in the revised set of matching feature point pairs as set B. For each first feature point in set A, its neighborhood (an image region of preset shape and size) is taken as a feature region, with the stipulation that when the feature regions generated by multiple points overlap, their union is treated as a single feature region. In this way the feature region set Ra of set A is generated, and similarly the feature region set Rb of set B is generated. If the target pixel point lies in some feature region rA in the feature region set Ra, the corresponding point, in the feature region rB in Rb that matches rA, is taken as the position of the corresponding point of the sought target pixel point, and the first matching result of the target pixel point is generated on this basis.
Illustratively, please refer to Fig. 3B' and Fig. 3B''. The first image 303 includes a target pixel point 3031. According to the above steps, the server determines that the target pixel point 3031 lies in the feature region 3032 (i.e., the above feature region rA), and determines that the feature region 3042 in the second image 304 is the matching feature region of the feature region 3032. Moreover, the length of the feature region 3032 (rectangular in shape) is Wa and its width is Ha; the distance from the target pixel point 3031 to the length (one side) of the feature region 3032 is Ta, and the distance from the target pixel point 3031 to the width (another side) of the feature region 3032 is La. The server also determines that the length of the feature region 3042 (rectangular in shape) is Wb and its width is Hb. The server can thereby determine the position of the corresponding point of the target pixel point 3031 by determining the values of Lb and Tb, where Lb is the distance from the corresponding point to the length (one side) of the feature region 3042 and Tb is the distance from the corresponding point to the width (another side) of the feature region 3042. As an example, the server may determine the values of Lb and Tb by the following formulas:
Lb = Wb * La / Wa
Tb = Hb * Ta / Ha
The server can thereby generate the position information of the corresponding point 3041 of the target pixel point 3031. In the figure, the position information of the corresponding point 3041 is the first matching result.
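The proportional mapping of the two formulas can be sketched as follows, following the figure's labeling (Wa, Wb are the lengths of the two feature regions, Ha, Hb their widths); the function name is illustrative.

```python
def map_offsets(la, ta, wa, ha, wb, hb):
    """Proportionally map the target pixel's offsets (La, Ta) inside
    feature region rA (length Wa, width Ha) to the offsets (Lb, Tb) of
    the corresponding point inside the matching region rB (length Wb,
    width Hb)."""
    lb = wb * la / wa
    tb = hb * ta / ha
    return lb, tb
```

Scaling each offset by the corresponding side-length ratio keeps the relative position of the point inside the region unchanged even when rA and rB differ in size.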
It can be understood that, when the similarity between the corresponding point and the target pixel point is less than a preset similarity threshold (e.g., 0.9), the first matching result may be "no corresponding point exists", may be information characterizing the similarity between the corresponding point and the target pixel point, or may be other information.
The method provided by the above embodiment of the application matches each first feature point in the first feature point set with each second feature point in the second feature point set to obtain a set of matching feature point pairs; determines the density with which the second feature points included in the set of matching feature point pairs are distributed in the image regions of the second image, and determines the image region corresponding to the greatest determined density as the matching-feature-point dense region; determines the second feature points included in the matching-feature-point dense region; determines the set of those matching feature point pairs, among the set, that include the determined second feature points as the revised set of matching feature point pairs; and generates the first matching result of the target pixel point based on the revised set of matching feature point pairs and the target pixel point. This embodiment improves the accuracy of the determined matching result of the target pixel point.
In some optional implementations of this embodiment, the method further includes: taking each pixel point in the neighborhood of the target pixel point in the first image as a first seed pixel point and, using a region growing algorithm, determining the grown region of a first seed pixel point that satisfies preset screening conditions as a first region; taking each pixel point in the second image as a second seed pixel point and, using the region growing algorithm, determining the grown region of a second seed pixel point that satisfies the preset screening conditions as a second region; determining a second region that satisfies at least one of the following matching conditions as a second region matching the first region: the difference between the compactness of the second region and the compactness of the first region is less than a preset compactness threshold; the difference between the aspect ratio of the second region and the aspect ratio of the first region is less than a preset aspect ratio threshold; the similarity between the second region and the first region is greater than a preset first similarity threshold; determining the combination of the first region and the second region matching the first region as a matching region pair; and generating a second matching result of the target pixel point based on the matching region pair and the target pixel point.
Here, the neighborhood of the target pixel point in the first image is an image region that includes the target pixel point, and its shape and size are preset. Illustratively, the neighborhood may be a rectangular or square image region centered on the target pixel point. The second matching result may be information on the position, in the second image, of the corresponding point (i.e., the matching point) of the target pixel point; it may also be information characterizing whether the second image includes the corresponding point of the target pixel point. The preset screening conditions are conditions set in advance for screening the grown regions so as to obtain the first regions. It can be understood that the grown regions obtained with the region growing algorithm may include regions caused by noise (manifesting as too small an area) or large background or blank regions (manifesting as too large an area), and such regions are not helpful for matching. The above preset screening conditions may be used to reject such regions. Methods for determining the similarity between the first region and the second region include, but are not limited to, feature-point-based image similarity computation methods and histogram-based image similarity computation methods.
Illustratively, the compactness of an image region (including the first region and the second region) may be determined as follows: first, the product of the length (which may be characterized in pixels) and the width (which may be characterized in pixels) of the image region may be determined; then, the ratio of the actual number of pixels of the image region to the above product may be determined, and this ratio is determined as the compactness of the image region.
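This compactness measure can be sketched as follows, assuming a region is represented as a collection of (x, y) pixel coordinates and the length and width are taken from the region's bounding box; the function name is illustrative.

```python
def compactness(region_pixels):
    """Compactness of a region: the actual pixel count divided by the
    product of the bounding-box length and width (both in pixels)."""
    xs = [x for x, _ in region_pixels]
    ys = [y for _, y in region_pixels]
    box_area = (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
    return len(region_pixels) / box_area
```

A solid rectangle has compactness 1.0; sparse or ragged regions score lower, which is why the difference in compactness is a useful matching condition.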
It should be noted that the above region growing algorithm is one of the algorithms involved in image segmentation technology. The basic idea of the region growing algorithm is to aggregate pixel points with similar properties to form a region. In the embodiment of the application, the steps of the region growing algorithm may be performed as follows: first, each pixel point in the neighborhood in the first image is taken as a seed pixel point; then, the pixel points in the neighborhood around a seed pixel point that have the same or similar properties as the seed pixel point (for example, pixel points of the same color) are merged into the region where the seed pixel point is located; the newly merged pixel points may then continue to grow outward as seed pixel points, until no pixel points satisfying the condition remain to be included, thereby obtaining a grown region.
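The growing procedure just described can be sketched as a breadth-first flood fill, here using exact value equality as the "same color" criterion over 4-connected neighbors; a practical implementation would use a tolerance on color distance, and all names are illustrative.

```python
from collections import deque

def region_grow(image, seed):
    """Grow a region from `seed` (x, y) over a 2-D grid of pixel
    values, repeatedly merging 4-connected neighbors whose value equals
    the seed's value."""
    h, w = len(image), len(image[0])
    target = image[seed[1]][seed[0]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in region \
                    and image[ny][nx] == target:
                region.add((nx, ny))  # merged pixels become new seeds
                frontier.append((nx, ny))
    return region
```

The returned coordinate set is one grown region, which can then be screened and scored with the compactness and aspect-ratio conditions above.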
In practice, the above preset compactness threshold, preset aspect ratio threshold and preset first similarity threshold may be determined or adjusted according to the experience of the technician and/or with reference to the accuracy of the second matching result (for example, the accuracy of second matching results obtained from historical data).
As an example, please refer to Fig. 3C' and Fig. 3C''. The server first determines the position of the target pixel point 3111 on the first image 311, and then determines that the image region 3110 is the neighborhood of the target pixel point 3111. Next, the server takes each pixel point in the neighborhood 3110 as a first seed pixel point and, using the region growing algorithm, determines the grown region of a first seed pixel point that satisfies the preset screening conditions as a first region, finally obtaining the first region 3112. Similarly, the server also obtains the second regions in the second image 312, and determines, according to the above matching conditions, the second region 3122 that matches the first region 3112. On this basis, the server generates, according to the relative position of the first region 3112 and the target pixel point 3111 (for example, starting from the center of the first region 3112, moving up 10 pixels and then moving left 10 pixels reaches the position of the target pixel point 3111), the information on the position of the corresponding point 3121 of the target pixel point 3111 (for example, starting from the center of the second region 3122, moving up 10 pixels and then moving left 10 pixels reaches the position of the corresponding point 3121 of the target pixel point 3111), that is, generates the second matching result of the target pixel point.
Optionally, the electronic device may also generate the second matching result of the target pixel point according to the following steps:
Illustratively, please refer to Fig. 3D' and Fig. 3D''. The first image 313 includes a target pixel point 3131. Pixel point 3130 is a pixel point (for example, the center point of the first region, or another pixel point) in the first region determined by the server using the above steps; pixel point 3140 is a pixel point in the second region matching the first region, determined by the server in the second image 314 using the above steps. It should be noted that the relative position of pixel point 3140 in the second region is consistent with the relative position of pixel point 3130 in the first region. In the figure, Wa is the width of the first image 313 and Ha is the length of the first image 313; Wb is the width of the second image 314 and Hb is the length of the second image 314. As an example, the position of the target pixel point 3131 (for example, coordinates whose abscissa is the distance from the target pixel point 3131 to the length of the first image 313 and whose ordinate is the distance from the target pixel point 3131 to the width of the first image 313) may be denoted as (qx, qy); the position of pixel point 3130 (defined analogously with respect to the first image 313) may be denoted as (max, may); and the position of pixel point 3140 (defined analogously with respect to the second image 314) may be denoted as (mbx, mby). The server may thereby determine the position of the corresponding point 3141 of the target pixel point 3131 by the following formulas:
tx = mbx - Wb/Wa * (max - qx)
ty = mby - Hb/Ha * (may - qy)
Here, tx characterizes the distance from the corresponding point 3141 to the length (one side) of the second image 314, and ty characterizes the distance (as an ordinate) from the corresponding point 3141 to the width (another side) of the second image 314. The server thereby generates the second matching result of the target pixel point 3131, where the second matching result may be the information on the position of the corresponding point 3141 of the target pixel point 3131.
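The two formulas can be sketched as follows, with the target position q, anchor pixel ma in the first region and its counterpart mb in the matching second region given as coordinate pairs; the function name is illustrative.

```python
def scaled_corresponding_point(q, ma, mb, wa, ha, wb, hb):
    """Map the target pixel q=(qx, qy) in the first image (Wa x Ha) to
    its corresponding point (tx, ty) in the second image (Wb x Hb),
    given an anchor pixel ma in the first region and its counterpart mb
    in the matching second region, scaling the offset by the ratio of
    the image dimensions."""
    tx = mb[0] - wb / wa * (ma[0] - q[0])
    ty = mb[1] - hb / ha * (ma[1] - q[1])
    return tx, ty
```

Unlike the midpoint variant, this mapping accounts for the two images having different sizes, since the offset from the anchor is rescaled per axis.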
It can be understood that, when the second region matching the first region is scaled to the same size as the first region, the second matching result may be generated by computing the similarity between the first region and the second region and comparing the computed similarity against a preset similarity threshold (e.g., 0.99). For example, when the computed similarity is less than the preset similarity threshold, the second matching result may be "no corresponding point exists".
It should be noted that comparing the second matching result with the first matching result helps to generate a more accurate matching result of the target pixel point.
In some optional implementations of this embodiment, the preset screening conditions include at least one of the following: the product of a first preset distance value, the height of the first image and the width of the first image is less than the number of pixels of the grown region; the width of the grown region is less than the product of the width of the first image and a second preset distance value; the height of the grown region is less than the product of the height of the first image and the second preset distance value.
Here, the grown region may be a rectangular region or a circular region. The first preset distance value and the second preset distance value may be set in advance, and are values used to characterize distances related to sub-images in the first image (for example, a sub-image obtained by a matting technique, or a control in the image presented when a page is displayed on a device). In practice, the first preset distance value and the second preset distance value may be set empirically by a technician. For example, the first preset distance value may be 0.01, and the second preset distance value may be 0.3.
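The screening conditions can be sketched as a predicate like the following. Note that the text allows any one of the conditions to be used; this sketch applies all three together with the example values 0.01 and 0.3, and all names are illustrative.

```python
def passes_screening(region_w, region_h, region_pixel_count,
                     image_w, image_h, d1=0.01, d2=0.3):
    """Screen a grown region: reject regions that are too small (pixel
    count not above d1 * image area, e.g. noise) or too large (width or
    height not below d2 times the image dimension, e.g. background)."""
    big_enough = region_pixel_count > d1 * image_w * image_h
    not_too_wide = region_w < image_w * d2
    not_too_tall = region_h < image_h * d2
    return big_enough and not_too_wide and not_too_tall
```

Regions passing the predicate become candidate first regions for the region-pair matching step.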
It should be noted that, by setting the above preset screening conditions in advance, the grown regions can be screened to obtain the first regions, which helps to generate a more accurate matching result of the target pixel point.
In some optional implementations of this embodiment, a first vocabulary set is presented in the neighborhood of the target pixel point, and a second vocabulary set is presented in the second image; and the method further includes: for each first word in the first vocabulary set, determining a second word matching the first word in the second vocabulary set, and determining the combination of the first word and the second word matching the first word as a matching word pair; and generating a third matching result of the target pixel point based on the matching word pairs and the target pixel point.
Here, the first words and the second words may be words on which operations such as copying can be performed directly, or words blended into the image (for example, words that cannot be copied directly). A second word matching a first word may include, but is not limited to: a second word whose presented color is consistent with that of the first word; a second word whose presented font size is consistent with that of the first word; a second word whose presented font is consistent with that of the first word.
As an example, the above steps may be performed as follows:
Please refer to Fig. 3E' and Fig. 3E''. First, the electronic device may identify, through OCR (Optical Character Recognition) technology, the word information (the first vocabulary set and the second vocabulary set) of the neighborhood 3210 of the target pixel point 3211 on the first image 321 and of the whole region of the second image 322, and determine the coordinate positions of the words.
Then, the electronic device may segment the first vocabulary set and the second vocabulary set into words. For example, segmentation may be performed according to spacing: characters whose spacing is less than a preset spacing threshold are considered to belong to the same word, and otherwise to different words. In the figure, the first vocabulary set includes "hello"; the second vocabulary set includes two separate words, both reading "hello".
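The spacing-based segmentation can be sketched as follows, assuming OCR yields one line of character boxes as (x_left, x_right, character) tuples; real OCR output is richer, and the names here are illustrative.

```python
def segment_words(char_boxes, gap_threshold):
    """Group OCR character boxes (x_left, x_right, char) on one line
    into words: consecutive characters whose horizontal gap is below
    the threshold belong to the same word."""
    boxes = sorted(char_boxes)
    words, current = [], boxes[0][2]
    for prev, cur in zip(boxes, boxes[1:]):
        if cur[0] - prev[1] < gap_threshold:
            current += cur[2]   # small gap: same word
        else:
            words.append(current)  # large gap: start a new word
            current = cur[2]
    words.append(current)
    return words
```

The gap threshold plays the role of the preset spacing threshold in the text; a full implementation would also segment vertically into lines first.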
Next, for each first word in the first vocabulary set, the electronic device may determine, in the second vocabulary set, a second word matching the first word (for example, a word whose color, size and font size are consistent with those of the first word), and determine the combination of the first word and the second word matching the first word as a matching word pair.
Finally, the electronic device may generate the third matching result of the target pixel point 3211 according to the positions of the first word and the second word of a matching word pair and the position of the target pixel point 3211, obtaining the position of the corresponding point 3221 of the target pixel point 3211 (as shown in Fig. 3E' and Fig. 3E'').
It is appreciated that when there is no during the second vocabulary with the first terminology match, third matching result can be " can notWith " etc. information.
It should be noted that the above-mentioned electronic device may denote the coordinates of the position of the first word (for example, its center point) as (ma_x, ma_y), where the abscissa is the distance from that position to the length side of the first image and the ordinate is the distance from that position to the width side of the first image; denote the coordinates of the position of the target pixel (where the abscissa is the distance from the target pixel to the length side of the first image and the ordinate is the distance from the target pixel to the width side of the first image) as (q_x, q_y); and denote the coordinates of the position of the second word matching the first word (where the abscissa is the distance from that position to the length side of the second image and the ordinate is the distance from that position to the width side of the second image) as (mb_x, mb_y). The server may then determine the position of the corresponding point of the target pixel using the following formulas:
t_x = mb_x - (W_b / W_a) * (ma_x - q_x)
t_y = mb_y - (H_b / H_a) * (ma_y - q_y)
Here, W_a is the width of the first image and H_a is its length; W_b is the width of the second image and H_b is its length. t_x characterizes the distance from the corresponding point to the length side of the second image, and t_y characterizes the distance from the corresponding point to the width side of the second image. The server thereby generates the third matching result of the target pixel, where the third matching result is the information on the position of the corresponding point of the target pixel.
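Under these definitions, the mapping can be sketched as follows; the function name and argument layout are illustrative assumptions:

```python
def corresponding_point(ma, q, mb, size_a, size_b):
    """Map a target pixel from the first image to the second image.

    ma: (ma_x, ma_y) position of the first word in the first image
    q:  (q_x, q_y)   position of the target pixel in the first image
    mb: (mb_x, mb_y) position of the matched second word in the second image
    size_a: (W_a, H_a) width and length of the first image
    size_b: (W_b, H_b) width and length of the second image
    """
    ma_x, ma_y = ma
    q_x, q_y = q
    mb_x, mb_y = mb
    W_a, H_a = size_a
    W_b, H_b = size_b
    t_x = mb_x - W_b / W_a * (ma_x - q_x)
    t_y = mb_y - H_b / H_a * (ma_y - q_y)
    return t_x, t_y

# Second image twice as large: an offset of 10 px scales to 20 px.
print(corresponding_point((110, 60), (100, 50), (220, 120), (200, 100), (400, 200)))
# → (200.0, 100.0)
```

The word's offset from the target pixel is rescaled by the ratio of the two image sizes and subtracted from the matched word's position in the second image.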
It should be noted that, when the first image and the second image both contain words, determining in the second word set a second word matching the first word and then generating the third matching result helps to generate a more accurate matching result for the target pixel.
In some optional implementations of this embodiment, determining the second word matching the first word includes: determining the four-corner code of the first word; determining the similarity between each second word in the second word set and the first word; and determining, as the second word matching the first word, a second word whose four-corner code is identical to that of the first word and/or whose similarity is greatest. Here, the similarity may be determined by means such as the Euclidean distance between feature vectors of feature points of the images.
It should be noted that, owing to factors such as the low resolution of electronic devices and small fonts, the recognition results of OCR techniques have a certain error rate. When recognition errs, there is still a high probability that the four-corner code of the recognition result remains consistent with that of the original character. Therefore, using the four-corner code of a character (for example, a Chinese character) as the matching basis can greatly improve matching accuracy.
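A minimal sketch of this matching rule follows. The four-corner codes in the lookup table are illustrative placeholders (a real implementation would use a full four-corner dictionary), and all names are assumptions:

```python
# A tiny lookup table standing in for a full four-corner code dictionary;
# the code values here are placeholders, not authoritative codes.
FOUR_CORNER = {"你": "27210", "伱": "27210", "好": "47440"}

def best_match(first_word, candidates, similarity):
    """Prefer candidates whose four-corner code equals the first word's;
    among those (or all candidates if none share the code), pick the
    one with the highest similarity."""
    code = FOUR_CORNER.get(first_word)
    same_code = [c for c in candidates if FOUR_CORNER.get(c) == code]
    pool = same_code if same_code else candidates
    return max(pool, key=lambda c: similarity(first_word, c))

sim = lambda a, b: 1.0 if a == b else 0.5
# "伱" shares the code with "你", so it wins despite lower similarity.
print(best_match("你", ["好", "伱"], sim))  # → 伱
```

This reflects the rationale above: even when OCR misreads "你" as the visually similar "伱", the shared four-corner code still anchors the match.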
In some optional implementations of this embodiment, the above method further includes: performing a template-matching operation on the second image using the neighborhood of the target pixel, determining the similarity between image regions of the second image and the neighborhood, and determining the image region of the second image with the greatest similarity among the determined similarities as the matching image region; determining a selected pixel in the neighborhood and determining, in the matching image region, the matched pixel of the selected pixel; and generating the fourth matching result of the target pixel based on the selected pixel, the matched pixel and the target pixel.
Here, the selected pixel may be any pixel in the neighborhood, and the matched pixel may be the pixel in the matching image region that corresponds to the selected pixel. Illustratively, when the neighborhood is a rectangular region, the selected pixel may be the center point of the neighborhood, and the matched pixel may be the center point of the matching image region.
Here, the template-matching operation is a well-known operation widely studied by those skilled in the field of image processing and will not be described again.
It can be understood that the fourth matching result may be generated by computing the similarity between an image region of the second image and the neighborhood and comparing the computed similarity with a preset similarity threshold (for example, 0.99). For example, when the computed similarity is less than the preset similarity threshold, the fourth matching result may be "no corresponding point exists".
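One way to realize the template-matching operation with a similarity threshold is normalized cross-correlation, sketched below. This is an unoptimized illustration under assumed names, not the embodiment's specific implementation:

```python
import numpy as np

def match_template(image, template, threshold=0.99):
    """Slide the template over the image, score each window by normalized
    cross-correlation, and return the best window's top-left corner,
    or None when the best score falls below the threshold."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -1.0, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc * wc).sum()) * t_norm
            if denom == 0:
                continue  # flat window, no meaningful correlation
            score = (wc * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos if best_score >= threshold else None

img = np.zeros((6, 6))
img[2:4, 3:5] = [[1, 2], [3, 4]]
print(match_template(img, np.array([[1.0, 2], [3, 4]])))  # → (2, 3)
```

Production code would typically use an optimized routine such as OpenCV's matchTemplate rather than this nested loop.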
As an example, refer to Fig. 3F' and Fig. 3F". In the illustration, the first image 331 contains the target pixel 3311. The server determines the image region 3310 as the neighborhood of the target pixel 3311. The server then performs the template-matching operation on the second image, determines the similarity between each image region of the second image 332 and the neighborhood 3310, and determines the image region 3320 of the second image with the greatest similarity among the determined similarities as the matching image region. Afterwards, the server determines the selected pixel 3312 in the neighborhood and determines, in the matching image region, the matched pixel 3322 of the selected pixel 3312. Finally, based on the selected pixel 3312, the matched pixel 3322 and the target pixel 3311, the server generates the position information of the corresponding point 3321 of the target pixel 3311. The position information of the corresponding point 3321 is the fourth matching result.
It can be understood that the above-mentioned electronic device may denote the coordinates of the position of the selected pixel as (ma_x, ma_y), where the abscissa is the distance from that position to the length side of the first image and the ordinate is the distance from that position to the width side of the first image; denote the coordinates of the position of the target pixel (where the abscissa is the distance from the target pixel to the length side of the first image and the ordinate is the distance from the target pixel to the width side of the first image) as (q_x, q_y); and denote the coordinates of the position of the matched pixel of the selected pixel (where the abscissa is the distance from that position to the length side of the second image and the ordinate is the distance from that position to the width side of the second image) as (mb_x, mb_y). The server may then determine the position of the corresponding point of the target pixel using the following formulas:
t_x = mb_x - (W_b / W_a) * (ma_x - q_x)
t_y = mb_y - (H_b / H_a) * (ma_y - q_y)
Here, W_a is the width of the first image and H_a is its length; W_b is the width of the second image and H_b is its length. t_x characterizes the distance from the corresponding point to the length side of the second image, and t_y characterizes the distance from the corresponding point to the width side of the second image. The server thereby generates the fourth matching result of the target pixel (the information on the position of the corresponding point of the target pixel).
It should be noted that, through the above template-matching method, comparing the fourth matching result with matching results such as the first matching result helps to determine the position of the corresponding point of the target pixel more accurately.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for generating information is illustrated. The flow 400 of the method for generating information includes the following steps:
Step 401: matching each first feature point in the first feature point set with each second feature point in the second feature point set to obtain a set of matching feature point pairs.
In the present embodiment, step 401 is substantially the same as step 201 in the embodiment corresponding to Fig. 2 and will not be described again here.
It should be noted that in the present embodiment, the first feature points are feature points of the target image region contained in the first image, the target image region contains the target pixel, and the second feature points are feature points of the second image. The first image is the image displayed by a first electronic device when a target webpage is presented on the first electronic device, and the second image is the image displayed by a second electronic device when the target webpage is presented on the second electronic device.
Step 402: determining the density with which the second feature points contained in the set of matching feature point pairs are distributed in image regions of the second image, and determining the image region corresponding to the greatest density among the determined densities as the matching feature point dense region.
In the present embodiment, step 402 is substantially the same as step 202 in the embodiment corresponding to Fig. 2 and will not be described again here.
Step 403: determining the second feature points contained in the matching feature point dense region.
In the present embodiment, step 403 is substantially the same as step 203 in the embodiment corresponding to Fig. 2 and will not be described again here.
Step 404: determining, as the revised set of matching feature point pairs, the matching feature point pairs in the set that contain the determined second feature points.
In the present embodiment, step 404 is substantially the same as step 204 in the embodiment corresponding to Fig. 2 and will not be described again here.
Step 405: generating the first matching result of the target pixel based on the revised set of matching feature point pairs and the target pixel.
In the present embodiment, step 405 is substantially the same as step 205 in the embodiment corresponding to Fig. 2 and will not be described again here.
Step 406: generating the final matching result based on the generated matching results.
In the present embodiment, the above-mentioned electronic device may also generate the final matching result based on the obtained matching results. Here, the matching results include the above-mentioned first matching result and at least one of the following: the second matching result, the third matching result and the fourth matching result, depending on which of these has been generated. It can be understood that, before this step is performed, if only the first matching result has been generated (and the second, third and fourth matching results have not), the generated matching results include only the first matching result (and not the second, third or fourth matching result); if only the first and fourth matching results have been generated (and the second and third matching results have not), the generated matching results include only the first and fourth matching results (and not the second or third matching result).
It can be understood that there are many implementations of generating the final matching result based on the generated matching results. Illustratively, when the generated matching results include the first, second, third and fourth matching results, and the first matching result is "corresponding point coordinates (100, 100)", the second matching result is "corresponding point coordinates (101, 101)", the third matching result is "corresponding point coordinates (100, 100)" and the fourth matching result is "corresponding point coordinates (99, 99)", the above-mentioned electronic device may determine the matching result that occurs most often as the final matching result (in this case, the final matching result may be "corresponding point coordinates (100, 100)"); it may also determine the integration of the generated matching results as the final matching result (in this case, the final matching result may be "corresponding point coordinates (100, 100), (101, 101), (99, 99), (100, 100)").
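The occurrence-count strategy described above can be sketched as a simple vote; the function name and coordinate-tuple representation are assumptions:

```python
from collections import Counter

def final_result(results):
    """Return the coordinates that occur most often among the generated
    matching results (ties broken by first appearance)."""
    return Counter(results).most_common(1)[0][0]

votes = [(100, 100), (101, 101), (100, 100), (99, 99)]
print(final_result(votes))  # → (100, 100)
```

The alternative "integration" strategy would simply return the full list of generated coordinates instead of a single winner.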
Step 407: performing a compatibility test on the website involved in the target webpage based on the final matching result.
In the present embodiment, the above-mentioned electronic device may also perform a compatibility test on the website involved in the target webpage based on the final matching result. The compatibility test may include, but is not limited to: browser compatibility testing, screen size and resolution compatibility testing, operating system compatibility testing and device model compatibility testing.
Illustratively, after the corresponding point in the second image of the target pixel of the first image has been determined from the final matching result, the above-mentioned electronic device can operate the above-mentioned first electronic device and second electronic device synchronously. For example, it can click an input box in the first image and enter text while entering the same text in the same input box in the second image, and further determine whether the display of that text is abnormal on the first electronic device and the second electronic device.
It can be understood that, when the final matching result characterizes that no corresponding point of the target pixel exists in the second image, the above-mentioned website may have a compatibility issue.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for generating information in the present embodiment highlights the steps of generating the final matching result based on the obtained matching results and performing a compatibility test on the website. The scheme described in the present embodiment can thus introduce more matching schemes, further improving the accuracy of determining the matching result of the target pixel and helping to improve the efficiency of website compatibility testing.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the application provides an embodiment of an apparatus for generating information. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied to various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating information of the present embodiment includes: a matching unit 501, a first determination unit 502, a second determination unit 503, a third determination unit 504 and a first generation unit 505. The matching unit 501 is configured to match each first feature point in the first feature point set with each second feature point in the second feature point set to obtain a set of matching feature point pairs, where the first feature points are feature points of the target image region contained in the first image, the target image region contains the target pixel, and the second feature points are feature points of the second image. The first determination unit 502 is configured to determine the density with which the second feature points contained in the set of matching feature point pairs are distributed in image regions of the second image, and to determine the image region corresponding to the greatest density among the determined densities as the matching feature point dense region. The second determination unit 503 is configured to determine the second feature points contained in the matching feature point dense region. The third determination unit 504 is configured to determine, as the revised set of matching feature point pairs, the matching feature point pairs in the set that contain the determined second feature points. The first generation unit 505 is configured to generate the first matching result of the target pixel based on the revised set of matching feature point pairs and the target pixel.
In the present embodiment, the matching unit 501 of the apparatus 500 for generating information may match each first feature point in the first feature point set with each second feature point in the second feature point set to obtain the set of matching feature point pairs. Here, the first feature points are feature points of the target image region contained in the first image, the target image region contains the target pixel, and the second feature points are feature points of the second image. The shape and size of the above-mentioned target image region may be preset. Illustratively, the shape of the target image region may be circular, for example a circle centered on the target pixel (or another pixel in the first image) with a radius of 0.5 times (or another multiple of) the width of the first image; the shape of the target image region may also be rectangular, for example a square centered on the target pixel (or another pixel in the first image) with a side length of 0.5 times (or another multiple of) the width of the first image.
In the present embodiment, based on the set of matching feature point pairs obtained by the matching unit 501, the above-mentioned first determination unit 502 may determine the density with which the second feature points contained in the set are distributed in image regions of the second image, and determine the image region corresponding to the greatest density among the determined densities as the matching feature point dense region. Here, the shape and size of the image regions of the second image may be preset. The density may be characterized as the number of second feature points contained per unit area of an image region.
In the present embodiment, based on the matching feature point dense region obtained by the first determination unit 502, the above-mentioned second determination unit 503 may determine the second feature points contained in the matching feature point dense region.
In the present embodiment, based on the second feature points determined by the second determination unit 503, the above-mentioned third determination unit 504 may determine, as the revised set of matching feature point pairs, the matching feature point pairs in the set that contain the determined second feature points.
In the present embodiment, based on the revised set of matching feature point pairs obtained by the third determination unit 504 and the target pixel, the above-mentioned first generation unit 505 may generate the first matching result of the target pixel.
In some optional implementations of this embodiment, the apparatus further includes: a fourth determination unit (not shown) configured to take each pixel in the neighborhood of the target pixel in the first image as a first seed pixel and, using a region growing algorithm, determine as first regions the grown regions of the first seed pixels that meet preset screening conditions; a fifth determination unit (not shown) configured to take each pixel in the second image as a second seed pixel and, using a region growing algorithm, determine as second regions the grown regions of the second seed pixels that meet the preset screening conditions; a sixth determination unit (not shown) configured to determine, as a second region matching a first region, a second region that meets at least one of the following matching conditions: the difference between the compactness of the second region and that of the first region is less than a preset compactness threshold; the difference between the aspect ratio of the second region and that of the first region is less than a preset aspect ratio threshold; the similarity between the second region and the first region is greater than a preset first similarity threshold; a seventh determination unit configured to determine the combination of a first region and the second region matching it as a matching region pair; and a second generation unit configured to generate the second matching result of the target pixel based on the matching region pairs and the target pixel.
Here, the neighborhood of the target pixel in the first image is an image region containing the target pixel, and its shape and size are preset. Illustratively, the above-mentioned neighborhood may be a rectangular or square image region centered on the target pixel. The second matching result may be information on the position of the corresponding point (i.e., matched point) of the target pixel in the second image, or may be information characterizing whether the second image contains a corresponding point of the target pixel. The preset screening conditions are conditions, set in advance, for screening grown regions so as to obtain the first regions. It can be understood that the grown regions obtained by the region growing algorithm may include regions produced by noise (manifesting as too small an area) or by large background or blank areas (manifesting as too large an area), and such regions are not helpful for shape matching. The above-mentioned preset screening conditions serve to reject these regions.
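The matching conditions of the sixth determination unit can be sketched as a predicate over grown regions. The (width, height, pixel_count) representation and all threshold values are illustrative assumptions:

```python
def regions_match(first, second, compact_thr=0.1, aspect_thr=0.2,
                  similarity=0.0, sim_thr=0.9):
    """Decide whether a second grown region matches a first region by
    at least one of: close compactness, close aspect ratio, or high
    similarity. Regions are (width, height, pixel_count) tuples."""
    def compactness(r):
        w, h, pixels = r
        return pixels / (w * h)  # filled fraction of the bounding box

    def aspect(r):
        w, h, _ = r
        return w / h

    return (abs(compactness(second) - compactness(first)) < compact_thr
            or abs(aspect(second) - aspect(first)) < aspect_thr
            or similarity > sim_thr)

# Two regions of nearly identical shape match on compactness alone.
print(regions_match((40, 20, 600), (42, 21, 640)))  # → True
```

Since the conditions are disjunctive, a region matching on any single criterion is accepted; a stricter variant could require all three.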
In some optional implementations of this embodiment, the preset screening conditions include at least one of the following: the product of a first preset ratio, the height of the first image and the width of the first image is less than the number of pixels of the grown region; the width of the grown region is less than the product of the width of the first image and a second preset ratio; the height of the grown region is less than the product of the height of the first image and the second preset ratio.
Here, a grown region may be a rectangular region. The first preset ratio and the second preset ratio may be preset values for characterizing the spacing between sub-images in the first image (for example, the images of the controls of the page presented on the device). In practice, the first preset ratio and the second preset ratio may be set empirically by a technician. For example, the first preset ratio may be 0.01 and the second preset ratio may be 0.3.
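A sketch of the screening conditions using the example values 0.01 and 0.3 follows; all three checks are combined here for illustration, although the text allows any subset, and the function name is an assumption:

```python
def passes_screening(region_w, region_h, region_pixels,
                     image_w, image_h, ratio1=0.01, ratio2=0.3):
    """Screen a grown region: reject noise (too few pixels) and large
    background areas (too wide or too tall relative to the first image).
    The default ratios follow the example values in the text."""
    big_enough = region_pixels > ratio1 * image_w * image_h
    narrow_enough = region_w < ratio2 * image_w
    short_enough = region_h < ratio2 * image_h
    return big_enough and narrow_enough and short_enough

# A 100x80 region with 6000 pixels inside an 800x600 image passes;
# a 3x3 speck with 9 pixels is rejected as noise.
print(passes_screening(100, 80, 6000, 800, 600))  # → True
print(passes_screening(3, 3, 9, 800, 600))        # → False
```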
In some optional implementations of this embodiment, a first word set is presented in the neighborhood of the target pixel and a second word set is presented in the second image; and the apparatus further includes: an eighth determination unit (not shown) configured, for each first word in the first word set, to determine in the second word set a second word matching that first word and to determine the combination of the first word and its matched second word as a matching word pair; and a third generation unit (not shown) configured to generate the third matching result of the target pixel based on the matching word pairs and the target pixel.
Here, the above-mentioned first word and second word may be words on which operations such as copying can be performed directly, or may be words blended into the image (for example, words that cannot be copied directly). A second word matching a first word may include, but is not limited to: a second word presented in a color consistent with that of the first word; a second word presented in a font size consistent with that of the first word; a second word presented in a font consistent with that of the first word.
In some optional implementations of this embodiment, the eighth determination unit includes: a first determining module (not shown) configured to determine the four-corner code of the first word; a second determining module (not shown) configured to determine the similarity between each second word in the second word set and the first word; and a third determining module (not shown) configured to determine, as the second word matching the first word, a second word whose four-corner code is identical to that of the first word and/or whose similarity is greatest.
It should be noted that, owing to factors such as the low resolution of electronic devices and small fonts, the recognition results of OCR techniques have a certain error rate. When recognition errs, there is still a high probability that the four-corner code of the recognition result remains consistent with that of the original character. Therefore, using the four-corner code of a character (for example, a Chinese character) as the matching basis can greatly improve the recognition rate.
In some optional implementations of this embodiment, the apparatus further includes: a ninth determination unit (not shown) configured to perform a template-matching operation on the second image using the neighborhood of the target pixel, determine the similarity between image regions of the second image and the neighborhood, and determine the image region of the second image with the greatest similarity among the determined similarities as the matching image region; a tenth determination unit (not shown) configured to determine the selected pixel in the neighborhood and to determine, in the matching image region, the matched pixel of the selected pixel; and a fourth generation unit (not shown) configured to generate the fourth matching result of the target pixel based on the selected pixel, the matched pixel and the target pixel.
Here, the selected pixel may be any pixel in the neighborhood, and the matched pixel may be the pixel in the matching image region that corresponds to the selected pixel. Illustratively, when the neighborhood is a rectangular region, the selected pixel may be the center point of the neighborhood, and the matched pixel may be the center point of the matching image region.
In some optional implementations of this embodiment, the apparatus further includes: a fifth generation unit (not shown) configured to generate the final matching result based on the generated matching results.
The final matching result is generated based on the obtained matching results. Here, the matching results include the above-mentioned first matching result and at least one of the following: the second matching result, the third matching result and the fourth matching result, depending on which of these has been generated. It can be understood that, when this step is performed, if only the first matching result has been generated (and the second, third and fourth matching results have not), the generated matching results include only the first matching result (and not the second, third or fourth matching result); if only the first and fourth matching results have been generated (and the second and third matching results have not), the generated matching results include only the first and fourth matching results (and not the second or third matching result).
In some optional implementations of this embodiment, the first image is the image displayed by a first electronic device when the target webpage is presented on the first electronic device, and the second image is the image displayed by a second electronic device when the target webpage is presented on the second electronic device.
In some optional implementations of this embodiment, the apparatus further includes: a testing unit (not shown) configured to perform a compatibility test on the website involved in the target webpage based on the final matching result.
Here, the above-mentioned compatibility test may include, but is not limited to: browser compatibility testing (testing whether the program runs normally and its functions are usable on different browsers), screen size and resolution compatibility testing (testing whether the program displays normally at different resolutions), operating system compatibility testing (testing whether the program runs normally under different operating systems, whether its functions are usable, whether it displays correctly, etc.), and device model compatibility testing (for example, whether the program runs normally on mainstream devices and whether it crashes).
In the apparatus provided by the above embodiment of the application, the matching unit 501 matches each first feature point in the first feature point set with each second feature point in the second feature point set to obtain the set of matching feature point pairs; the first determination unit 502 then determines the density with which the second feature points contained in the set are distributed in image regions of the second image and determines the image region corresponding to the greatest density among the determined densities as the matching feature point dense region; the second determination unit 503 then determines the second feature points contained in the matching feature point dense region; the third determination unit 504 then determines, as the revised set of matching feature point pairs, the matching feature point pairs in the set that contain the determined second feature points; finally, the first generation unit 505 generates the first matching result of the target pixel based on the revised set of matching feature point pairs and the target pixel, thereby improving the accuracy of determining the matching result of the target pixel.
Below with reference to Fig. 6, it illustrates suitable for being used for realizing the computer system 600 of the electronic equipment of the embodiment of the present applicationStructure diagram.Electronic equipment shown in Fig. 6 is only an example, to the function of the embodiment of the present application and should not use modelShroud carrys out any restrictions.
As shown in fig. 6, computer system 600 includes central processing unit (CPU) 601, it can be read-only according to being stored inProgram in memory (ROM) 602 or be loaded into program in random access storage device (RAM) 603 from storage section 608 andPerform various appropriate actions and processing.In RAM 603, also it is stored with system 600 and operates required various programs and data.CPU 601, ROM 602 and RAM 603 are connected with each other by bus 604.Input/output (I/O) interface 605 is also connected to alwaysLine 604.
I/O interfaces 605 are connected to lower component:Importation 606 including keyboard, mouse etc.;It is penetrated including such as cathodeThe output par, c 607 of spool (CRT), liquid crystal display (LCD) etc. and loud speaker etc.;Storage section 608 including hard disk etc.;And the communications portion 609 of the network interface card including LAN card, modem etc..Communications portion 609 via such as becauseThe network of spy's net performs communication process.Driver 610 is also according to needing to be connected to I/O interfaces 605.Detachable media 611, such asDisk, CD, magneto-optic disk, semiconductor memory etc. are mounted on driver 610, as needed in order to be read from thereonComputer program be mounted into storage section 608 as needed.
In particular, in accordance with embodiments of the present disclosure, the process described above with reference to the flow chart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communications section 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-mentioned functions defined in the methods of the present application are performed.
It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more conductors, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, electric wire, optical cable, RF, etc., or any appropriate combination of the above.
The flow charts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each box in the flow charts or block diagrams may represent a module, a program segment, or a portion of code, and the module, program segment, or portion of code contains one or more executable instructions for implementing the specified logic function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the accompanying drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flow charts, and combinations of boxes in the block diagrams and/or flow charts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be realized by means of software or by means of hardware. The described units may also be arranged in a processor; for example, they may be described as: a processor including a matching unit, a first determination unit, a second determination unit, a third determination unit, and a generation unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the first generation unit may also be described as "a unit for generating a first matching result of a target pixel".
As another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiment, or may exist separately without being assembled into the electronic device. The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, they cause the electronic device to: match each first feature point in a first feature point set with each second feature point in a second feature point set to obtain a set of matching feature point pairs, wherein the first feature points are feature points of a target image region included in a first image, the target image region includes a target pixel, and the second feature points are feature points of a second image; determine the distribution density of the second feature points contained in the matching feature point pairs within image regions of the second image, and determine the image region corresponding to the maximum density among the determined densities as the matching-feature-point dense region; determine the second feature points contained in the matching-feature-point dense region; determine, from the set of matching feature point pairs, the matching feature point pairs containing the determined second feature points as the revised set of matching feature point pairs; and generate a first matching result of the target pixel based on the revised set of matching feature point pairs and the target pixel.
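The first and last of the steps above — matching the two feature point sets and generating a matching result for the target pixel — can be sketched as follows. This is a hedged illustration under stated assumptions: descriptors are matched by brute-force nearest neighbor with a distance threshold, and the "first matching result" is taken, for illustration only, as the target pixel shifted by the mean displacement of the matched pairs; the application does not specify these particular choices, and all names are hypothetical.

```python
import numpy as np

def match_descriptors(desc1, desc2, max_dist=0.5):
    """Brute-force matching: pair each first descriptor with its nearest
    second descriptor, keeping the pair if the distance is small enough."""
    pairs = []
    for i, d1 in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d1, axis=1)
        j = int(dists.argmin())
        if dists[j] <= max_dist:
            pairs.append((i, j))
    return pairs

def match_target_pixel(target_xy, pts1, pts2, pairs):
    """One plausible 'first matching result': shift the target pixel by
    the average displacement of the matching feature point pairs."""
    shifts = [np.asarray(pts2[j], float) - np.asarray(pts1[i], float)
              for i, j in pairs]
    return tuple(np.asarray(target_xy, float) + np.mean(shifts, axis=0))

desc1 = np.array([[0.0, 0.0], [1.0, 1.0]])
desc2 = np.array([[0.1, 0.0], [1.0, 0.9], [5.0, 5.0]])
pairs = match_descriptors(desc1, desc2)

pts1 = [(10, 10), (20, 20)]          # first-image feature coordinates
pts2 = [(13, 12), (23, 22), (0, 0)]  # second-image feature coordinates
result = match_target_pixel((15, 15), pts1, pts2, pairs)
```

In practice the `pairs` passed to `match_target_pixel` would be the revised set obtained after the dense-region filtering described above, so that stray matches do not bias the estimated displacement.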
The above description is merely a preferred embodiment of the present application and an explanation of the technical principles applied. It should be appreciated by those skilled in the art that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.