Summary of the invention
In view of the above technologies, the present invention provides a three-dimensional reconstruction method and a three-dimensional imaging system that omit the step of calibrating the structured light in advance and improve the accuracy of the three-dimensional data.
A first aspect of the present application provides a three-dimensional reconstruction method, comprising:
capturing images of the same target with two cameras to obtain a first image and a second image, respectively;
extracting feature points from the first image and the second image, respectively;
performing initialization processing on the feature points to obtain feature point descriptors;
processing each feature point extracted from the first image as follows:
computing the descriptor of the feature point extracted from the first image against the descriptors of the feature points extracted from the second image, respectively, to obtain the matching degree of each pair of feature points; and
calculating, from the coordinates of the two feature points with the highest matching degree, the disparity between the two cameras of the object point corresponding to those two feature points, thereby obtaining the depth value and the three-dimensional coordinates of that object point.
Using a dual-camera structure, the present application compares two images captured at the same instant, digitizes the feature points to obtain descriptors, and performs matching on them, thereby obtaining the three-dimensional coordinates of the target object to be reconstructed. No advance calibration of the structured light is required, matching is fast, and reliability is high.
Further, when each feature point extracted from the first image is processed, the descriptor of that feature point may be computed against the descriptors of all feature points extracted from the second image, respectively, to obtain the matching degree of each pair of feature points.
Further, when each feature point extracted from the first image is processed, the descriptor of that feature point may instead be computed only against the descriptors of the points extracted on the corresponding epipolar line in the second image, to obtain the matching degree of each pair of feature points. Here, an epipolar line is the projection, in one camera, of the line connecting a point in space with the optical center of the other camera.
Further, before the step of capturing images with the two cameras, the method further includes the step of projecting dot-pattern structured light onto the target. The present invention uses a fixed structured-light pattern that does not change over time, so the light source does not need to be synchronized with the cameras, which improves the reliability of the system.
Further, the step of initializing the feature points further comprises: converting the first image and the second image into binary "dot / non-dot" images, in which the pixel value of the pixel where a bright spot is located is set to 1 and the pixel values of the remaining pixels are set to 0. A descriptor is defined as the vector formed by the pixel values of a number of pixels surrounding the feature point, and the matching degree is obtained by computing the dot product of two feature point descriptor vectors.
This scheme projects dot-pattern structured light onto the target object, which also increases the number of effective image feature points, so both the precision and the density of the reconstructed three-dimensional data points are greatly improved. At the same time, because the application uses a dual-camera structure, the depth value of any point in space is obtained by comparing the image points of that point in the two images captured at the same instant by the left and right cameras. The invention therefore does not need to calibrate the structured light in advance, and the depth calculation does not need to be completed by comparison with a pre-stored speckle pattern, so the problem of interference between devices is effectively solved.
Further, when a bright spot falls between several pixels, the pixel value of the pixel nearest to the bright spot may be set to 1.
Further, the step of initializing the feature points may further comprise: converting the first image and the second image into "dot / non-dot" images using a dot finder; defining a descriptor as the vector formed by the pixel values of a number of pixels surrounding the feature point; setting the pixel value of a pixel to max(0, L − d), where L is the width of a pixel and d is the distance from the bright spot to the center of the pixel; and obtaining the matching degree by computing the dot product of two feature point descriptor vectors. In this way the position of a bright spot can be described more precisely, yielding more accurate matching results.
Further, a descriptor may be defined as the vector formed by the brightness values of a number of pixels surrounding the feature point, and the matching degree may be obtained by computing any one of the sum of squared differences (SSD), the sum of absolute differences (SAD), and/or the normalized correlation coefficient (NCC) of two feature point descriptors. When feature matching is performed with cost functions such as SSD, the foregoing schemes are no longer limited to dot-pattern structured light: other structured light, or even ordinary natural illumination, can be used for three-dimensional reconstruction.
Further, a descriptor may be defined as the vector formed by the relative brightness values of a number of pixels surrounding the feature point, where the relative brightness value of a surrounding pixel is 1 if its brightness is greater than that of the feature point, −1 if it is less, and 0 if it is equal; the matching degree is obtained by computing the dot product of two feature point descriptor vectors. Defining descriptors by relative brightness reduces, to a certain extent, the mismatches caused by brightness errors, while greatly simplifying the computation of the matching algorithm and improving the performance of the system.
Further, when the camera lens or the target is moving, or when the structured light changes over time, the two cameras capture images of the same target simultaneously.
Further, before the step of extracting feature points, the method further includes the step of rectifying the first image and the second image.
A second aspect of the present application provides a three-dimensional reconstruction apparatus, comprising:
a capture unit, configured to capture images of the same target with two cameras, obtaining a first image and a second image, respectively;
a feature point extraction unit, configured to extract feature points from the first image and the second image;
an initialization unit, configured to perform initialization processing on the feature points to obtain feature point descriptors; and
a post-processing unit, configured to process each feature point extracted from the first image as follows:
computing the descriptor of the feature point extracted from the first image against the descriptors of the feature points extracted from the second image, respectively, to obtain the matching degree of each pair of feature points, and
calculating, from the coordinates of the two feature points with the highest matching degree, the disparity between the two cameras of the object point corresponding to those two feature points, thereby obtaining the depth value and the three-dimensional coordinates of that object point.
Further, the post-processing unit is configured to: when processing each feature point extracted from the first image, compute its descriptor against the descriptors of all feature points extracted from the second image, respectively, to obtain the matching degree of each pair of feature points.
Further, the post-processing unit is configured to: when processing each feature point extracted from the first image, compute its descriptor against the descriptors of the points extracted on the corresponding epipolar line in the second image, respectively, to obtain the matching degree of each pair of feature points.
Further, the apparatus further includes a dot-pattern light projection unit for projecting dot-pattern structured light onto the target.
Further, the initialization unit is configured to convert the first image and the second image into binary "dot / non-dot" images, setting the pixel value of the pixel where a bright spot is located to 1 and the pixel values of the remaining pixels to 0; a descriptor is defined as the vector formed by the pixel values of a number of pixels surrounding the feature point.
In the post-processing unit, the matching degree is obtained by computing the dot product of two feature point descriptor vectors.
Further, when a bright spot falls between several pixels, the pixel value of the pixel nearest to the bright spot is set to 1.
Further, the initialization of the feature points may further comprise: converting the first image and the second image into "dot / non-dot" images; defining a descriptor as the vector formed by the pixel values of a number of pixels surrounding the feature point; setting the pixel value of a pixel to max(0, L − d), where L is the width of a pixel and d is the distance from the bright spot to the center of the pixel; and obtaining the matching degree by computing the dot product of two feature point descriptor vectors.
Further, in the initialization unit a descriptor is defined as the vector formed by the brightness values of a number of pixels surrounding the feature point; in the post-processing unit, the matching degree is obtained by computing any one of the sum of squared differences, the sum of absolute differences, and/or the normalized correlation coefficient of two feature point descriptors.
Further, in the initialization unit a descriptor is defined as the vector formed by the relative brightness values of a number of pixels surrounding the feature point, where the relative brightness value of a surrounding pixel is 1 if its brightness is greater than that of the feature point, −1 if it is less, and 0 if it is equal; in the post-processing unit, the matching degree is obtained by computing the dot product of two feature point descriptor vectors.
Further, the capture unit is configured so that, when the camera lens or the target is moving, or when the structured light changes over time, the two cameras capture images of the same target simultaneously.
Further, the aforementioned three-dimensional reconstruction apparatus further includes a rectification unit configured to rectify the first image and the second image.
A third aspect of the present application provides a three-dimensional imaging system including a light source, two cameras, and the three-dimensional reconstruction apparatus provided by the second aspect or any implementation thereof.
Further, the three-dimensional imaging system further includes:
a camera calibration module for calibrating the intrinsic and extrinsic parameters of the cameras;
a device control module for controlling the cameras to perform image acquisition and image storage, and for controlling the light source to project light; and
an output module for outputting 3D information, the 3D information including any one or more of the shape, spatial position, color distribution, and/or point cloud data of the target.
Further, the aforementioned output module further includes a display unit for displaying point cloud images or the reconstructed 2D/3D model.
A fourth aspect of the present application provides a computing device including a processor and a memory, the processor establishing a communication connection with the memory;
the processor is configured to read the program in the memory to execute the method provided by the first aspect or any implementation thereof.
A fifth aspect of the present application provides a non-volatile storage medium storing a program which, when run by a computing device, causes the computing device to execute the method provided by the first aspect or any implementation thereof.
A sixth aspect of the present application provides a three-dimensional imaging apparatus including the system provided by the third aspect or any implementation thereof.
The present invention uses a dual-camera structure and obtains the depth value of an object point by comparing its image points in the two images captured at the same instant by the left and right cameras. The application optimizes the matching mechanism: feature points are digitized into descriptors and then matched, and three-dimensional reconstruction is performed on the result. Compared with the prior art, matching is fast, overall reliability is high, no advance calibration of the structured light is needed, and the problem of interference between devices is effectively solved.
Specific embodiments
The present invention is further described below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here merely explain the invention and do not limit it. In addition, for ease of description, the drawings show only the structures or processes related to the invention, not all of them.
Furthermore, terms such as "upper", "lower", "left", and "right" used in the following description merely facilitate the explanation of the structure of the invention and should not be construed as limiting it.
According to one embodiment of the present invention, a three-dimensional imaging system based on a binocular camera is provided. The principle of binocular three-dimensional imaging is shown in Fig. 2: the distance between the left camera Ol and the right camera Or (i.e., the baseline length) is b. For any point P(x, y, z) in the scene whose disparity between the left and right cameras is d = xL − xR, it follows from similar triangles that the depth value of P is Z = fb/d, i.e., Z = fb/(xL − xR). When a binocular stereo camera is used to determine the depth information of a scene, matching feature points must be found across the two different images. The present invention uses the geometric distribution of the bright spots in the speckle pattern as the descriptor of an image feature point, and characterizes the matching degree of feature points by algorithms such as computing the dot-product value of the descriptor vectors.
Based on the above principle, the three-dimensional imaging system provided by one embodiment of the present invention is shown in Fig. 3. The three-dimensional imaging system 1 may include a camera calibration module 10, a device control module 20, a three-dimensional reconstruction module 30, an output module 40, a storage module (not shown in Fig. 3), and the like.
The camera calibration module 10 is used to calibrate the intrinsic and extrinsic parameters of the cameras, which mainly includes: acquiring 20–50 photos of a camera calibration plate at different distances and angles; running a camera calibration program to obtain the intrinsic parameters of the two cameras (such as focal length and distortion parameters) and the geometric parameters between the cameras (such as displacement and rotation); and storing the calibration parameters of the cameras in a data file.
The device control module 20 includes a camera control unit and a light source (projector) control unit. The camera control unit controls the cameras to perform image acquisition and image storage, and the light source control unit controls the projector or other light source to project speckle structured light or other light.
The three-dimensional reconstruction module 30 is the core of this system; a detailed description of it is given later. This module mainly performs the rectification of the original input images, feature point matching between the images, generation of a disparity image from the matching results, post-processing such as filtering the generated disparity image to remove bad data points, conversion of the disparity image into a depth image, and conversion of the depth image into a three-dimensional point cloud.
The output module 40 is used to output 3D information, the 3D information including any one or more of the shape, spatial position, color distribution, and/or point cloud data of the target. It may further include a coordinate output unit and an image display unit. The image display unit displays various forms of target parameters, such as the reconstructed 2D/3D model, a 2D/3D overlay display, and point cloud images, so as to reproduce the object intuitively; the coordinate output unit mainly outputs three-dimensional data that can feed other three-dimensional applications (such as 3D printing), greatly enhancing the scalability of this system.
Using a dual-camera structure, the present application compares two images captured at the same instant, digitizes the feature points to obtain descriptors, and performs matching on them, thereby obtaining the three-dimensional coordinates of the target object to be reconstructed; no advance calibration of the structured light is required, matching is fast, and reliability is high.
The three-dimensional reconstruction module 30, and the method of performing three-dimensional reconstruction with it, are described in detail below with reference to Fig. 1 and Fig. 4.
According to one embodiment of the present invention, the three-dimensional reconstruction module 30 includes a three-dimensional reconstruction apparatus 300. As shown in Fig. 4, the apparatus includes a capture unit 301, a feature point extraction unit 302, an initialization unit 303, and a post-processing unit 304.
The capture unit 301 is configured to capture images of the same target with two cameras, obtaining a first image and a second image, respectively.
The feature point extraction unit 302 is configured to extract feature points from the first image and the second image.
The initialization unit 303 is configured to perform initialization processing on the feature points to obtain feature point descriptors.
The post-processing unit 304 is configured to compute the descriptor of a feature point extracted from the first image against the descriptors of the feature points extracted from the second image, respectively, to obtain the matching degree of each pair of feature points; then to calculate, from the coordinates of the two feature points with the highest matching degree, the disparity between the two cameras of the corresponding object point; and thereby to obtain the depth value and the three-dimensional coordinates of that object point.
In addition, in some embodiments, the three-dimensional reconstruction apparatus 300 may further include a speckle structured-light projection unit for projecting dot-pattern structured light onto the target, and a rectification unit configured to rectify the first image and the second image.
According to one embodiment of the present invention, the method of performing three-dimensional reconstruction with the aforementioned three-dimensional imaging system is shown in Fig. 1.
First, in steps S101–S102, images of the same target are captured with the left and right cameras, obtaining a first image and a second image, respectively. When the camera lenses or the target in the imaging system are moving, or when the imaging system uses a structured light source that changes over time, the left and right cameras must capture the images of the target simultaneously. In some embodiments, after the first image and the second image are obtained, the original images need to be rectified to confirm that they are images of the same target.
Then, in steps S103–S104, feature points are extracted from the first image and the second image, respectively.
Then, in steps S105–S106, initialization processing is performed on the feature points to obtain feature point descriptors.
Then, in step S107, the descriptor of each feature point extracted from the first image is computed against the descriptors of the feature points extracted from the second image, respectively, to obtain the matching degree of each pair of feature points.
In this step, the descriptor of each feature point extracted from the first image may be computed against the descriptors of all feature points extracted from the second image; alternatively, it may be computed only against the descriptors of the points extracted on the corresponding epipolar line in the second image. Here, an epipolar line is the projection, in one camera, of the line connecting a point in space with the optical center of the other camera.
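As an illustrative sketch (not part of the claimed method itself), the two matching strategies of this step — exhaustive search over all second-image features versus search restricted to the corresponding epipolar line, which for a rectified pair is simply the same scanline — might look like the following; `best_match` and the feature-list layout are hypothetical names chosen for illustration.

```python
import numpy as np

def best_match(desc_left, feats_right, row=None):
    """Find the right-image feature whose descriptor has the highest dot
    product with desc_left. feats_right is a list of (x, y, descriptor)
    tuples. If row is given, only candidates on that scanline (the
    epipolar line of a rectified pair) are considered."""
    best, best_score = None, float("-inf")
    for x, y, desc in feats_right:
        if row is not None and y != row:
            continue  # epipolar constraint: skip features off the scanline
        score = float(np.dot(desc_left, desc))
        if score > best_score:
            best, best_score = (x, y), score
    return best, best_score

desc_left = np.array([1, 0, 1])
feats = [(10, 5, np.array([1, 0, 1])), (20, 7, np.array([0, 1, 0]))]
```

Restricting the search to the epipolar line both speeds matching up and removes many false candidates, which is why the rectification step beforehand matters.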
Then, in step S108, the disparity between the two cameras of the object point corresponding to the two feature points with the highest matching degree is calculated from the coordinates of those feature points.
Then, in step S109, the depth value and the three-dimensional coordinates of that object point are obtained.
Several feature-point matching schemes of the present invention are described in detail below with reference to Figs. 5–7.
According to one embodiment of the present invention, before the steps S101 and S102 of Fig. 1 in which the two cameras capture images, the method includes the step of projecting dot-pattern structured light onto the target, so that the two-dimensional images collected by the left and right cameras consist of randomly distributed bright spots, as shown in Fig. 5. A fixed structured-light pattern that does not change over time can be used here; the light source does not need to be synchronized with the cameras, which improves the reliability of the system.
Then, after the feature points are extracted in steps S103–S104, when initialization processing is performed on the feature points in steps S105–S106, a dot finder can be used to convert the speckle pictures into binary "dot / non-dot" images, in which the pixel value of the pixel where a bright spot is located is set to 1 and the pixel values of the remaining pixels are set to 0. A feature point descriptor can be defined as the vector formed by the pixel values of a number of pixels surrounding (and including) the point. For example, the feature point descriptor vector corresponding to the 5 × 5 pixel binary image shown in Fig. 6 is v = (0,1,0,0,0, 0,0,0,1,0, 0,0,1,0,0, 0,1,0,0,1, 0,0,0,0,0). Feature point descriptors can also be defined on small templates of 3 × 3, 7 × 7, 9 × 9, and so on. The matching degree of two feature points is obtained by computing the dot product of their descriptor vectors.
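The binary "dot / non-dot" descriptor and its dot-product matching score described above can be sketched in a few lines of Python/NumPy (a minimal illustration under our own function names, not the patent's):

```python
import numpy as np

def binary_descriptor(img, x, y, half=2):
    """Descriptor of the feature point at (x, y): the 0/1 pixel values of
    the (2*half+1) x (2*half+1) window around it, flattened into a vector.
    half=2 gives the 5x5 template; 3x3, 7x7, 9x9 work the same way."""
    patch = img[y - half:y + half + 1, x - half:x + half + 1]
    return patch.astype(np.int32).ravel()

def match_score(desc_a, desc_b):
    """Matching degree: the dot product of two descriptor vectors.
    The larger the value, the better the two feature points match."""
    return int(np.dot(desc_a, desc_b))

# A toy 7x7 binary "dot / non-dot" image with three bright spots
img = np.zeros((7, 7), dtype=np.uint8)
img[2, 3] = img[3, 2] = img[4, 4] = 1
d = binary_descriptor(img, 3, 3)  # 5x5 window centered on pixel (3, 3)
```

A descriptor's dot product with itself simply counts the bright spots in its window, so two windows with identical spot layouts achieve the maximum possible score.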
If the center of a bright spot lies between several pixel lattice points, the pixel value of the nearest pixel can be set to 1. To describe the position of the bright spot more accurately, after the first image and the second image are converted into "dot / non-dot" images by the dot finder, the pixel value of a pixel can instead be set to max(0, L − d), where L is the width of a pixel and d is the distance from the bright spot to the center of the pixel. Compared with the binary description, this yields more accurate matching results.
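The sub-pixel variant above can be sketched as follows (illustrative only; `splat_dot` is a hypothetical helper that writes the max(0, L − d) value into the lattice pixels nearest a spot center):

```python
import math

def soft_pixel_value(L, d):
    """Pixel value for a dot image with sub-pixel accuracy: max(0, L - d),
    where L is the pixel width and d the distance from the bright-spot
    center to the pixel center."""
    return max(0.0, L - d)

def splat_dot(cx, cy, L=1.0):
    """Assign soft values to the lattice pixels surrounding a bright spot
    whose center is at the fractional coordinates (cx, cy)."""
    vals = {}
    for px in {math.floor(cx), math.ceil(cx)}:
        for py in {math.floor(cy), math.ceil(cy)}:
            d = math.hypot(cx - px, cy - py)
            vals[(px, py)] = soft_pixel_value(L, d)
    return vals
```

A spot exactly on a lattice point keeps the full value L, while a spot between pixels spreads fractional values, so the descriptor vector encodes where the spot sits rather than only which pixel it touched.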
This scheme uses the geometric distribution of the bright spots in the speckle pattern as the descriptor of an image feature point, and characterizes the matching degree of two feature points by computing the dot-product value of their descriptor vectors; the larger the value, the better the two feature points match.
For example, for a given feature point on the first image with coordinates (xL, yL), the matching feature point found on the second image by the above method has coordinates (xR, yR). Because the images have been rectified, matched feature points lie on the same horizontal line, so yL = yR. The disparity of this point between the left and right cameras is:
d = xL − xR
According to the triangulation principle, the depth value of the point is:
Z = bf/d
where b is the baseline length, f is the focal length of the camera, and d is the disparity.
The X and Y coordinates of the point are given by:
X = Z*xL/f
Y = Z*yL/f
The three-dimensional coordinates of the point are thus obtained. The computed three-dimensional coordinates (X, Y, Z) here are expressed in the left-camera coordinate system OXYZ (the optical center of the left camera is the coordinate origin O, and the principal optical axis of the left camera is the Z axis).
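The disparity-to-3D computation above can be sketched directly (a minimal illustration; the baseline, focal length, and pixel coordinates in the usage line are made-up example values):

```python
def triangulate(xL, yL, xR, b, f):
    """Recover the 3-D point in the left-camera frame OXYZ from a rectified
    matched pair (so yL == yR). b is the baseline length, f the focal
    length; image coordinates are measured from the principal point."""
    d = xL - xR      # disparity d = xL - xR
    Z = b * f / d    # depth Z = bf/d (triangulation)
    X = Z * xL / f   # X = Z*xL/f
    Y = Z * yL / f   # Y = Z*yL/f
    return X, Y, Z

# e.g. baseline 0.1 m, focal length 500 px, match at xL=120, xR=100, yL=50
X, Y, Z = triangulate(120, 50, 100, b=0.1, f=500)
```

Note the inverse relationship between depth and disparity: halving the disparity doubles the computed depth, which is why sub-pixel matching accuracy matters most for distant points.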
This scheme uses dot-pattern structured light to increase the number of effective image feature points, so both the precision and the density of the reconstructed three-dimensional data points are greatly improved. At the same time, because a fixed structured light that does not change over time is used, the light source does not need to be synchronized with the cameras, which improves the reliability of the system. And because a dual-camera structure is used, the depth value of any point in space is obtained by comparing its image points in the two images captured at the same instant by the left and right cameras, so the structured light does not need to be calibrated in advance and the depth calculation does not need to be completed by comparison with a pre-stored speckle pattern, which also effectively solves the problem of interference between devices.
According to another embodiment of the present invention, when three-dimensional reconstruction is performed according to the method shown in Fig. 1 and initialization processing is performed on the feature points in steps S105–S106, a feature point descriptor can be defined as the vector formed by the brightness values of a number of pixels surrounding the feature point. The descriptor can likewise be defined on small templates of 3 × 3, 5 × 5, 7 × 7, 9 × 9, and so on. In step S107, when feature points are matched, the matching degree can be obtained by computing the sum of squared differences (SSD), the sum of absolute differences (SAD), and/or the normalized correlation coefficient (NCC) of two feature point descriptors.
When matching is performed by computing the sum of squared differences (SSD) between descriptors, for a left-image pixel p(x, y) with disparity d, the SSD is calculated as:
SSD(p, d) = Σ(i = −r..r) Σ(j = −r..r) [IL(x+i, y+j) − IR(x−d+i, y+j)]²
where IL and IR are the brightness values of the left and right pixels and r is the radius of the neighborhood. The smaller the SSD between the descriptors, the better the feature points match.
When matching is performed by computing the sum of absolute differences (SAD) between descriptors, for a left-image pixel p(x, y) with disparity d, the SAD is calculated as:
SAD(p, d) = Σ(i = −r..r) Σ(j = −r..r) |IL(x+i, y+j) − IR(x−d+i, y+j)|
The smaller the SAD between the descriptors, the better the feature points match.
With the feature point descriptor defined in this embodiment, matching can also be realized by computing the normalized correlation coefficient (NCC) between descriptors. For a left-image pixel p(x, y) with disparity d, the NCC is calculated as:
NCC(p, d) = Σ[(IL(x+i, y+j) − ĪL)(IR(x−d+i, y+j) − ĪR)] / √(Σ(IL(x+i, y+j) − ĪL)² · Σ(IR(x−d+i, y+j) − ĪR)²)
where ĪL and ĪR are the mean brightness values of the descriptors. The closer the NCC between the descriptors is to 1, the better the feature points match; an NCC below 0.5 indicates that the two descriptors are weakly correlated or uncorrelated.
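The three cost functions can be sketched directly on brightness patches (illustrative Python; `patch_l` and `patch_r` are invented stand-ins for the descriptor windows cut from the left and right images):

```python
import numpy as np

def ssd(pl, pr):
    """Sum of squared differences; smaller means a better match."""
    return float(np.sum((pl - pr) ** 2))

def sad(pl, pr):
    """Sum of absolute differences; smaller means a better match."""
    return float(np.sum(np.abs(pl - pr)))

def ncc(pl, pr):
    """Normalized correlation coefficient; closer to 1 means a better match."""
    a, b = pl - pl.mean(), pr - pr.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

patch_l = np.array([[10., 20.], [30., 40.]])
patch_r = patch_l + 5.0  # same pattern, uniformly brighter
```

As the formulas imply, NCC subtracts each patch's mean brightness before comparing, so a uniform brightness offset between the two cameras does not change its value, whereas it inflates both SSD and SAD.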
After the feature points with the highest matching degree are obtained by the above methods, the way depth and three-dimensional point coordinates are calculated from the disparity is the same as in the previous embodiment and is not repeated here. Feature point matching algorithms defined on brightness values include, but are not limited to, SSD, SAD, and NCC; in other embodiments, an appropriate algorithm can reasonably be selected according to the scene to perform three-dimensional reconstruction.
When feature matching is performed with cost functions such as the SSD above, this scheme, unlike the previous embodiment, is not confined to dot-pattern structured light: other structured light can be used, and three-dimensional reconstruction can be achieved even with ordinary natural illumination and no structured light at all. In addition, the number of three-dimensional points this scheme can achieve is theoretically equal to the number of pixels of the two-dimensional camera.
According to another embodiment of the present invention, when three-dimensional reconstruction is performed according to the method shown in Fig. 1 and initialization processing is performed on the feature points in steps S105–S106, a feature point descriptor can be defined as the vector formed by the relative brightness values of a number of pixels surrounding the feature point. In this scheme, the descriptor is defined by the relative brightness of the neighboring pixels around the point: if the brightness of a pixel around the feature point is greater than that of the feature point, the relative brightness value of that pixel is 1; if it is less, the relative brightness value is −1; and if it is equal, the relative brightness value is 0. As before, the descriptor can be defined on small templates of 3 × 3, 5 × 5, 7 × 7, 9 × 9, and so on. For example, the relative-brightness feature point descriptor of the 3 × 3 pixels shown in Fig. 7 can be expressed as v = (1, −1, −1, 1, 0, 1, −1, 1, −1), the relative brightness values of the pixels being arranged in the vector v from left to right and top to bottom. In step S107, when feature points are matched, the matching degree is obtained by computing the dot product of the two descriptor vectors; the larger the dot-product value, the better the two feature points match. After the feature points with the highest matching degree are obtained, the way depth and three-dimensional point coordinates are calculated from the disparity is the same as in the previous embodiments and is not repeated here.
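A relative-brightness descriptor of this kind (similar in spirit to a census transform) can be sketched as follows; the 3 × 3 brightness values below are invented so that the result has the same form as the Fig. 7 example, not taken from the figure itself:

```python
import numpy as np

def relative_brightness_descriptor(img, x, y, half=1):
    """Compare each pixel in the (2*half+1)^2 window with the center
    feature point: +1 if brighter, -1 if darker, 0 if equal. Values are
    read left-to-right, top-to-bottom; half=1 gives the 3x3 template."""
    center = int(img[y, x])
    patch = img[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
    return np.sign(patch - center).ravel()

# Invented 3x3 brightness values around a center pixel of brightness 5
img = np.array([[9, 1, 1],
                [9, 5, 9],
                [1, 9, 1]])
v = relative_brightness_descriptor(img, 1, 1)
```

Because the vector holds only −1, 0, and 1, the dot-product matching degree reduces to counting sign agreements minus disagreements, which is cheap to evaluate and insensitive to any brightness change that preserves the ordering of pixel intensities.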
Defining descriptors by relative brightness for three-dimensional reconstruction reduces, to a certain extent, the mismatches caused by brightness errors, while greatly simplifying the computation of the matching algorithm and improving the performance of the system. Likewise, this embodiment is not confined to dot-pattern structured light: other structured light can be used, and three-dimensional reconstruction can be achieved even with ordinary natural illumination and no structured light at all.
The three-dimensional reconstruction process in the embodiments described above can be completed on a CPU or on a GPU. Comparing the two, the operations must be completed point by point on a CPU, which is slow, whereas on a GPU the operations can be completed for all points at once, so the speed can be increased substantially. The process can also be implemented with an ASIC, an FPGA, or the like.
According to another embodiment of the present invention, a computing device is also provided, including a processor and a memory between which a communication connection is established; the processor reads the program in the memory to execute the three-dimensional reconstruction method of Fig. 1.
According to another embodiment of the present invention, a non-volatile storage medium is also provided, in which a program is stored; when the program is run by a computing device, the computing device executes the three-dimensional reconstruction method of Fig. 1.
According to another embodiment of the present invention, a three-dimensional imaging apparatus is also provided, including the aforementioned three-dimensional imaging system. The apparatus may include two or more cameras and a structured-light device, and the structured-light device may be, but is not limited to, an infrared laser emitter.
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the use of the technical solution of the present invention is not limited to the various applications mentioned in the embodiments of this patent; various structures and modifications can easily be implemented with reference to the technical solution of the present invention to achieve the various beneficial effects mentioned herein. Within the knowledge of a person skilled in the art, all variations made without departing from the purpose of the present invention shall fall within the scope of this patent.