CN109087382A - A kind of three-dimensional reconstruction method and 3-D imaging system - Google Patents

A kind of three-dimensional reconstruction method and 3-D imaging system

Info

Publication number
CN109087382A
Authority
CN
China
Prior art keywords
image
point
description
characteristic point
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810862898.1A
Other languages
Chinese (zh)
Inventor
张宝
温志庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Farui Taike Intelligent Technology Co Ltd
Original Assignee
Ningbo Farui Taike Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Farui Taike Intelligent Technology Co Ltd
Priority to CN201810862898.1A
Publication of CN109087382A
Legal status: Pending (current)

Abstract

The present invention relates to a three-dimensional reconstruction method and a three-dimensional imaging system. The reconstruction method comprises: shooting images of the same target with two cameras to obtain a first image and a second image; extracting feature points from the first image and the second image respectively; performing initialization processing on the feature points to obtain feature point descriptors; and then processing each feature point extracted from the first image as follows: operating on its descriptor with the descriptor of each feature point extracted from the second image to obtain the matching degree of the two feature points; calculating, from the coordinates of the two feature points with the highest matching degree, the parallax between the two cameras of the object point corresponding to those feature points; and thereby obtaining the depth value and three-dimensional coordinates of that object point. The present invention requires no prior calibration of structured light and provides data of higher precision.

Description

A kind of three-dimensional reconstruction method and 3-D imaging system
Technical field
The present invention relates to a three-dimensional reconstruction method and a three-dimensional imaging system.
Background technique
Three-dimensional cameras follow two main technology paths: one is the time-of-flight camera (Time of Flight), and the other is the stereoscopic camera (Stereo Camera).
A time-of-flight camera is a range-imaging camera that computes distance from the known speed of light: by measuring the flight time of an optical signal, the distance from any object point in the scene to the camera can be computed.
A stereoscopic camera is a twin-lens three-dimensional camera whose two lenses simulate the binocular vision of the human eye. Such cameras record the images of the left and right lenses and, through a special display, let the user perceive a 3D impression (similar to a 3D film), but they do not actually compute depth values, and therefore cannot serve applications that need three-dimensional data, such as 3D printing.
When a stereoscopic camera is used to determine the depth value of every point in a scene, corresponding points must be found on two different images. How to solve feature point matching (including the speed of the matching operation) is the core problem of this technology path.
Currently, one scheme is to find corresponding feature points directly in color images to calculate depth information. Its advantages are that no additional light source is needed, it works under natural illumination, and it can measure at long range; its disadvantage is low accuracy, and the method fails outright when the color and brightness of the image are uniform.
Another scheme uses structured light. Its advantages are high accuracy and the ability to work with only one lens; its disadvantages are that the structured-light projector must be calibrated in advance, and that there are interference problems between instruments.
Summary of the invention
In view of the above problems, the present invention provides a three-dimensional reconstruction method and a three-dimensional imaging system that omit the step of calibrating structured light in advance and improve the accuracy of the three-dimensional data.
A first aspect of the application provides a three-dimensional reconstruction method, comprising:
shooting images of the same target with two cameras, obtaining a first image and a second image respectively;
extracting feature points from the first image and the second image respectively;
performing initialization processing on the feature points to obtain feature point descriptors; and
processing each feature point extracted from the first image as follows:
operating on its descriptor with the descriptor of each feature point extracted from the second image, obtaining the matching degree of the two feature points; and
calculating, from the coordinates of the two feature points with the highest matching degree, the parallax between the two cameras of the corresponding object point, thereby obtaining the depth value and three-dimensional coordinates of that object point.
Using a dual-camera structure, the application compares two images acquired at the same instant: the feature points are digitized into descriptors, the descriptors are matched, and the three-dimensional coordinates of the target object to be reconstructed are obtained. No prior calibration of structured light is required, matching is fast, and reliability is high.
Further, when processing each feature point extracted from the first image, its descriptor may be operated on with the descriptors of all feature points extracted from the second image, obtaining the matching degree of each pair of feature points.
Further, when processing each feature point extracted from the first image, its descriptor may instead be operated on only with the descriptors of the points on the corresponding epipolar line of the second image. Here, the epipolar line is the projection, in one camera, of the line connecting a point in space with the optical center of the other camera.
Further, before the step of shooting the images with the two cameras, the method includes a step of projecting pattern light onto the target. The present invention uses a fixed structured-light pattern that does not change over time, so the light source need not be synchronized with the cameras, which improves system reliability.
Further, the step of initializing the feature points comprises: converting the first image and the second image into "dot / non-dot" binary images, in which the pixel value of a pixel at a bright spot is set to 1 and the pixel values of all other pixels are set to 0. The descriptor is defined as the vector formed by the pixel values of the pixels surrounding the feature point, and the matching degree is obtained by computing the vector dot product of the two feature point descriptors.
This scheme projects pattern light onto the target object, which also increases the number of effective image feature points, so both the precision and the density of the reconstructed three-dimensional data points improve greatly. Moreover, because the application uses a dual-camera structure, the depth value of any point in space is obtained by comparing its image points in the two images acquired at the same instant by the left and right cameras; the structured light therefore needs no prior calibration, and the depth computation does not rely on comparison with a pre-stored speckle pattern, which effectively eliminates interference between devices.
Further, when a bright spot falls between several pixels, the pixel value of the pixel closest to the spot may be set to 1.
Further, the step of initializing the feature points may comprise: converting the first image and the second image into "dot / non-dot" images with a dot finder; defining the descriptor as the vector formed by the pixel values of the pixels surrounding the feature point; and setting the pixel value of each pixel to max(0, L - d), where L is the pixel width and d is the distance from the bright spot to the pixel center. The matching degree is obtained by computing the vector dot product of the two feature point descriptors. This describes the position of a bright spot more precisely and therefore yields a more accurate matching result.
Further, the descriptor may be defined as the vector formed by the brightness values of the pixels surrounding the feature point, and the matching degree may be obtained through any one of the sum of squared differences (SSD), the sum of absolute differences (SAD) and/or the normalized correlation coefficient (NCC) of the two feature point descriptors. Unlike the preceding schemes, feature matching with cost functions such as SSD is not limited to pattern light: three-dimensional reconstruction can be carried out with other structured light or even under ordinary natural illumination.
Further, the descriptor may be defined as the vector formed by the relative brightness values of the pixels surrounding the feature point, where the relative brightness value of a surrounding pixel is 1 if it is brighter than the feature point, -1 if it is darker, and 0 if its brightness equals that of the feature point; the matching degree is obtained by computing the vector dot product of the two feature point descriptors. Defining descriptors by relative brightness reduces, to a certain extent, the mismatches caused by brightness errors, while greatly simplifying the matching computation and improving system performance.
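As an illustration of the relative-brightness descriptor just described, the following Python sketch builds the ternary vector and scores a match by dot product. The helper names and window radius are assumptions for illustration, not from the patent.

```python
import numpy as np

def relative_brightness_descriptor(gray, x, y, r=2):
    """Ternary descriptor around feature point (x, y): each surrounding pixel
    maps to +1 if brighter than the feature point, -1 if darker, 0 if equal."""
    center = int(gray[y, x])
    window = gray[y - r:y + r + 1, x - r:x + r + 1].astype(int)
    return np.sign(window - center).ravel()

def matching_degree(desc_a, desc_b):
    # Dot product of the two ternary descriptors; larger means a better match.
    return int(np.dot(desc_a, desc_b))
```

Because every entry is -1, 0 or 1, the dot product needs no multiplications in a fixed-point implementation, which is the simplification the text alludes to.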
Further, when the camera lenses or the target are moving, or when the structured light changes over time, the two cameras shoot the images of the same target simultaneously.
Further, before the step of extracting feature points, the method includes a step of rectifying the first image and the second image.
A second aspect of the application provides a three-dimensional reconstruction device, comprising:
a shooting unit, configured to shoot images of the same target with two cameras, obtaining a first image and a second image respectively;
a feature point extraction unit, configured to extract feature points from the first image and the second image;
an initialization processing unit, configured to perform initialization processing on the feature points to obtain feature point descriptors; and
a post-processing unit, configured to process each feature point extracted from the first image as follows:
operating on its descriptor with each feature point descriptor extracted from the second image, obtaining the matching degree of the two feature points; and
calculating, from the coordinates of the two feature points with the highest matching degree, the parallax between the two cameras of the corresponding object point, thereby obtaining the depth value and three-dimensional coordinates of that object point.
Further, the post-processing unit is configured to, when processing each feature point extracted from the first image, operate on its descriptor with the descriptors of all feature points extracted from the second image, obtaining the matching degree of each pair of feature points.
Further, the post-processing unit is configured to, when processing each feature point extracted from the first image, operate on its descriptor only with the descriptors of the points on the corresponding epipolar line of the second image, obtaining the matching degrees.
Further, the device includes a pattern light projection unit for projecting pattern light onto the target.
Further, the initialization processing unit is configured to convert the first image and the second image into "dot / non-dot" binary images, setting the pixel value of the pixel at a bright spot to 1 and the pixel values of all other pixels to 0; the descriptor is defined as the vector formed by the pixel values of the pixels surrounding the feature point.
In the post-processing unit, the matching degree is obtained by computing the vector dot product of the two feature point descriptors.
Further, when a bright spot falls between several pixels, the pixel value of the pixel closest to the spot is set to 1.
Further, the step of initializing the feature points comprises: converting the first image and the second image into "dot / non-dot" images; defining the descriptor as the vector formed by the pixel values of the pixels surrounding the feature point; setting the pixel value of each pixel to max(0, L - d), where L is the pixel width and d is the distance from the bright spot to the pixel center; and obtaining the matching degree by computing the vector dot product of the two feature point descriptors.
Further, in the initialization processing unit the descriptor is defined as the vector formed by the brightness values of the pixels surrounding the feature point; in the post-processing unit the matching degree is obtained through any one of the sum of squared differences, the sum of absolute differences and/or the normalized correlation coefficient of the two feature point descriptors.
Further, in the initialization processing unit the descriptor is defined as the vector formed by the relative brightness values of the pixels surrounding the feature point, where the relative brightness value of a surrounding pixel is 1 if it is brighter than the feature point, -1 if it is darker, and 0 if its brightness equals that of the feature point; in the post-processing unit the matching degree is obtained by computing the vector dot product of the two feature point descriptors.
Further, the shooting unit is configured so that, when the camera lenses or the target are moving, or when the structured light changes over time, the two cameras shoot the images of the same target simultaneously.
Further, the three-dimensional reconstruction device further includes a rectification unit configured to rectify the first image and the second image.
A third aspect of the application provides a three-dimensional imaging system including a light source, two cameras, and the three-dimensional reconstruction device provided by the second aspect or any implementation thereof.
Further, the three-dimensional imaging system further includes:
a camera calibration module, for calibrating the intrinsic and extrinsic parameters of the cameras;
a device control module, for controlling the cameras to acquire and store images, and for controlling the light projected by the light source; and
an output module, for outputting 3D information, the 3D information including any one or more of the shape, spatial position, color distribution and/or point cloud data of the target.
Further, the output module includes a display unit for displaying the point cloud image or the reconstructed 2D/3D model.
A fourth aspect of the application provides a computing device comprising a processor and a memory, with a communication connection established between the processor and the memory; the processor reads the program in the memory to execute the method provided by the first aspect or any implementation thereof.
A fifth aspect of the application provides a non-volatile storage medium storing a program which, when run by a computing device, causes the computing device to execute the method provided by the first aspect or any implementation thereof.
A sixth aspect of the application provides a three-dimensional imaging apparatus including the system provided by the third aspect or any implementation thereof.
The present invention uses a dual-camera structure and obtains the depth value of an object point by comparing its image points in the two images acquired at the same instant by the left and right cameras. The application optimizes the matching mechanism: feature points are digitized into descriptors, the descriptors are matched, and three-dimensional reconstruction follows. Compared with the prior art, matching is fast and overall reliability is high, no prior calibration of the structured light is required, and interference between devices is effectively eliminated.
Detailed description of the invention
Fig. 1 is the flow chart of the three-dimensional reconstruction method of embodiment according to the present invention.
Fig. 2 is binocular three-dimensional image-forming principle schematic diagram.
Fig. 3 is the structural schematic diagram of the 3-D imaging system of embodiment according to the present invention.
Fig. 4 is the structural schematic diagram of the three-dimensionalreconstruction device of embodiment according to the present invention.
Fig. 5 is the pattern light pattern of embodiment according to the present invention.
Fig. 6 is a schematic diagram of the feature point descriptor corresponding to a binary image, according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of a feature point descriptor defined by relative brightness values, according to an embodiment of the present invention.
Specific embodiment
The present invention is further described below with reference to specific embodiments and the accompanying drawings. It should be understood that the specific embodiments described here merely explain the invention and do not limit it. In addition, for ease of description, the drawings show only the parts related to the present invention rather than all structures or processes.
Likewise, terms such as "upper", "lower", "left" and "right" used in the following description merely facilitate the explanation of the structure of the invention and should not be construed as limiting it.
According to an embodiment of the present invention, a three-dimensional imaging system based on a binocular camera is provided. The binocular three-dimensional imaging principle is shown in Fig. 2: the distance between the left camera O_L and the right camera O_R (i.e. the baseline length) is b. For any point P(x, y, z) in the scene with parallax d = x_L - x_R between the left and right cameras, by similar triangles the depth value of P is Z = fb/d, i.e. Z = fb/(x_L - x_R). When a binocular stereo camera is used to determine the depth information of a scene, matched feature points must be found on the two different images. The present invention uses the geometric distribution of bright spots in the speckle pattern as the descriptor of an image feature point and characterizes the matching degree of feature points by algorithms such as computing the dot product of the descriptor vectors.
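The relation Z = fb/d above can be sketched in a few lines of Python; the function and argument names are illustrative, not from the patent.

```python
def depth_from_disparity(x_left, x_right, f, b):
    """Depth of a scene point from the column coordinates of its image points
    in a rectified stereo pair: parallax d = x_L - x_R, depth Z = f * b / d."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("a point in front of the cameras must have positive parallax")
    return f * b / d
```

For example, with a focal length of 500 pixels, a baseline of 0.1 m, and image columns x_L = 110, x_R = 100, the parallax is 10 pixels and the depth is 5.0 m.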
Based on the above principles, the three-dimensional imaging system provided by an embodiment of the present invention is shown in Fig. 3. The three-dimensional imaging system 1 may include a camera calibration module 10, a device control module 20, a three-dimensional reconstruction module 30, an output module 40, a storage module (not shown in Fig. 3), and so on.
The camera calibration module 10 calibrates the intrinsic and extrinsic parameters of the cameras. It mainly: acquires 20-50 photos of a camera calibration plate at different distances and angles; runs the camera calibration program to obtain the intrinsic parameters of the two cameras (such as focal length and distortion parameters) and the geometric parameters between the cameras (such as displacement and rotation); and stores the calibration parameters of the cameras in a data file.
The device control module 20 includes a camera control unit and a light source (projector) control unit. The camera control unit controls the cameras to acquire and store images; the light source control unit controls the projector or another light source to project speckle structured light or other light.
The three-dimensional reconstruction module 30 is the core of the system and is described in detail later. It mainly performs rectification of the original input images, feature point matching, generation of a disparity image from the matching result, post-processing such as filtering the disparity image to remove bad data points, conversion of the disparity image into a depth image, and conversion of the depth image into a three-dimensional point cloud.
The output module 40 outputs 3D information including any one or more of the shape, spatial position, color distribution and/or point cloud data of the target. It may further include a coordinate output unit and an image display unit. The image display unit displays various forms of the target's parameters, such as the reconstructed 2D/3D model, 2D/3D overlay displays and point cloud images, to reproduce the object intuitively; the coordinate output unit mainly outputs three-dimensional data to supply other three-dimensional applications (such as 3D printing), greatly enhancing the scalability of the system.
Using a dual-camera structure, the application compares two images acquired at the same instant: the feature points are digitized into descriptors, the descriptors are matched, and the three-dimensional coordinates of the target object to be reconstructed are obtained. No prior calibration of structured light is required, matching is fast, and reliability is high.
The three-dimensional reconstruction module 30, and the reconstruction method that uses it, are described in detail below with reference to Fig. 1 and Fig. 4.
According to an embodiment of the present invention, the three-dimensional reconstruction module 30 includes a three-dimensional reconstruction device 300. As shown in Fig. 4, the device includes a shooting unit 301, a feature point extraction unit 302, an initialization processing unit 303 and a post-processing unit 304.
The shooting unit 301 is configured to shoot images of the same target with two cameras, obtaining a first image and a second image respectively.
The feature point extraction unit 302 is configured to extract feature points from the first image and the second image.
The initialization processing unit 303 is configured to perform initialization processing on the feature points to obtain feature point descriptors.
The post-processing unit 304 is configured to operate on the descriptor of each feature point extracted from the first image with the feature point descriptors extracted from the second image to obtain the matching degrees, then calculate, from the coordinates of the two feature points with the highest matching degree, the parallax between the two cameras of the corresponding object point, and thereby obtain the depth value and three-dimensional coordinates of that object point.
In addition, in some embodiments the three-dimensional reconstruction device 300 may also include a speckle structured-light projection unit that projects pattern light onto the target, and a rectification unit configured to rectify the first image and the second image.
According to an embodiment of the present invention, the method of three-dimensional reconstruction using the aforementioned three-dimensional imaging system is shown in Fig. 1.
First, in steps S101-S102, the left and right cameras each shoot an image of the same target, obtaining a first image and a second image respectively. When the lenses of the two cameras or the target are moving, or when the imaging system uses a structured light source that changes over time, the left and right cameras must shoot the images of the target simultaneously. In some embodiments, after the first image and the second image are obtained, the original images need to be rectified.
Then, in steps S103-S104, feature points are extracted from the first image and the second image respectively.
Then, in steps S105-S106, initialization processing is performed on the feature points to obtain feature point descriptors.
Then, in step S107, the descriptor of each feature point extracted from the first image is operated on with the descriptors of the feature points extracted from the second image, obtaining the matching degrees of the feature point pairs.
In this step, the descriptor of each feature point extracted from the first image may be operated on with the descriptors of all feature points extracted from the second image; alternatively, it may be operated on only with the descriptors of the points on the corresponding epipolar line of the second image. Here, the epipolar line is the projection, in one camera, of the line connecting a point in space with the optical center of the other camera.
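For a rectified pair, the epipolar line of a left-image point is simply the same row of the right image, so the search in step S107 can be restricted to one scanline. The sketch below illustrates this with a dot-product score; the helper names and window radius are assumptions for illustration.

```python
import numpy as np

def descriptor(img, x, y, r=2):
    """Vector of pixel values in the (2r+1) x (2r+1) window around (x, y)."""
    return img[y - r:y + r + 1, x - r:x + r + 1].astype(int).ravel()

def best_match_on_scanline(left_img, x_left, y, right_img, r=2):
    """Search only along row y of the right image (the epipolar line of a
    rectified pair) and return the column with the highest dot product."""
    desc_l = descriptor(left_img, x_left, y, r)
    best_x, best_score = None, -1
    for x in range(r, right_img.shape[1] - r):
        score = int(np.dot(desc_l, descriptor(right_img, x, y, r)))
        if score > best_score:
            best_x, best_score = x, score
    return best_x, best_score
```

Restricting the search to one row reduces the candidate set from the whole image to one scanline, which is the main speed benefit the text claims for the epipolar variant.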
Then, in step S108, the parallax between the two cameras of the object point corresponding to the two feature points with the highest matching degree is calculated from their coordinates.
Then, in step S109, the depth value and three-dimensional coordinates of that object point are obtained.
Several feature point matching schemes of the present invention are described in detail below with reference to Figs. 5-7.
According to an embodiment of the present invention, before steps S101 and S102 of Fig. 1, in which the two cameras each shoot an image, pattern light is projected onto the target, so the two-dimensional images collected by the left and right cameras consist of randomly distributed bright spots, as shown in Fig. 5. A fixed structured-light pattern that does not change over time can be used here, so the light source need not be synchronized with the cameras, which improves system reliability.
Then, after the feature points are extracted in steps S103-S104, when they are initialized in steps S105-S106, a dot finder can convert each speckle picture into a "dot / non-dot" binary image in which the pixel value of a pixel at a bright spot is set to 1 and the pixel values of all other pixels are set to 0. The feature point descriptor can be defined as the vector formed by the pixel values of the pixels surrounding (and including) the point. For example, the feature point descriptor vector corresponding to the 5 x 5 pixel binary image shown in Fig. 6 is v = (0 1 0 0 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 0). The descriptor can equally be defined on small templates of 3 x 3, 7 x 7, 9 x 9 and so on. The feature point matching degree is obtained by computing the vector dot product of the two feature point descriptors.
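A minimal sketch of this "dot / non-dot" descriptor and dot-product matching, in Python; the binarization threshold and helper names are assumptions, not from the patent.

```python
import numpy as np

def binarize_speckle(gray, threshold=128):
    """'Dot / non-dot' binary image: 1 at bright-spot pixels, 0 elsewhere."""
    return (gray >= threshold).astype(np.uint8)

def descriptor(binary_img, x, y, r=2):
    """Descriptor of the feature point at (x, y): the (2r+1) x (2r+1) window
    of pixel values around it (5 x 5 for r=2), flattened into a vector."""
    return binary_img[y - r:y + r + 1, x - r:x + r + 1].ravel()

def matching_degree(desc_a, desc_b):
    # Dot product of the two descriptor vectors; larger means a better match.
    return int(np.dot(desc_a.astype(int), desc_b.astype(int)))
```

For binary descriptors the dot product simply counts bright spots that coincide in both windows, which is why a larger value indicates a better match.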
If the center of a bright spot falls between several pixel lattice points, the pixel value of the nearest pixel can be set to 1. To describe the position of the bright spot more accurately, after the dot finder converts the first image and the second image into "dot / non-dot" images, the pixel value of each pixel can instead be set to max(0, L - d), where L is the pixel width and d is the distance from the bright spot to the pixel center. Compared with the binary description, the matching result obtained this way is more accurate.
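The max(0, L - d) refinement can be sketched as below. It assumes dot centres are known to sub-pixel precision and that pixel centres sit at integer coordinates plus 0.5; neither convention is fixed by the patent.

```python
import math

def soft_dot_image(shape, dot_centres, L=1.0):
    """Sub-pixel 'dot / non-dot' image: each pixel holds max(0, L - d), where
    L is the pixel width and d is the distance from the pixel's centre to the
    nearest bright-spot centre."""
    h, w = shape
    img = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = min(math.hypot(x + 0.5 - cx, y + 0.5 - cy)
                    for cx, cy in dot_centres)
            img[y][x] = max(0.0, L - d)
    return img
```

A dot centred exactly on a pixel yields 1.0 there and 0.0 one pixel away; a dot halfway between two pixels yields 0.5 in each, so the descriptor dot product varies smoothly with the true spot position instead of snapping to the nearest pixel.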
This scheme uses the geometric distribution of bright spots in the speckle pattern as the descriptor of an image feature point and characterizes the matching degree of two feature points by computing the dot product of their descriptor vectors: the larger the value, the better the two feature points match.
For example, given a feature point on the first image with coordinates (x_L, y_L), the matching feature point found on the second image by the method above has coordinates (x_R, y_R). Because the images have been rectified, the matched feature points lie on the same horizontal line, so y_L = y_R. The parallax of this point between the left and right cameras is:
d = x_L - x_R
According to the triangulation principle, the depth value of the point is:
Z = bf/d
where b is the baseline length, f is the focal length of the cameras, and d is the parallax.
The X and Y coordinates of the point are given by:
X = Z * x_L / f
Y = Z * y_L / f
The three-dimensional coordinates of the point are thus obtained; the computed coordinates (X, Y, Z) are in the left camera's coordinate system OXYZ, where the optical center of the left camera is the origin O and the principal optical axis of the left camera is the Z axis.
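Putting the three formulas together as a sketch; argument names are assumptions, and the image coordinates x_L, y_L are assumed to be measured from the principal point, as the formulas themselves require.

```python
def triangulate(x_left, y_left, x_right, f, b):
    """(X, Y, Z) of the object point in the left-camera frame OXYZ, from a
    matched pair on the same rectified scanline:
    d = x_L - x_R, Z = b*f/d, X = Z*x_L/f, Y = Z*y_L/f."""
    d = x_left - x_right
    Z = b * f / d
    return Z * x_left / f, Z * y_left / f, Z
```

With f = 500 pixels and b = 0.1 m, the match (x_L, y_L) = (100, 50), x_R = 90 gives d = 10 and the point (1.0, 0.5, 5.0) in metres.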
This scheme uses pattern light to increase the number of effective image feature points, so the precision and density of the reconstructed three-dimensional data points can be greatly improved. Moreover, because the structured light is fixed and does not vary over time, the light source does not need to be synchronized with the cameras, which improves the reliability of the system. And since a dual-camera structure is used, the depth of any point in space is obtained by comparing its image points in the two images acquired simultaneously by the left and right cameras; the structured light therefore does not need to be calibrated in advance, and the depth calculation does not need to be completed by comparison with a pre-stored speckle pattern, which also effectively solves the problem of interference between devices.
According to another embodiment of the invention, when three-dimensional reconstruction is performed by the method shown in FIG. 1, the feature point descriptor produced by the initialization of steps S105-S106 can be defined as the vector formed by the brightness values of the multiple pixels around the feature point. As before, the descriptor can be defined on a 3 × 3, 5 × 5, 7 × 7, 9 × 9 or other small template. In the feature point matching of step S107, the matching degree can then be obtained by computing the sum of squared differences (SSD), the sum of absolute differences (SAD) and/or the normalized cross-correlation coefficient (NCC) of two descriptors.
When matching is performed by computing the sum of squared differences (SSD) between descriptors, for a left pixel p(x, y) with disparity d the SSD is:

SSD(p, d) = Σ_{i=−r..r} Σ_{j=−r..r} [I_L(x+i, y+j) − I_R(x−d+i, y+j)]²

where I_L and I_R are the brightness values of the left and right pixels and r is the radius of the neighbourhood. The smaller the SSD between the descriptors, the better the feature points match.
When matching is performed by computing the sum of absolute differences (SAD) between descriptors, for a left pixel p(x, y) with disparity d the SAD is:

SAD(p, d) = Σ_{i=−r..r} Σ_{j=−r..r} |I_L(x+i, y+j) − I_R(x−d+i, y+j)|

The smaller the SAD between the descriptors, the better the feature points match.
With the feature point descriptor defined in this embodiment, matching can also be realized by computing the normalized cross-correlation coefficient (NCC) between descriptors. For a left pixel p(x, y) with disparity d:

NCC(p, d) = Σ (I_L − Ī_L)(I_R − Ī_R) / √( Σ (I_L − Ī_L)² · Σ (I_R − Ī_R)² )

where Ī_L and Ī_R are the mean brightness values of the respective descriptor windows and the sums run over the same neighbourhood as above. The closer the NCC between the descriptors is to 1, the better the feature points match; an NCC below 0.5 indicates that the correlation between the two descriptors is weak or absent.
After the best-matching feature points have been obtained by the above methods, the depth and the three-dimensional point coordinates are computed from the disparity exactly as in the previous embodiment, and the details are not repeated here. Feature point matching algorithms based on brightness values include but are not limited to SSD, SAD and NCC; in other embodiments, an appropriate algorithm can reasonably be selected for the scene when performing three-dimensional reconstruction.
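A minimal sketch of the three cost functions on rectified brightness images, assuming I_L and I_R are 2-D arrays; the helper names are hypothetical, not from the patent:

```python
import numpy as np

def _patch(img, x, y, r):
    """(2r+1) x (2r+1) brightness window centred at (x, y)."""
    return img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)

def ssd(IL, IR, x, y, d, r=1):
    a, b = _patch(IL, x, y, r), _patch(IR, x - d, y, r)
    return float(np.sum((a - b) ** 2))          # smaller -> better match

def sad(IL, IR, x, y, d, r=1):
    a, b = _patch(IL, x, y, r), _patch(IR, x - d, y, r)
    return float(np.sum(np.abs(a - b)))         # smaller -> better match

def ncc(IL, IR, x, y, d, r=1):
    a, b = _patch(IL, x, y, r), _patch(IR, x - d, y, r)
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    return float(np.sum(a * b) / denom) if denom else 0.0  # closer to 1 -> better

# Synthetic rectified pair: every point in IR sits d = 2 pixels to the
# left of its match in IL, i.e. IR(x, y) = IL(x + 2, y).
rng = np.random.default_rng(0)
IL = rng.integers(0, 256, size=(9, 9)).astype(np.float64)
IR = np.roll(IL, -2, axis=1)
# At the true disparity the costs vanish and NCC is (numerically) 1.
print(ssd(IL, IR, 4, 4, 2), sad(IL, IR, 4, 4, 2), ncc(IL, IR, 4, 4, 2))
```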
When feature matching is performed with cost functions such as the SSD above, the structured light is no longer confined to pattern light as in the previous embodiment: other forms of structured light can be used, and three-dimensional reconstruction can even be achieved under ordinary natural illumination without any structured light. In addition, the number of three-dimensional points this scheme can reach is theoretically equal to the pixel count of the two-dimensional cameras.
According to another embodiment of the invention, when three-dimensional reconstruction is performed by the method shown in FIG. 1, the feature point descriptor produced by the initialization of steps S105-S106 can be defined as the vector formed by the relative brightness values of the multiple pixels around the feature point. In this scheme the descriptor is defined by the brightness of the neighbouring pixels relative to the point itself: a neighbouring pixel brighter than the feature point contributes the relative brightness value 1, a darker one contributes -1, and one of equal brightness contributes 0. As before, the descriptor can be defined on a 3 × 3, 5 × 5, 7 × 7, 9 × 9 or other small template. For example, the relative-brightness descriptor of the 3 × 3 pixels shown in FIG. 7 can be expressed as the vector v = (1, -1, -1, 1, 0, 1, -1, 1, -1), the relative brightness values being arranged from left to right and top to bottom. In the feature point matching of step S107, the matching degree is obtained by computing the dot product of two descriptor vectors: the larger the value, the better the two feature points match. After the best-matching feature points have been obtained, the depth and the three-dimensional point coordinates are computed from the disparity exactly as in the previous embodiment, and the details are not repeated here.
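The relative-brightness descriptor can be sketched as below; the function name is illustrative, and the 3 × 3 example values are chosen so that the result reproduces the vector v given for FIG. 7:

```python
import numpy as np

def relative_brightness_descriptor(img, x, y, half=1):
    """Ternary descriptor of the feature point at (x, y): each pixel of the
    template (left to right, top to bottom, centre included) contributes
    1 if brighter than the feature point, -1 if darker, 0 if equal."""
    c = int(img[y, x])
    out = []
    for j in range(y - half, y + half + 1):
        for i in range(x - half, x + half + 1):
            v = int(img[j, i])
            out.append(1 if v > c else (-1 if v < c else 0))
    return np.array(out)

# 3x3 template whose descriptor matches the vector given for FIG. 7;
# the centre pixel compares equal to itself and contributes the 0.
img = np.array([[9, 1, 1],
                [9, 5, 9],
                [1, 9, 1]], dtype=np.uint8)
v = relative_brightness_descriptor(img, 1, 1)
print(v.tolist())   # [1, -1, -1, 1, 0, 1, -1, 1, -1]
# As before, the matching degree of two points is the dot product of
# their descriptor vectors, e.g. float(np.dot(v, v)).
```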
Using relative brightness to define the descriptor for three-dimensional reconstruction reduces, to a certain extent, the erroneous matches brought about by brightness errors, while greatly simplifying the computation of the matching algorithm and improving the performance of the system. This embodiment is likewise not confined to pattern light: other forms of structured light can be used, and three-dimensional reconstruction can even be achieved under ordinary natural illumination without any structured light.
The three-dimensional reconstruction processes in the embodiments described above can be completed either on a CPU or on a GPU. Comparing the two, the above operations must be completed point by point on a CPU, which is slow, whereas on a GPU they can be completed for all points at once, so the speed can be increased substantially. Besides this, the processes can also be implemented by means of an ASIC, an FPGA and the like.
According to another embodiment of the invention, a computing device is additionally provided, comprising a processor and a memory that establish a communication connection; the processor reads the program in the memory so as to execute the three-dimensional reconstruction method of FIG. 1.
According to another embodiment of the invention, a non-volatile storage medium is additionally provided, in which a program is stored; when the program is run by a computing device, the computing device executes the three-dimensional reconstruction method of FIG. 1.
According to another embodiment of the invention, a three-dimensional imaging device is additionally provided, comprising the aforementioned three-dimensional imaging system. The device may include two or more cameras and a structured-light device, and the structured-light device can be, but is not limited to, an infrared laser emitter.
The embodiments of the invention have been elaborated above in conjunction with the accompanying drawings, but the use of the technical solution of the invention is not confined to the various applications mentioned in the embodiments of this patent; various structures and modifications can easily be implemented with reference to the technical solution of the invention so as to achieve the various beneficial effects mentioned herein. Any changes made within the knowledge of a person skilled in the art without departing from the purpose of the invention shall fall within the scope covered by this patent.

Claims (28)

CN201810862898.1A | priority 2018-08-01 | filed 2018-08-01 | A kind of three-dimensional reconstruction method and 3-D imaging system | Pending | published as CN109087382A (en)

Priority Applications (1)

Application Number | Publication | Priority Date | Filing Date | Title
CN201810862898.1A | CN109087382A (en) | 2018-08-01 | 2018-08-01 | A kind of three-dimensional reconstruction method and 3-D imaging system


Publications (1)

Publication Number | Publication Date
CN109087382A (en) | 2018-12-25

Family

ID=64831081

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN201810862898.1A | Pending | CN109087382A (en) | 2018-08-01 | 2018-08-01 | A kind of three-dimensional reconstruction method and 3-D imaging system

Country Status (1)

Country | Link
CN (1) | CN109087382A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103761768A (en)* | 2014-01-22 | 2014-04-30 | 杭州匡伦科技有限公司 | Stereo matching method of three-dimensional reconstruction
CN105894574A (en)* | 2016-03-30 | 2016-08-24 | 清华大学深圳研究生院 | Binocular three-dimensional reconstruction method
CN108288292A (en)* | 2017-12-26 | 2018-07-17 | 中国科学院深圳先进技术研究院 | A kind of three-dimensional rebuilding method, device and equipment
CN108335331A (en)* | 2018-01-31 | 2018-07-27 | 华中科技大学 | A kind of coil of strip binocular visual positioning method and apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘同海 (Liu Tonghai): "Optimization of body-size parameter extraction algorithms and three-dimensional reconstruction of pig bodies based on binocular vision", CNKI Doctoral Electronic Journals (《CNKI博士电子期刊》)*

Cited By (26)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN109862262A (en)*2019-01-022019-06-07上海闻泰电子科技有限公司Image weakening method, device, terminal and storage medium
CN110009691B (en)*2019-03-282021-04-09北京清微智能科技有限公司Parallax image generation method and system based on binocular stereo vision matching
CN109993781A (en)*2019-03-282019-07-09北京清微智能科技有限公司Based on the matched anaglyph generation method of binocular stereo vision and system
CN110009691A (en)*2019-03-282019-07-12北京清微智能科技有限公司Based on the matched anaglyph generation method of binocular stereo vision and system
CN110009722A (en)*2019-04-162019-07-12成都四方伟业软件股份有限公司Three-dimensional rebuilding method and device
CN110146869A (en)*2019-05-212019-08-20北京百度网讯科技有限公司 Method, device, electronic device and storage medium for determining coordinate system conversion parameters
CN111986246B (en)*2019-05-242024-04-30北京四维图新科技股份有限公司 Three-dimensional model reconstruction method, device and storage medium based on image processing
CN111986246A (en)*2019-05-242020-11-24北京四维图新科技股份有限公司 3D model reconstruction method, device and storage medium based on image processing
CN110337674A (en)*2019-05-282019-10-15深圳市汇顶科技股份有限公司Three-dimensional rebuilding method, device, equipment and storage medium
CN112146848B (en)*2019-06-272022-02-25华为技术有限公司Method and device for determining distortion parameter of camera
CN112146848A (en)*2019-06-272020-12-29华为技术有限公司Method and device for determining distortion parameter of camera
CN110462693A (en)*2019-06-282019-11-15深圳市汇顶科技股份有限公司 Door lock and identification method
CN111056404A (en)*2019-12-242020-04-24安徽理工大学 A mine tank fault location system based on binocular vision and laser information fusion
WO2021218196A1 (en)*2020-04-292021-11-04奥比中光科技集团股份有限公司Depth imaging method and apparatus, and computer readable storage medium
CN111664798A (en)*2020-04-292020-09-15深圳奥比中光科技有限公司Depth imaging method and device and computer readable storage medium
US12188759B2 (en)2020-04-292025-01-07Orbbec Inc.Depth imaging method and device and computer-readable storage medium
CN112598808B (en)*2020-12-232024-04-02深圳大学Data processing method, device, electronic equipment and storage medium
CN112598808A (en)*2020-12-232021-04-02深圳大学Data processing method and device, electronic equipment and storage medium
CN114923665A (en)*2022-05-272022-08-19上海交通大学 Image reconstruction method and image reconstruction test system of wave three-dimensional height field
CN115346006A (en)*2022-10-202022-11-15潍坊歌尔电子有限公司3D reconstruction method, device, electronic equipment and storage medium
CN116091617A (en)*2022-12-222023-05-09成都航天科工大数据研究院有限公司 A tire position and posture detection method and system based on machine vision
CN116309325A (en)*2023-02-082023-06-23深圳市振华兴智能技术有限公司Patch detection method and system based on deep learning
CN116778188A (en)*2023-06-282023-09-19磅客策(上海)智能医疗科技有限公司 A hair information identification method, system and storage medium
CN117635849A (en)*2024-01-262024-03-01成都万联传感网络技术有限公司Dynamic real-time high-precision three-dimensional imaging system
CN117635849B (en)*2024-01-262024-04-09成都万联传感网络技术有限公司Dynamic real-time high-precision three-dimensional imaging system
CN118175423A (en)*2024-05-152024-06-11山东云海国创云计算装备产业创新中心有限公司Focal length determining system, method, equipment, medium and product

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication (application publication date: 20181225)
