Summary of the invention
The present invention is directed to the defect in existing three-dimensional model reconstruction technology that targets must be manually marked, and provides a three-dimensional model reconstruction method and system, so that an accurate three-dimensional model of a target can be obtained without any manual marking of the target.
The solution of the present invention to the above problems is to provide a three-dimensional model reconstruction method, comprising the following steps:
S1, using at least one depth camera to perform image acquisition on a target, to obtain depth images of the target;
S2, preprocessing the obtained depth images;
S3, obtaining dense point cloud data from the depth maps of the target, and performing point cloud mesh reconstruction of the target depth information;
S4, fusing and registering the reconstructed multi-frame depth images to obtain a three-dimensional model.
In the three-dimensional model reconstruction method of the present invention, step S1 further comprises:
S11, using a color camera and a depth camera synchronously to obtain color-depth images.
In the three-dimensional model reconstruction method of the present invention, the preprocessing of step S2 comprises:
Denoising, smoothing, and foreground/background segmentation.
In the three-dimensional model reconstruction method of the present invention, the step of registering the reconstructed multi-frame depth images in step S4 further comprises:
Performing pairwise local rigid registration on the generated mesh sequence and selecting keyframe meshes, to reduce motion blur and data redundancy.
In the three-dimensional model reconstruction method of the present invention, the step of registering the reconstructed multi-frame depth images in step S4 further comprises:
Performing surface fusion and hole filling on all registered data.
The three-dimensional model reconstruction method of the present invention further comprises:
S5, saving the obtained three-dimensional model and building a three-dimensional model database.
The present invention also provides a three-dimensional model reconstruction system, comprising:
At least one depth camera, for performing image acquisition on a target to obtain depth images of the target;
An image processor connected to the depth camera, for preprocessing the obtained depth images;
A dense point cloud data generator connected to the image processor, for obtaining dense point cloud data from the depth maps of the target and performing point cloud mesh reconstruction of the target depth information;
A model reconstructor connected to the dense point cloud data generator, for fusing and registering the preprocessed multi-frame depth images to obtain a three-dimensional model.
The three-dimensional model reconstruction system of the present invention further comprises at least one color camera connected in parallel with the depth camera, for obtaining color-depth images synchronously with the depth camera.
In the three-dimensional model reconstruction system of the present invention, the image processor performs denoising, smoothing, and foreground/background segmentation on the images.
In the three-dimensional model reconstruction system of the present invention, the model reconstructor performs pairwise local rigid registration on the generated mesh sequence and selects keyframe meshes, to reduce motion blur and data redundancy.
In the three-dimensional model reconstruction system of the present invention, the model reconstructor performs surface fusion and hole filling on all registered data.
The three-dimensional model reconstruction system of the present invention further comprises a three-dimensional model database connected to the model reconstructor, for saving the obtained three-dimensional models.
By implementing the three-dimensional model reconstruction method and system of the present invention, a model can be built directly from the depth camera's own recognition when establishing a three-dimensional model, without requiring the user to manually select key identification points in the image, thereby improving the precision and speed of obtaining the three-dimensional model. Moreover, through cooperation with a color camera, two different recognition needs, depth-image-only and color-depth, are both satisfied, providing users with more choices.
Embodiment
The present invention addresses the defects of existing three-dimensional model reconstruction, in which key identification points must be manually marked by the user before a three-dimensional model can be established, making operation inconvenient and precision low. In particular, by optimizing the way the depth point cloud is established and the way the mesh is reconstructed, image stitching and fusion can be performed without requiring the user to manually select key identification points in the image, thereby improving the precision and speed of obtaining the three-dimensional model.
The invention is now described in detail with embodiments, in conjunction with the accompanying drawings.
Fig. 1 is a flowchart of the three-dimensional model reconstruction method provided by a preferred embodiment of the present invention. In this embodiment, step S1 is performed first: at least one depth camera is used to perform multi-angle, continuous acquisition of the target to be modeled, generating multiple depth maps. The depth cameras used in this step include but are not limited to the following kinds: TOF (Time Of Flight) cameras, structured light, binocular cameras, laser scanning, etc. A TOF camera is preferably used in this embodiment: the TOF camera continuously sends light pulses to the subject, then receives the light returned from the object with a sensor, obtains the object distance from the flight time of the light pulses, and obtains a depth image from the resulting range data.
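The TOF principle described above reduces to d = c·t/2: the pulse travels to the object and back, so the one-way distance is half the round-trip path. A minimal sketch of this conversion (the function name and toy timing values are illustrative, not part of the invention):

```python
import numpy as np

# Speed of light in meters per second.
C = 299_792_458.0

def tof_depth(round_trip_times_s: np.ndarray) -> np.ndarray:
    """Convert measured light-pulse round-trip times to distances.

    The pulse travels to the object and back, so the one-way
    distance is half the round-trip path: d = c * t / 2.
    """
    return C * round_trip_times_s / 2.0

# A 2x2 grid of measured round-trip times (seconds) -> depth map (meters).
times = np.array([[6.67e-9, 1.33e-8],
                  [2.00e-8, 2.67e-8]])
depth_map = tof_depth(times)  # roughly 1 m, 2 m, 3 m, 4 m
```

Applying this per pixel over the sensor array yields the depth image used in the subsequent steps.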
The multiple depth images obtained need to be preprocessed in step S2. The preprocessing steps specifically include: denoising, smoothing, foreground/background segmentation, etc. The initially obtained depth images usually capture non-target objects in the background and environment, while the goal is to accurately reconstruct the real object as a three-dimensional model in the computer; the image of the target region can therefore be obtained with common denoising and smoothing algorithms. Through foreground/background segmentation, the object to be modeled is separated from the background. With a TOF camera, the target can be extracted from the background image directly by setting a threshold on the returned flight time. In addition, several different contour extraction algorithms can be used, including watershed, seed search, background subtraction, and binarization. This embodiment preferably uses the seed-search contour extraction approach: first, simple inter-frame foreground/background thresholding determines the target location, a seed is placed at the estimated target center within the profile, and then a depth search based on smoothness constraints diffuses the contour to generate an accurate depth silhouette. Morphological operations or the watershed algorithm can further refine the contour used for the three-dimensional model. Based on the assumption that inter-frame motion is always below a certain degree, the contour extraction result of the previous frame is used to refine and accelerate the extraction of the current frame.
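The flight-time (i.e. range) thresholding described above can be sketched as a simple mask over the depth map; the function name and the near/far band are illustrative assumptions:

```python
import numpy as np

def segment_foreground(depth_map: np.ndarray,
                       near: float, far: float) -> np.ndarray:
    """Return a boolean mask of pixels whose depth lies in [near, far].

    With a TOF camera, thresholding the measured range (i.e. the
    returned flight time) separates the target from the background.
    """
    return (depth_map >= near) & (depth_map <= far)

# Toy depth map (meters): the target sits ~1.5 m away, background ~4 m.
depth = np.array([[4.0, 1.4, 1.6],
                  [4.1, 1.5, 3.9]])
mask = segment_foreground(depth, near=1.0, far=2.0)
target_only = np.where(mask, depth, 0.0)  # zero out background pixels
```

The resulting mask would then serve as the starting region for the seed-search and morphological refinements described above.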
After the images have been preprocessed, step S3 is performed: dense point cloud data is obtained from the depth maps of the target, and point cloud mesh reconstruction of the target depth information is carried out. In prior techniques, building a depth point cloud and reconstructing a mesh required the user to manually select key points, which were then connected into a mesh. This embodiment does not use such a manual approach: the depth image itself is an ordered two-dimensional point set that already encodes the adjacency between its corresponding spatial points, so whether adjacent points should be connected is determined by different judgment criteria. Fig. 2(a) to Fig. 2(g), for example, show seven configurations of four points when building a triangular mesh: according to the spatial distance relations of these points, points at the same level, or points whose distance lies within a threshold, are connected into a triangular mesh. Of course, those skilled in the art can build meshes of different shapes according to the actual depth image.
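One way to realize this threshold-based connection of the ordered point set is to walk over every 2×2 pixel block and keep a candidate triangle only when its corner depths agree; this is a sketch under that assumption, not the patent's exact seven-case rule of Fig. 2:

```python
import numpy as np

def grid_triangles(depth: np.ndarray, max_jump: float) -> list:
    """Triangulate an ordered depth map into a mesh.

    For every 2x2 block of pixels, the two candidate triangles are
    kept only when the depth differences between their corners stay
    within `max_jump`, i.e. the points plausibly share one surface.
    """
    h, w = depth.shape
    tris = []  # each triangle is a tuple of three (row, col) indices
    for r in range(h - 1):
        for c in range(w - 1):
            quad = [(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)]
            for tri in [(quad[0], quad[2], quad[1]),
                        (quad[1], quad[2], quad[3])]:
                ds = [depth[p] for p in tri]
                if max(ds) - min(ds) <= max_jump:
                    tris.append(tri)
    return tris

# A flat 2x2 patch yields both triangles; a depth jump removes one.
flat = np.array([[1.0, 1.0], [1.0, 1.0]])
step = np.array([[1.0, 1.0], [1.0, 5.0]])
```

Because the traversal follows the image grid, no user-selected key points are needed, which is the point of step S3.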
After the mesh in each region has been established, these meshes need to be stitched together to generate the three-dimensional model. Step S4 in this embodiment operates as follows: all established meshes are denoted {N1, N2, ..., Nn}; the initial positions of these data are aligned in order, and for each mesh the displacement t along the three coordinate axes and the rotation r are computed: {txi, tyi, tzi, rxi, ryi, rzi}, denoted as the transformation matrix RTi. The computation can employ approaches such as random vertex sampling, projection-point matching, and point-to-plane distance error correction of the mesh Ni.
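The per-mesh transform {txi, tyi, tzi, rxi, ryi, rzi} can be estimated from matched point samples of two meshes. The patent does not name a specific solver; the SVD-based least-squares fit below (the Kabsch solution) is one standard choice, shown here as an illustrative sketch:

```python
import numpy as np

def fit_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t.

    src, dst: (N, 3) arrays of matched sample points from two meshes.
    Uses the SVD-based (Kabsch) solution on the centered points.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# A mesh sample translated by (1, 2, 3) is recovered exactly.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dst = src + np.array([1., 2., 3.])
R, t = fit_rigid_transform(src, dst)
```

Sampling `src` randomly from the vertices of Ni, as the text suggests, keeps the fit cheap on dense meshes.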
Different operations can be carried out for different acquisition targets. For a moving object, the distance between transformation matrices is calculated and a threshold is set; when the actual distance exceeds the threshold, the object is judged to be moving too fast and causing image blur, and the corresponding mesh frame can be discarded. That is, pairwise local rigid registration is performed on the generated mesh sequence, keyframe meshes are selected, and motion blur and data redundancy are reduced.
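This keyframe selection can be sketched by thresholding the magnitude of the inter-frame motion; for brevity the sketch uses only the translation part of each transform, which is an assumption of this illustration rather than the patent's definition of "distance between transformation matrices":

```python
import numpy as np

def select_keyframes(translations, max_shift: float):
    """Keep a mesh frame only when motion since the previous frame
    is small; large jumps indicate motion blur and are discarded.

    translations: list of (3,) displacement vectors t_i of each mesh
    relative to the previous one. Returns indices of the retained
    keyframe meshes.
    """
    keep = [0]                       # the first frame is always kept
    for i, t in enumerate(translations[1:], start=1):
        if np.linalg.norm(t) <= max_shift:
            keep.append(i)           # slow motion: frame is usable
        # else: the object moved too fast, frame is blurred, drop it
    return keep

shifts = [np.zeros(3), np.array([0.01, 0, 0]),
          np.array([0.50, 0, 0]), np.array([0.02, 0, 0])]
kept = select_keyframes(shifts, max_shift=0.05)
```

Dropping the blurred frame both removes the blur and reduces the data redundancy mentioned above.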
For a stationary object, this embodiment adopts globally stable sampling and local weighting, iterating over the nearest neighboring points in the meshes to find accurate stitching positions, and then performing fusion and registration to obtain the three-dimensional model.
Of course, in the above reconstruction process, necessary optimization operations must be applied to the mesh: surface fusion and hole filling are performed on all registered data, so that the generated three-dimensional model is a continuous, reliable image. After all optimization operations are complete, the generated three-dimensional model can be saved.
Preferably, in this application, step S4 distinguishes and processes the photographed object according to whether it is rigid or non-rigid:
For a rigid body, this application uses the inherent parameters of the structured light produced by the depth camera. Structured light generally has a periodically arranged pattern, such as grid-like or lattice-like light spots. When structured light is used to scan the object, the intersections of the grid-like spots or the points of the lattice-like spots are automatically and adaptively used as the selected feature points; that is, the parameter features of these structured light points serve as the landmark points in the fusion and registration process.
For a non-rigid body, randomly selected feature points are adopted instead. When structured light falls on a non-rigid body, the body can continually change its shape and structure, and the depth camera cannot automatically follow and adapt to the non-rigid body while shooting each frame; therefore randomly selected points serve as the landmark points in the fusion and registration process.
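The random landmark selection for the non-rigid case can be sketched as drawing k points from each frame's cloud; the function name, seed, and sample size are illustrative assumptions:

```python
import numpy as np

def random_landmarks(points: np.ndarray, k: int, seed: int = 0):
    """Pick k random points from a point cloud as landmark points.

    For a non-rigid object the structured-light intersections cannot
    be tracked frame to frame, so landmarks are drawn at random from
    each frame's cloud instead.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=k, replace=False)
    return points[idx]

cloud = np.random.default_rng(1).random((1000, 3))  # toy (N, 3) cloud
marks = random_landmarks(cloud, k=50)
```

Each frame draws its own sample, matching the text's note that the camera cannot carry one fixed landmark set across frames of a deforming object.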
However, a general object is usually neither completely rigid nor completely non-rigid; for this reason, this embodiment combines the rigid and non-rigid approaches by weighting.
Suppose the result of fusion and registration using the rigid approach is x, and the result of registration using the non-rigid approach is y; after weighting according to the present invention, the fusion-registration result obtained for a general object can be expressed as:
z=Ax+By;
where A and B are weighting indices, and z is the final registration result.
When the scanned object is a rigid body, A = 1 and B = 0; when the scanned object is non-rigid, A = 0 and B = 1.
For any object to be scanned, at most two adjustments of the weighting indices yield the best-matched weighting values, making the registration result optimal.
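The weighted combination z = Ax + By can be sketched directly, treating x and y as the registered vertex positions produced by the two modes and assuming A + B = 1 (consistent with the A = 1, B = 0 and A = 0, B = 1 endpoints above):

```python
import numpy as np

def blend_registrations(x: np.ndarray, y: np.ndarray, A: float) -> np.ndarray:
    """Weighted combination z = A*x + B*y of the rigid result x and
    the non-rigid result y, with B = 1 - A so the weights sum to one.

    x and y are assumed to be the registered vertex positions
    produced by the two modes for the same mesh.
    """
    B = 1.0 - A
    return A * x + B * y

x = np.array([[0.0, 0.0, 0.0]])   # rigid-mode registration result
y = np.array([[1.0, 1.0, 1.0]])   # non-rigid-mode registration result
z = blend_registrations(x, y, A=0.9)
```

With A = 1 the result reduces to the pure rigid case and with A = 0 to the pure non-rigid case, matching the endpoints stated above.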
Preferably, in the present invention, besides using multiple depth cameras to acquire depth images, a color camera can also be used for cooperative shooting at the same time. A color map of the subject is obtained while the depth image is captured, and in step S1 the depth image data is directly associated with the color image data when the depth image is established, so that the final three-dimensional model directly acquires the same color effect as the original subject. There are two specific ways. In the first, the data obtained by the depth camera and the color camera are directly concatenated into one matrix: for example, the depth camera data is denoted [D] and the color image [RGB], and the simply concatenated matrix is [D, RGB]. In the subsequent mesh reconstruction process only [D] is operated on; the [RGB] data merely follows along in the matrix without participating in the computation, and after all operations on [D] are completed, the result is colored with it, thereby generating a colored three-dimensional model. The other way is to build a strongly coupled matrix [D-RGB], in which the RGB data is also processed while the depth image is operated on, finally yielding a colored three-dimensional model.
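The first, simple-concatenation scheme can be sketched as building a 4-channel [D, RGB] image and reading only the depth channel during reconstruction; shapes and names here are illustrative, and pixel alignment between the two cameras is assumed:

```python
import numpy as np

def attach_color(depth: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    """Concatenate per-pixel depth [D] and color [RGB] into one
    [D, RGB] matrix, i.e. a 4-channel image.

    Reconstruction then reads only channel 0; channels 1..3 ride
    along unchanged and color the final model afterwards.
    """
    return np.concatenate([depth[..., None], rgb], axis=-1)

depth = np.ones((2, 2))                  # H x W depth map
rgb = np.zeros((2, 2, 3), dtype=float)   # H x W x 3 color image
drgb = attach_color(depth, rgb)          # H x W x 4 joint matrix
d_only = drgb[..., 0]                    # the [D] part used for meshing
```

The strongly coupled [D-RGB] variant would instead feed all four channels through the registration itself, at higher cost.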
On the other hand, the present invention also provides a system for realizing the above three-dimensional model reconstruction, whose structure is shown in Fig. 3. The system comprises at least one depth camera, for performing image acquisition on a target to obtain depth images of the target; at least one color camera, for shooting cooperatively with the depth camera to obtain color-depth images; an image processor connected to the depth camera and color camera, for preprocessing the obtained depth images; a dense point cloud data generator connected to the image processor, for obtaining dense point cloud data from the depth maps of the target and performing point cloud mesh reconstruction of the target depth information; and a model reconstructor connected to the dense point cloud data generator, for fusing and registering the preprocessed multi-frame depth images to obtain a three-dimensional model.
Preferably, the dense point cloud data generator, image processor, model reconstructor, and three-dimensional model database are all built into a computer system, which carries out the relevant functions; the depth camera and color camera are each connected to this computer system via data lines or wireless communication, and send the relevant data to the computer system.
When this system is adopted to carry out three-dimensional model reconstruction, original image data is first obtained by the depth camera and color camera and sent to the image processor in the computer system for preprocessing. The image processor applies the relevant image algorithms to perform denoising, smoothing, foreground/background segmentation, and similar operations, separating the three-dimensional model to be processed from the background and obtaining a relatively clear image.
The images preprocessed by the image processor are then sent to the dense point cloud data generator for point cloud generation and mesh reconstruction. In the dense point cloud data generator, the information of the depth image is used directly, avoiding manual selection of key points by the user and directly generating an accurate three-dimensional model mesh.
The generated meshes are sent to the model reconstructor for the three-dimensional model reconstruction operation. The model reconstructor handles the two different situations, dynamic and static, separately: for the dynamic situation, the distance between transformation matrices is calculated and a threshold is set; when the actual distance exceeds the threshold, the object is judged to be moving too fast and causing image blur, and the corresponding mesh frame can be discarded. That is, pairwise local rigid registration is performed on the generated mesh sequence, keyframe meshes are selected, and motion blur and data redundancy are reduced. For a stationary object, globally stable sampling and local weighting are adopted, iterating over the nearest neighboring points in the meshes to find accurate stitching positions, and then performing fusion and registration to obtain the three-dimensional model.
Finally, the obtained three-dimensional model is sent to the three-dimensional model database for saving, for subsequent use such as network display.
To show more clearly how the present invention reconstructs an actual three-dimensional object, the three-dimensional model reconstruction process for a backpack is described in detail below. First, for the backpack placed as in Fig. 4, a depth camera is used to obtain its corresponding depth image, as shown in Fig. 5.
To obtain a more accurate three-dimensional model, the operation is usually repeated from multiple angles. For example, the shooting angle of the depth camera is changed, as shown in Fig. 6 and Fig. 8, and the corresponding depth images are obtained respectively, as shown in Fig. 7 and Fig. 9. Step S1 of the present invention is now complete.
Then the depth images are preprocessed; these preprocessing steps specifically include denoising, smoothing, foreground/background segmentation, etc. The specific preprocessing means are selected according to the specific shooting requirements on the depth images. Step S2 of the present invention is now complete.
Next, step S3 is carried out: dense point cloud data is obtained from the depth maps of the target, and point cloud mesh reconstruction of the target depth information is performed. In the present invention, either the rigid or the non-rigid mode can be adopted. When the backpack is placed still, its surface can be considered not to change over time, so it can be treated as a rigid body. Following the rigid-body operation described above, the inherent parameters of the structured light produced by the depth camera are used. For example, Fig. 10 shows a common array-type structured light pattern produced by the depth camera. When these light spots fall on the backpack, the points are reconstructed as a depth information point cloud mesh. During reconstruction, with reference to Fig. 2, points with high correlation across the depth maps of different angles are merged together, and points with low correlation are omitted. The depth information point cloud mesh reconstruction of the three backpack depth maps is thus complete.
Then the three reconstructed backpack depth images are fused and registered to obtain the three-dimensional model. Step S4 is thus complete.
Of course, if the backpack's own weight may cause deformation during shooting, the rigid mode in step S3 can be replaced by the non-rigid mode. In that case the structured light built into the depth camera is no longer used; instead, a randomly generated light spot lattice serves as the landmark points, and the depth information point cloud mesh reconstruction is carried out with them. During reconstruction, points with high correlation across the depth maps of different angles are still merged together, and points with low correlation are omitted. The only difference is that each point in the random lattice must be iteratively compared with all other points to confirm its correlation. Since every point takes part in the computation, the result obtained is more accurate.
Because the random-lattice computation for a completely non-rigid body is heavy, to save computation time the rigid and non-rigid modes are usually combined and given suitable weights. Suppose the result of fusion and registration using the rigid approach is x, and the result of registration using the non-rigid approach is y; after weighting, the fusion-registration result obtained for the backpack can be expressed as:
z=Ax+By;
where A and B are weighting indices, and z is the final registration result. For the backpack, A is chosen in the range 0.85 to 0.95, with B correspondingly 0.15 to 0.05. The three-dimensional model shown in Fig. 11 is finally obtained. If an RGB color image is also obtained in step S1, a colored three-dimensional model can further be obtained.
Another concrete example is now described. First, depth images of a cup are obtained at three different angles (Fig. 12 to Fig. 14), as shown in Fig. 15 to Fig. 17.
Then, according to the structured light lattice shown in Fig. 10, the depth information point cloud mesh of the depth maps is built for the cup. Because the cup is a pure rigid body, the structured light in the depth camera can be used directly for the mesh analysis.
After the depth information point cloud mesh reconstruction of the three cup depth maps is completed, the three reconstructed cup depth images are fused and registered to obtain the three-dimensional model. After further optimization, the three-dimensional model of the cup shown in Fig. 18 is obtained.
In the above embodiments of the present invention, the object is photographed by a depth camera, and the relevant information in the depth image is used to intelligently generate the meshes and perform model reconstruction; the whole process requires no user selection, thereby accelerating model building and improving modeling accuracy.
The above are only specific embodiments of the present invention and do not limit its scope; equivalent changes made by those skilled in the art according to this invention, and variations known to those skilled in the art, shall all still fall within the scope covered by the present invention.