CN112489193A - Three-dimensional reconstruction method based on structured light - Google Patents

Three-dimensional reconstruction method based on structured light

Info

Publication number
CN112489193A
CN112489193A
Authority
CN
China
Prior art keywords
image
point
points
camera
structured light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011334701.0A
Other languages
Chinese (zh)
Other versions
CN112489193B (en)
Inventor
李锋
汪平
张勇停
臧利年
周斌斌
刘玉红
孙晗笑
叶童玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology
Priority to CN202011334701.0A
Publication of CN112489193A
Application granted
Publication of CN112489193B
Status: Active
Anticipated expiration

Abstract

The invention discloses a three-dimensional reconstruction method based on structured light, belonging to the technical field of three-dimensional reconstruction. The method comprises the following steps: calibrating the cameras and obtaining their internal and external parameters; obtaining a distortion mapping matrix from these parameters; using a projector to project an RGB-format structured-light dot pattern onto the surface of the target object; photographing the object with the left and right cameras to collect a left image and a right image; performing image segmentation and image point clustering; matching the points in the two views; and, after all sample points have been computed, reconstructing the surface of the target object. The beneficial effects of the invention are as follows: the combination of binocular stereo vision and structured light technology avoids calibration of the projector and simplifies the steps of three-dimensional reconstruction; the design of the projected RGB dot pattern and the iterative point segmentation method can effectively segment the dots in the image, giving higher matching accuracy between the left and right images; moreover, the invention enables online measurement and reconstructs well in regions with dull colors and sparse texture.


Description

Three-dimensional reconstruction method based on structured light
Technical Field
The invention relates to the technical field of three-dimensional reconstruction, in particular to a three-dimensional reconstruction method based on structured light.
Background
Computer vision means that a computer acquires descriptions and information of the objective world by processing pictures or picture sequences, so as to help people better understand the content of those pictures. Three-dimensional reconstruction is a branch of computer vision and a research direction that combines computer vision with computer graphics and image processing. It is widely applied in industrial automation, reverse engineering, cultural relic protection, computer-assisted medical treatment, virtual reality, augmented reality, robotics, and other scenarios.
Structured light three-dimensional reconstruction is one of the important techniques in computer vision. However, most existing methods require multiple projections of the designed pattern to obtain closed-form solutions, which makes them unable to measure dynamic objects. Most related systems are based on reconstruction from three-dimensional color images, edge detection of the images, and feature matching algorithms; the three colors are processed independently as R, G, and B channels, which artificially strips away the association between the color components of the image and degrades the reliability of detection.
In binocular stereo vision, a left image and a right image of an object are captured by two cameras from two angles; the corresponding (homonymous) points in the two images are then found with a stereo matching algorithm, and the three-dimensional spatial coordinates of the measured object are computed by triangulation, combining the internal and external parameters of the cameras. Binocular stereo vision does not need to actively project pattern information and has a simple hardware structure, but for objects with little surface texture it suffers from low point cloud accuracy, low reconstruction speed, and frequent matching errors. Structured light technology projects a specific coded pattern onto the surface of an object through a projector, captures the pattern modulated by the object surface with a camera, and recovers the depth information of the object by decoding the pattern. Structured light reconstruction offers high accuracy and speed, and even objects with little surface texture can be reconstructed well; however, most traditional structured light reconstruction systems are monocular, the projector must be calibrated in the process of computing depth information, and projector calibration is extremely tedious.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a three-dimensional reconstruction method that combines binocular stereo vision with structured light technology: structured light from active vision is used to enrich the texture features of the object surface, while the three-dimensional reconstruction itself is realized through passive vision. This helps improve the precision and efficiency of three-dimensional reconstruction and saves cost.
A three-dimensional reconstruction method based on structured light comprises the following steps:
S1. Build a three-dimensional reconstruction system comprising two cameras, a projector, and a computer;
S2. Calibrate the cameras and obtain their internal and external parameters;
S3. Obtain the distortion mapping matrix from the camera parameters;
S4. Use the projector to project the designed RGB dot structured light pattern onto the object;
S5. Use the cameras of the binocular stereo vision system to capture the left and right images of the reconstructed object, and perform stereo rectification on the images;
S6. Based on regional similarity, segment the red, green, and blue dots of the left and right images separately using the three-channel combined RGB dot segmentation method;
S7. Cluster dots of the same color according to the Euclidean distance;
S8. Match the dots in the left view with those in the right view, and obtain the disparity of the corresponding matching points on the left and right images with respect to a point P on the object;
S9. Combining the internal and external parameters of the cameras, obtain the three-dimensional spatial coordinates of each point on the object using the parallax principle;
S10. Generate a sparse point cloud of the object from the three-dimensional spatial coordinates of multiple points on the object, completing the three-dimensional reconstruction of the object.
Preferably, step S2 comprises the following sub-steps:
S21. First calibrate the left camera to obtain its internal and external parameters, the external parameters comprising a rotation matrix and a translation matrix;
S22. The internal parameters and rotation matrices of the left and right cameras are the same; the translation matrix of the right camera is obtained from the translation matrix of the left camera and the distance between the two cameras, thereby yielding the internal and external parameters of the right camera.
Preferably, in step S4, the RGB structured-light dot pattern is a structured-light dot matrix based on the RGB three primary colors.
Preferably, the RGB structured-light dot pattern is a structured-light dot matrix in which the dots of each row share one color (red, green, or blue) and adjacent rows have different colors.
Preferably, in S4, the structured light is projected onto the object from directly in front of it.
Preferably, step S5 comprises the following sub-steps:
S51. Based on the internal and external parameters and the distortion mapping matrix of the left camera, and those of the right camera, obtain the left and right rectification matrices respectively;
S52. Use the left rectification matrix to stereo-rectify the left image and the right rectification matrix to stereo-rectify the right image; a point in the left image processed by the left rectification matrix and its matching point in the right image processed by the right rectification matrix lie on the same scan line, i.e. the point and its matching point have the same y-axis coordinate.
Preferably, the step S6 includes the following sub-steps:
S601. Using the threshold selection method based on the slope difference distribution, compute the first threshold T for the dot image of the R channel;
S602. Segment the dot image with the threshold T, and label all points in the segmented binary image $I_1$ as:

$$I_1(x,y)=\begin{cases}k, & (x,y)\ \text{belongs to the }k\text{th labeled point}\\ 0, & \text{otherwise}\end{cases}\qquad(1)$$

where $(x,y)$ is the index of the binary image;
S603. Let the image resolution be $N_X\times N_Y$, and define the sets $X=\{1,2,\ldots,N_X\}$ and $Y=\{1,2,\ldots,N_Y\}$. The index set of the $k$th labeled point is computed as:

$$(X_k,Y_k)=\{(x,y)\mid I_1(x,y)=k\}\qquad(2)$$

The area $A_k$ of the $k$th labeled point is computed as:

$$A_k=|X_k|=|Y_k|\qquad(3)$$
S604. The area set $\{A_k\}_{k=1}^{N_B}$ is sorted as $\{A_{s_i}\}_{i=1}^{N_B}$, satisfying:

$$A_{s_1}\le A_{s_2}\le\cdots\le A_{s_{N_B}}\qquad(4)$$

Since the areas of the dots are similar, the differences between the sorted areas should not be large if all dots are segmented accurately enough; the accuracy of the segmentation result can therefore be judged from the computed differences of the sorted areas;
S605. The difference $D_i$ of the sorted areas is computed as:

$$D_i=A_{s_{i+1}}-A_{s_i}\qquad(5)$$

The maximum difference is computed as:

$$D_{\max}=\max D_i,\quad i=1,2,\ldots,N_B-1\qquad(6)$$
If the maximum difference $D_{\max}$ is greater than the region threshold (computed from the area set as one tenth of its median, see S610), the selected global threshold T is smaller than the optimal threshold, so some adjacent dots in the segmentation result are merged into one dot. The optimal threshold is defined as the threshold that can separate all bright and dark spots from the background. On the other hand, a threshold smaller than the optimal one segments the bright spots more completely; therefore, some of the dots segmented with the smaller threshold should be used, and the region threshold is used to select which segmented dots to keep;
S606. The average value of the area set $\{A_{s_i}\}_{i=1}^{N_B}$ is computed as:

$$A_m=\frac{1}{N_B}\sum_{i=1}^{N_B}A_{s_i}\qquad(7)$$
S607. The index set $(\hat{X},\hat{Y})$ of all segmented points whose area is smaller than $A_m$ is computed as:

$$(\hat{X},\hat{Y})=\bigcup_{k:\,A_k<A_m}(X_k,Y_k)\qquad(8)$$
S608. The global threshold is updated as:

$$T=T+\Delta T\qquad(9)$$

where $\Delta T$ is the step size of the loop, an integer greater than or equal to 1; to accelerate convergence, $\Delta T=10$ is selected;
S609. Segment the image again using the updated threshold;
S610. Repeat steps S601 to S609 until $D_{\max}$ is smaller than one tenth of the median of the area set;
S611. Suppose the above steps are repeated $m$ times, yielding the $m$th segmentation result $I_m$; the index set $(X_m,Y_m)$ of all segmented points is:

$$(X_m,Y_m)=\{(x,y)\mid I_m(x,y)>0\}\qquad(10)$$

S612. The final segmented image $I_R$ of resolution $N_X\times N_Y$ is initialized as:

$$I_R(x,y)=0,\quad x\in X,\ y\in Y\qquad(11)$$

and computed as:

$$I_R(x,y)=1,\quad (x,y)\in(X_m,Y_m)\qquad(12)$$
S613. Segment the images of the G channel and the B channel in the same way to obtain $I_G$ and $I_B$, and add the segmentation results to form the final segmentation result.
Preferably, the step S7 includes the following substeps:
S71. In each channel, dilate the segmented dots 5 times with the structuring element B = {0,0}, connecting adjacent dots to form straight-line images;
S72. Multiply the clustered straight-line image in each channel with the corresponding segmented dot image to generate a clustered dot image. In each channel, dots on different lines are assigned different identification numbers, including a line identification number and a row identification number.
Preferably, the step S8 includes the following substeps:
S81. In the two views of each channel, first match points according to their line identification numbers, then match points having the same line identification number according to their row identification numbers, thereby matching the clustered points in the two views;
S82. Obtain the pixel coordinates of the matched corner points of the left and right images, i.e. a corner point $l(x_l,y_l)$ of the left image and a corner point $r(x_r,y_r)$ of the right image;
S83. Since the images have been stereo-rectified to achieve row alignment, the y-axis coordinates of point $l$ and point $r$ are the same, and the disparity of the corresponding matching points on the left and right images with respect to point P on the object can be expressed directly as $d=x_l-x_r$.
Preferably, the step S9 includes the following substeps:
S91. From the disparity of the corresponding matching points on the left and right images with respect to point P on the object, and the optical centers of the left and right cameras, triangle $PO_lO_r$ is similar to triangle $Plr$, where the similar-triangle proportion formula is:

$$\frac{T-(x_l-x_r)}{Z-f}=\frac{T}{Z}\qquad(13)$$

where $T$ is the distance between the optical centers of the left and right cameras, $d=x_l-x_r$ is the disparity of the corresponding matching points on the left and right images with respect to point P, $f$ is the focal length of the left and right cameras, $Z$ is the depth value of point P, $O_l$ is the optical center of the left camera, and $O_r$ is the optical center of the right camera;
S92. Using formula (13), the three-dimensional coordinates $(X,Y,Z)$ of point P are obtained:

$$Z=\frac{fT}{x_l-x_r},\qquad X=\frac{x_l\cdot Z}{f},\qquad Y=\frac{y_l\cdot Z}{f}\qquad(14)$$

Finally, the three-dimensional coordinates $(X,Y,Z)$ of all points on the image are obtained.
The invention has the following beneficial effects. The combination of binocular stereo vision and structured light technology avoids calibration of the projector and simplifies the steps of three-dimensional reconstruction. The dual-view reconstruction method designed by the invention needs only one projection, so measurement of dynamic objects can be realized. The designed RGB structured-light dot pattern, together with the iterative point segmentation method based on the regional similarity of the dot pattern, can effectively segment the red, green, and blue dots of the three-channel combined RGB pattern separately, which facilitates the subsequent unsupervised point clustering and allows points in the left and right views to be matched rapidly. The three-dimensional reconstruction is realized through passive vision, which helps improve its precision and efficiency, and the method reconstructs well targets with dull colors, sparse texture, and little occlusion.
Drawings
FIG. 1 is a flow chart of the method for reconstructing an object surface based on the structured-light dot pattern;
FIG. 2 is a schematic diagram of the imaging system set up;
FIG. 3 is the designed RGB-format dot structured-light pattern;
FIG. 4 is a photograph of a spherical object onto which the regular structured light of the present invention is projected;
FIG. 5 is the imaging model for three-dimensional reconstruction;
FIG. 6 is a schematic view of the similar triangles used in the parallax principle according to the present invention.
Detailed Description
The geometric model adopted by the invention is shown in FIG. 5, where $O_l$ is the optical center of the left camera, $O_r$ is the optical center of the right camera, and P is any point in space; the optical centers of the left and right cameras and the point P form a plane $PO_lO_r$. $P_l$ and $P_r$ are the image points of P in the left and right cameras respectively, called a pair of homonymous points, and the intersection lines $L_{pl}$ and $L_{pr}$ of the plane $PO_lO_r$ with the left and right image planes are called a pair of epipolar lines.
The invention is further illustrated by the following figures and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings.
Example 1
Referring to FIGS. 1 and 2, a three-dimensional reconstruction method based on structured light comprises the following steps:
S1. Build a three-dimensional reconstruction system comprising two cameras, a projector, and a computer;
S2. Calibrate the cameras and obtain their internal and external parameters;
Step S2 comprises the following sub-steps:
S21. First calibrate the left camera to obtain its internal and external parameters, the external parameters comprising a rotation matrix and a translation matrix;
S22. The internal parameters and rotation matrices of the left and right cameras are the same; the translation matrix of the right camera is obtained from the translation matrix of the left camera and the distance between the two cameras, thereby yielding the internal and external parameters of the right camera.
That is, the calibrated rotation and translation matrices of the left and right cameras are $R_1,\ t_1$ and $R_2,\ t_2$ respectively, where $R_1=R_2$, $t_1=(x,y,z)^T$, $t_2=(x+d,y,z)^T$, and $d$ is the translation distance from the left camera to the right camera.
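For illustration only (not part of the original disclosure), this relationship can be written as a minimal NumPy sketch, under the stated assumption of identical rotation matrices and a pure x-axis baseline d:

```python
import numpy as np

def right_extrinsics(R1, t1, d):
    """Derive the right camera's extrinsics from the left camera's
    (assumption: identical rotation, baseline d along the x-axis)."""
    R2 = R1.copy()                     # R1 = R2
    t2 = t1 + np.array([d, 0.0, 0.0])  # t2 = (x + d, y, z)^T
    return R2, t2
```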
S3. Obtain the distortion mapping matrix from the camera parameters;
S4. Referring to FIGS. 2, 3 and 4, use the projector to project the designed RGB dot structured light pattern onto the object;
in step S4, the RGB structural dot pattern is a structured light lattice based on RGB three primary colors, and the structured light is projected from the front of the object to the object.
In this embodiment, the RBG structure dot pattern is a structured light dot matrix that can form the same color for each row of dots of red, green, and blue, and different colors for adjacent rows. The structured light lattice projecting the RGB three primary colors is chosen because the color structured light dot pattern is more favorable for matching of the feature points.
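For illustration only, the following NumPy sketch renders a dot matrix of this kind; the resolution, dot spacing, dot radius, and exact row color order are assumptions not specified by the patent:

```python
import numpy as np

def make_rgb_dot_pattern(h=768, w=1024, step=24, r=3):
    """Render a dot matrix in which every dot in a row shares one of
    R/G/B and adjacent rows cycle through different colors."""
    colors = np.eye(3)  # rows of red, green, blue, cycled (assumed order)
    img = np.zeros((h, w, 3))
    yy, xx = np.mgrid[0:h, 0:w]
    for i, cy in enumerate(range(step // 2, h, step)):
        color = colors[i % 3]
        for cx in range(step // 2, w, step):
            mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
            img[mask] = color  # paint one circular dot
    return img
```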
S5. Use the cameras of the binocular stereo vision system to capture the left and right images of the reconstructed object, and perform stereo rectification on the images;
Step S5 comprises the following sub-steps:
S51. Based on the internal and external parameters and the distortion mapping matrix of the left camera, and those of the right camera, obtain the left and right rectification matrices respectively;
S52. Use the left rectification matrix to stereo-rectify the left image and the right rectification matrix to stereo-rectify the right image; a point in the left image processed by the left rectification matrix and its matching point in the right image processed by the right rectification matrix lie on the same scan line, i.e. the point and its matching point have the same y-axis coordinate.
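The following is a hedged sketch of S51–S52 using OpenCV's standard rectification API; it stands in for, rather than reproduces, the patent's own correction matrices, and the calibration inputs are assumed to come from step S2:

```python
import cv2

def rectify_pair(left_img, right_img, K1, D1, K2, D2, R, T, size):
    """K*, D* are intrinsics/distortion from S2; R, T relate the two
    cameras; size is the image (width, height)."""
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    m1 = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    m2 = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    left_r = cv2.remap(left_img, *m1, cv2.INTER_LINEAR)
    right_r = cv2.remap(right_img, *m2, cv2.INTER_LINEAR)
    return left_r, right_r  # matching points now share the same scan line
```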
S6. Based on regional similarity, segment the red, green, and blue dots of the left and right images separately using the three-channel combined RGB dot segmentation method;
Step S6 comprises the following sub-steps:
S601. Using the threshold selection method based on the slope difference distribution, compute the first threshold T for the dot image of the R channel;
S602. Segment the dot image with the threshold T, and label all points in the segmented binary image $I_1$ as:

$$I_1(x,y)=\begin{cases}k, & (x,y)\ \text{belongs to the }k\text{th labeled point}\\ 0, & \text{otherwise}\end{cases}\qquad(1)$$

where $(x,y)$ is the index of the binary image;
S603. Let the image resolution be $N_X\times N_Y$, and define the sets $X=\{1,2,\ldots,N_X\}$ and $Y=\{1,2,\ldots,N_Y\}$. The index set of the $k$th labeled point is computed as:

$$(X_k,Y_k)=\{(x,y)\mid I_1(x,y)=k\}\qquad(2)$$

The area $A_k$ of the $k$th labeled point is computed as:

$$A_k=|X_k|=|Y_k|\qquad(3)$$
S604. The area set $\{A_k\}_{k=1}^{N_B}$ is sorted as $\{A_{s_i}\}_{i=1}^{N_B}$, satisfying:

$$A_{s_1}\le A_{s_2}\le\cdots\le A_{s_{N_B}}\qquad(4)$$

Since the areas of the dots are similar, the differences between the sorted areas should not be large if all dots are segmented accurately enough; the accuracy of the segmentation result can therefore be judged from the computed differences of the sorted areas;
S605. The difference $D_i$ of the sorted areas is computed as:

$$D_i=A_{s_{i+1}}-A_{s_i}\qquad(5)$$

The maximum difference is computed as:

$$D_{\max}=\max D_i,\quad i=1,2,\ldots,N_B-1\qquad(6)$$
If the maximum difference $D_{\max}$ is greater than the region threshold (computed from the area set as one tenth of its median, see S610), the selected global threshold T is smaller than the optimal threshold, so some adjacent dots in the segmentation result are merged into one dot. The optimal threshold is defined as the threshold that can separate all bright and dark spots from the background. On the other hand, a threshold smaller than the optimal one segments the bright spots more completely; therefore, some of the dots segmented with the smaller threshold should be used, and the region threshold is used to select which segmented dots to keep;
S606. The average value of the area set $\{A_{s_i}\}_{i=1}^{N_B}$ is computed as:

$$A_m=\frac{1}{N_B}\sum_{i=1}^{N_B}A_{s_i}\qquad(7)$$
S607. The index set $(\hat{X},\hat{Y})$ of all segmented points whose area is smaller than $A_m$ is computed as:

$$(\hat{X},\hat{Y})=\bigcup_{k:\,A_k<A_m}(X_k,Y_k)\qquad(8)$$
S608. The global threshold is updated as:

$$T=T+\Delta T\qquad(9)$$

where $\Delta T$ is the step size of the loop, an integer greater than or equal to 1; to accelerate convergence, $\Delta T=10$ is selected;
S609. Segment the image again using the updated threshold;
S610. Repeat steps S601 to S609 until $D_{\max}$ is smaller than one tenth of the median of the area set;
S611. Suppose the above steps are repeated $m$ times, yielding the $m$th segmentation result $I_m$; the index set $(X_m,Y_m)$ of all segmented points is:

$$(X_m,Y_m)=\{(x,y)\mid I_m(x,y)>0\}\qquad(10)$$

S612. The final segmented image $I_R$ of resolution $N_X\times N_Y$ is initialized as:

$$I_R(x,y)=0,\quad x\in X,\ y\in Y\qquad(11)$$

and computed as:

$$I_R(x,y)=1,\quad (x,y)\in(X_m,Y_m)\qquad(12)$$
S613. Segment the images of the G channel and the B channel in the same way to obtain $I_G$ and $I_B$, and add the segmentation results to form the final segmentation result.
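The following Python/SciPy sketch gives one plausible reading of the iterative segmentation loop S601–S613 for a single color channel; the slope-difference-distribution threshold of S601 is not reproduced here and is passed in as an initial value T0, so this is an assumption-laden illustration rather than the patent's exact procedure:

```python
import numpy as np
from scipy import ndimage

def segment_dots(channel, T0, dT=10, max_iter=50):
    """Iterative dot segmentation for one channel (hedged sketch).
    channel: 2-D intensity image; T0: initial global threshold (S601)."""
    T = T0
    keep = np.zeros(channel.shape, bool)        # retained dot pixels
    for _ in range(max_iter):
        labels, n = ndimage.label(channel > T)  # eq (1): label components
        if n < 2:
            break
        sizes = ndimage.sum(labels > 0, labels, np.arange(1, n + 1))
        areas = np.sort(sizes)                  # eqs (2)-(4): sorted areas
        D = np.diff(areas)                      # eq (5): area differences
        if D.size == 0 or D.max() < np.median(areas) / 10:
            keep |= labels > 0                  # S610 stop rule satisfied
            break
        Am = areas.mean()                       # eq (7): mean area
        small = np.isin(labels, np.flatnonzero(sizes < Am) + 1)
        keep |= small                           # eq (8): keep small dots
        T += dT                                 # eq (9): raise threshold
    return keep
```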
S7, clustering the points with the same color according to the Euclidean distance;
the step S7 includes the following substeps:
S71. In each channel, dilate the segmented dots 5 times with the structuring element B = {0,0}, connecting adjacent dots to form straight-line images;
S72. Multiply the clustered straight-line image in each channel with the corresponding segmented dot image to generate a clustered dot image. In each channel, dots on different lines are assigned different identification numbers, including a line identification number and a row identification number.
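A brief SciPy sketch of the dilation-based clustering of S71–S72 follows; since the structuring element B = {0,0} is not fully specified here, the default 4-connected structuring element is assumed:

```python
import numpy as np
from scipy import ndimage

def cluster_dots(dot_mask, iterations=5):
    """Dilate segmented dots so dots on the same projected row merge
    into line blobs (S71), then carry each line's ID back to the dots (S72)."""
    grown = ndimage.binary_dilation(dot_mask, iterations=iterations)
    lines, n_lines = ndimage.label(grown)  # one identification number per line
    clustered = lines * dot_mask           # keep line IDs only at dot pixels
    return clustered, n_lines
```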
S8, matching points in the left view and points in the right view, and obtaining the parallax of corresponding matching points on the left image and the right image relative to a point P on the object;
the step S8 includes the following substeps:
S81. In the two views of each channel, first match points according to their line identification numbers, then match points having the same line identification number according to their row identification numbers, thereby matching the clustered points in the two views;
S82. Obtain the pixel coordinates of the matched corner points of the left and right images, i.e. a corner point $l(x_l,y_l)$ of the left image and a corner point $r(x_r,y_r)$ of the right image;
S83. Since the images have been stereo-rectified to achieve row alignment, the y-axis coordinates of point $l$ and point $r$ are the same, and the disparity of the corresponding matching points on the left and right images with respect to point P on the object can be expressed directly as $d=x_l-x_r$.
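A minimal sketch of the identifier-based matching and disparity computation of S81–S83, assuming the clustered dots have already been keyed by their (line, row) identification numbers (a data layout chosen here purely for illustration):

```python
def match_dots(left_ids, right_ids):
    """left_ids / right_ids map a (line_id, row_id) pair to the dot's
    pixel coordinates (x, y); dots sharing both IDs correspond."""
    common = left_ids.keys() & right_ids.keys()
    return {k: (left_ids[k], right_ids[k]) for k in common}

def disparity(l, r):
    # S83: after rectification yl == yr, so the disparity is d = xl - xr
    return l[0] - r[0]
```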
S9. Combining the internal and external parameters of the cameras, obtain the three-dimensional spatial coordinates of each point on the object using the parallax principle;
Step S9 comprises the following sub-steps:
S91. From the disparity of the corresponding matching points on the left and right images with respect to point P on the object, and the optical centers of the left and right cameras, triangle $PO_lO_r$ is similar to triangle $Plr$, where the similar-triangle proportion formula is:

$$\frac{T-(x_l-x_r)}{Z-f}=\frac{T}{Z}\qquad(13)$$

where $T$ is the distance between the optical centers of the left and right cameras, $d=x_l-x_r$ is the disparity of the corresponding matching points on the left and right images with respect to point P on the object, $f$ is the focal length of the left and right cameras, $Z$ is the depth value of point P, $O_l$ is the optical center of the left camera, and $O_r$ is the optical center of the right camera;
S92. Using formula (13), the three-dimensional coordinates $(X,Y,Z)$ of point P are obtained:

$$Z=\frac{fT}{x_l-x_r},\qquad X=\frac{x_l\cdot Z}{f},\qquad Y=\frac{y_l\cdot Z}{f}\qquad(14)$$

Finally, the three-dimensional coordinates $(X,Y,Z)$ of all points on the image are obtained.
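A small NumPy sketch of equations (13) and (14): depth from disparity by similar triangles, then back-projection of the left-image point. With d in pixels and f, T in consistent units, Z comes out in the units of T:

```python
import numpy as np

def triangulate(xl, yl, d, f, T):
    """Solve (T - d)/(Z - f) = T/Z for Z, then back-project (xl, yl)."""
    Z = f * T / d       # eq (13) rearranged: Z = f*T/d
    X = xl * Z / f      # eq (14)
    Y = yl * Z / f
    return np.array([X, Y, Z])
```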
S10. Generate a sparse point cloud of the object from the three-dimensional spatial coordinates of multiple points on the object, completing the three-dimensional reconstruction of the object.

Claims (10)

1. A three-dimensional reconstruction method based on structured light, characterized by comprising the following steps:
S1. Build a three-dimensional reconstruction system comprising two cameras, a projector, and a computer;
S2. Calibrate the cameras, and obtain the internal and external parameters of the cameras;
S3. Obtain the distortion mapping matrix according to the camera parameters;
S4. Use the projector to project the designed RGB dot structured light pattern onto the object;
S5. Use the cameras of the binocular stereo vision system to capture the left and right images of the reconstructed object, and perform stereo rectification on the images;
S6. Based on regional similarity, segment the red, green, and blue dots of the left and right images separately using the three-channel combined RGB dot segmentation method;
S7. Cluster dots of the same color according to the Euclidean distance;
S8. Match the dots in the left view with those in the right view, and obtain the disparity of the corresponding matching points on the left and right images with respect to a point P on the object;
S9. Combining the internal and external parameters of the cameras, obtain the three-dimensional spatial coordinates of each point on the object using the parallax principle;
S10. Generate a sparse point cloud of the object from the three-dimensional spatial coordinates of multiple points on the object, completing the three-dimensional reconstruction of the object.

2. The three-dimensional reconstruction method based on structured light according to claim 1, characterized in that step S2 comprises the following sub-steps:
S21. First calibrate the left camera to obtain its internal and external parameters, the external parameters comprising a rotation matrix and a translation matrix;
S22. The internal parameters and rotation matrices of the left and right cameras are the same; the translation matrix of the right camera is obtained from the translation matrix of the left camera and the distance between the two cameras, thereby obtaining the internal and external parameters of the right camera.

3. The three-dimensional reconstruction method based on structured light according to claim 1, characterized in that in step S4 the RGB structured-light dot pattern is a structured-light dot matrix based on the RGB three primary colors.

4. The three-dimensional reconstruction method based on structured light according to claim 3, characterized in that the RGB structured-light dot pattern is a structured-light dot matrix in which the dots of each row share one color (red, green, or blue) and adjacent rows have different colors.

5. The three-dimensional reconstruction method based on structured light according to claim 1, characterized in that in S4 the structured light is projected onto the object from directly in front of the object.

6. The three-dimensional reconstruction method based on structured light according to claim 1, characterized in that step S5 comprises the following sub-steps:
S51. Based on the internal and external parameters and the distortion mapping matrix of the left camera, and those of the right camera, obtain the left and right rectification matrices respectively;
S52. Use the left rectification matrix to stereo-rectify the left image and the right rectification matrix to stereo-rectify the right image; a point in the left image processed by the left rectification matrix and its matching point in the right image processed by the right rectification matrix lie on the same scan line, i.e. the point and its matching point have the same y-axis coordinate.

7. The three-dimensional reconstruction method based on structured light according to claim 1, characterized in that step S6 comprises the following sub-steps:
S601. Using the threshold selection method based on the slope difference distribution, compute the first threshold T for the dot image of the R channel;
S602. Segment the dot image with the threshold T, and label all points in the segmented binary image $I_1$ as:

$$I_1(x,y)=\begin{cases}k, & (x,y)\ \text{belongs to the }k\text{th labeled point}\\ 0, & \text{otherwise}\end{cases}\qquad(1)$$

where $(x,y)$ is the index of the binary image;
S603. Let the image resolution be $N_X\times N_Y$, and define the sets $X=\{1,2,\ldots,N_X\}$ and $Y=\{1,2,\ldots,N_Y\}$; the index set of the $k$th labeled point is then computed as:

$$(X_k,Y_k)=\{(x,y)\mid I_1(x,y)=k\}\qquad(2)$$

and the area $A_k$ of the $k$th labeled point as:

$$A_k=|X_k|=|Y_k|\qquad(3)$$

S604. The area set $\{A_k\}_{k=1}^{N_B}$ is sorted as $\{A_{s_i}\}_{i=1}^{N_B}$, satisfying:

$$A_{s_1}\le A_{s_2}\le\cdots\le A_{s_{N_B}}\qquad(4)$$

S605. The difference $D_i$ of the sorted areas is computed as:

$$D_i=A_{s_{i+1}}-A_{s_i}\qquad(5)$$

and the maximum difference as:

$$D_{\max}=\max D_i,\quad i=1,2,\ldots,N_B-1\qquad(6)$$

S606. The average value of the area set $\{A_{s_i}\}_{i=1}^{N_B}$ is computed as:

$$A_m=\frac{1}{N_B}\sum_{i=1}^{N_B}A_{s_i}\qquad(7)$$

S607. The index set $(\hat{X},\hat{Y})$ of all segmented points whose area is smaller than $A_m$ is computed as:

$$(\hat{X},\hat{Y})=\bigcup_{k:\,A_k<A_m}(X_k,Y_k)\qquad(8)$$

S608. The global threshold is updated as:

$$T=T+\Delta T\qquad(9)$$

where $\Delta T$ is the step size of the loop, an integer greater than or equal to 1;
S609. Segment the image again using the updated threshold;
S610. Repeat steps S601 to S609 until $D_{\max}$ is smaller than one tenth of the median of the area set;
S611. Suppose the above steps are repeated $m$ times, yielding the $m$th segmentation result $I_m$; the index set $(X_m,Y_m)$ of all segmented points is:

$$(X_m,Y_m)=\{(x,y)\mid I_m(x,y)>0\}\qquad(10)$$

S612. The final segmented image $I_R$ of resolution $N_X\times N_Y$ is initialized as:

$$I_R(x,y)=0,\quad x\in X,\ y\in Y\qquad(11)$$

and computed as:

$$I_R(x,y)=1,\quad (x,y)\in(X_m,Y_m)\qquad(12)$$

S613. Segment the G-channel and B-channel images to obtain $I_G$ and $I_B$, and add the segmentation results to form the final segmentation result.

8. The three-dimensional reconstruction method based on structured light according to claim 1, characterized in that step S7 comprises the following sub-steps:
S71. In each channel, dilate the segmented dots 5 times with the structuring element B = {0,0}, connecting adjacent dots to form straight-line images;
S72. Multiply the clustered straight-line image in each channel with the corresponding segmented dot image to generate a clustered dot image; in each channel, dots on different lines are assigned different identification numbers, including a line identification number and a row identification number.

9. The three-dimensional reconstruction method based on structured light according to claim 1, characterized in that step S8 comprises the following sub-steps:
S81. In the two views of each channel, first match points according to their line identification numbers, then match points having the same line identification number according to their row identification numbers, thereby matching the clustered points in the two views;
S82. Obtain the pixel coordinates of the matched corner points of the left and right images, i.e. a corner point $l(x_l,y_l)$ of the left image and a corner point $r(x_r,y_r)$ of the right image;
S83. Since the images have been stereo-rectified to achieve row alignment, the y-axis coordinates of point $l$ and point $r$ are the same, and the disparity of the corresponding matching points on the left and right images with respect to point P on the object can be expressed directly as $d=x_l-x_r$.

10. The three-dimensional reconstruction method based on structured light according to claim 1, characterized in that step S9 comprises the following sub-steps:
S91. From the disparity of the corresponding matching points on the left and right images with respect to point P on the object, and the optical centers of the left and right cameras, triangle $PO_lO_r$ is similar to triangle $Plr$, where the similar-triangle proportion formula is:

$$\frac{T-(x_l-x_r)}{Z-f}=\frac{T}{Z}\qquad(13)$$

where $T$ is the distance between the optical centers of the left and right cameras, $d=x_l-x_r$ is the disparity of the corresponding matching points on the left and right images with respect to point P on the object, $f$ is the focal length of the left and right cameras, $Z$ is the depth value of point P, $O_l$ is the optical center of the left camera, and $O_r$ is the optical center of the right camera;
S92. Using formula (13), the three-dimensional coordinates $(X,Y,Z)$ of point P are obtained:

$$Z=\frac{fT}{x_l-x_r},\qquad X=\frac{x_l\cdot Z}{f},\qquad Y=\frac{y_l\cdot Z}{f}\qquad(14)$$

Finally, the three-dimensional coordinates $(X,Y,Z)$ of all points on the image are obtained.
CN202011334701.0A | Filed 2020-11-24 | Three-dimensional reconstruction method based on structured light | Active | Granted as CN112489193B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011334701.0A (granted as CN112489193B) | 2020-11-24 | 2020-11-24 | Three-dimensional reconstruction method based on structured light

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202011334701.0A (granted as CN112489193B) | 2020-11-24 | 2020-11-24 | Three-dimensional reconstruction method based on structured light

Publications (2)

Publication Number | Publication Date
CN112489193A (en) | 2021-03-12
CN112489193B (en) | 2024-06-14

Family

ID=74934011

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011334701.0A (Active, granted as CN112489193B) | Three-dimensional reconstruction method based on structured light | 2020-11-24 | 2020-11-24

Country Status (1)

Country | Link
CN | CN112489193B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101667303A* | 2009-09-29 | 2010-03-10 | 浙江工业大学 | Three-dimensional reconstruction method based on coding structured light
CN102422832A* | 2011-08-17 | 2012-04-25 | 中国农业大学 | Spray vision positioning system and positioning method
CN107945268A* | 2017-12-15 | 2018-04-20 | 深圳大学 | A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light
CN109191509A* | 2018-07-25 | 2019-01-11 | 广东工业大学 | A kind of virtual binocular three-dimensional reconstruction method based on structure light
CN110880186A* | 2018-09-06 | 2020-03-13 | 山东理工大学 | Real-time human hand three-dimensional measurement method based on one-time projection structured light parallel stripe pattern
CN110926339A* | 2018-09-19 | 2020-03-27 | 山东理工大学 | A real-time three-dimensional measurement method based on one-shot structured light parallel fringe pattern

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113066064A* | 2021-03-29 | 2021-07-02 | 郑州铁路职业技术学院 | Cone beam CT image biological structure identification and three-dimensional reconstruction system based on artificial intelligence
CN113066064B* | 2021-03-29 | 2023-06-06 | 郑州铁路职业技术学院 | Biological structure recognition and 3D reconstruction system of cone beam CT images based on artificial intelligence
WO2022218081A1* | 2021-04-14 | 2022-10-20 | 东莞埃科思科技有限公司 | Binocular camera and robot
CN114332349A* | 2021-11-17 | 2022-04-12 | 浙江智慧视频安防创新中心有限公司 | Binocular structured light edge reconstruction method and system and storage medium
CN114332349B* | 2021-11-17 | 2023-11-03 | 浙江视觉智能创新中心有限公司 | Binocular structured light edge reconstruction method, system and storage medium
CN114067056A* | 2021-11-18 | 2022-02-18 | 新拓三维技术(深圳)有限公司 | Data fusion method based on structured light
CN116958268A* | 2022-04-19 | 2023-10-27 | 湖北香城智能机电研究院有限公司 | A binocular vision three-dimensional reconstruction method based on Gray code structured light

Also Published As

Publication number | Publication date
CN112489193B (en) | 2024-06-14

Similar Documents

Publication | Title
CN112489193B (en) | Three-dimensional reconstruction method based on structured light
CN110853075B (en) | A visual tracking and localization method based on dense point cloud and synthetic view
Kaskman et al. | HomebrewedDB: RGB-D dataset for 6D pose estimation of 3D objects
US11348267B2 (en) | Method and apparatus for generating a three-dimensional model
CN106091984B (en) | A method for acquiring 3D point cloud data based on line laser
CN105184857B (en) | Monocular vision based on structure light ranging rebuilds mesoscale factor determination method
CN110728671B (en) | Vision-based dense reconstruction methods for textureless scenes
CN102938142B (en) | Indoor LiDAR missing data completion method based on Kinect
CN112801074B (en) | Depth map estimation method based on traffic camera
CN118657888A (en) | A sparse view 3D reconstruction method based on depth prior information
CN101299270A (en) | Multiple video cameras synchronous quick calibration method in three-dimensional scanning system
CN112132907A (en) | A camera calibration method, device, electronic device and storage medium
CN101551907B (en) | Method for multi-camera automated high-precision calibration
CN113160421B (en) | Projection-based spatial real object interaction virtual experiment method
CN108305233A (en) | A kind of light field image bearing calibration for microlens array error
CN114998448A (en) | A method for multi-constraint binocular fisheye camera calibration and spatial point localization
CN116309844A (en) | Three-dimensional measurement method based on single aviation picture of unmanned aerial vehicle
JP2000137815A (en) | New viewpoint image generation method
CN113393524A (en) | Target pose estimation method combining deep learning and contour point cloud reconstruction
CN205451195U (en) | Real-time three-dimensional point cloud reconstruction system based on multiple cameras
CN113963067B (en) | A calibration method using a small target to calibrate a vision sensor with a large field of view
CN117593618B (en) | Point cloud generation method based on neural radiance field and depth map
CN104182968A (en) | Method for segmenting fuzzy moving targets by wide-baseline multi-array optical detection system
CN112712566A (en) | Binocular stereo vision sensor measuring method based on structure parameter online correction
CN115359127B (en) | A polarization camera array calibration method suitable for multi-layer medium environment

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
