Technical Field

The invention belongs to the field of machine vision research and relates to a method of three-dimensional reconstruction based on laser speckle structured light.
Background Art

Three-dimensional reconstruction is one of the central topics in machine vision research: recovering the three-dimensional spatial geometry of an object from its images. Common approaches obtain depth by triangulation, either from the binocular disparity of two cameras or by projecting structured light to encode the scene spatially and then triangulating.
The present invention obtains depth information with laser speckle structured light. Similar inventions such as Microsoft's Kinect also obtain object depth this way (matching different depths through the cross-correlation function of the laser speckle). The difference lies in the algorithm that recovers depth after the speckle is captured: the present invention refines the image into windows and classifies each pixel block in parallel with multiple support vector machines (SVMs), encoding the depth of every pixel window, and then inverts the camera model with this depth information to obtain the object's coordinates in the world coordinate system.
3D reconstruction generally solves for the world coordinates of an object point from a calibrated camera with known intrinsic and extrinsic parameters. Because of the constraints of the camera model and the system of equations, the solution can only be a ray equation; the three world coordinates of the point cannot be obtained directly from it. The present invention proposes a new way to obtain the depth coordinate of an object point and then solves all three coordinates in the world coordinate system directly from the linear camera model together with the radial-distortion equations.
When a laser passes through a rough transparent surface (such as ground glass) and is projected onto an object, irregularly distributed bright and dark spots, i.e. laser speckle, can be observed on the object's surface. Speckle arises because every point of the rough surface scatters the laser light, and every point in space receives coherent contributions from these scattered waves. By optical path, speckle fields fall into two kinds: those formed by propagation in free space (also called objective speckle) and those formed by lens imaging (also called subjective speckle). The present invention uses the latter.
The regular speckle formed at each point in space contains that point's depth information. The speckle pattern is captured by an infrared camera; after feature extraction and classifier (SVM) training, the depth information contained in the speckle formed at each point in space can finally be recovered.
From the obtained depth information, the world coordinates of each object point can then be solved through the camera-model formula.
Summary of the Invention
The camera calibration process usually adopts the classic pinhole imaging model, which is generally described by the following formula:

λp = K[R T]Pw,  with K = [[fu, s, u0], [0, fv, v0], [0, 0, 1]]
Here the homogeneous coordinates of an arbitrary spatial point P are Pw = (xw, yw, zw, 1)^T in the world coordinate system and p = (u, v, 1)^T in the image coordinate system. λ is an arbitrary scale factor; K is the camera intrinsic matrix, where s is the image skew factor, fu and fv are the scale factors from the physical coordinates of an image point to pixel coordinates in the u and v directions (the effective focal lengths), and (u0, v0) are the image coordinates of the intersection of the principal optical axis with the image plane. R is a 3×3 orthonormal rotation matrix and T is a translation vector; together, (R, T) give the pose of the camera coordinate system relative to the world coordinate system.
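As a numerical illustration of the pinhole model above, the following sketch projects a world point to pixel coordinates. The intrinsic values, pose, and test point are arbitrary assumptions for the example, not parameters of the invention (skew s is taken as 0):

```python
import numpy as np

def project(Pw, K, R, T):
    """Project a world point Pw (3,) to pixels via lambda * p = K [R|T] Pw."""
    Pc = R @ Pw + T          # world -> camera coordinates
    p = K @ Pc               # camera coordinates -> homogeneous pixels
    return p[:2] / p[2]      # divide out the scale factor lambda

# Illustrative intrinsics: fu = fv = 800, principal point (512, 384), s = 0
K = np.array([[800.0,   0.0, 512.0],
              [  0.0, 800.0, 384.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                # camera frame aligned with world frame
T = np.zeros(3)
uv = project(np.array([0.1, -0.2, 2.0]), K, R, T)  # point 2 m in front
# uv -> [552., 304.]
```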
With known intrinsic and extrinsic parameters, the above formula yields two equations in which the unknowns are the world coordinates of the object point; to solve them, one coordinate must first be determined. The present invention solves for the depth, thereby reducing the number of unknowns in the equations so that the remaining two coordinates can be computed.
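The paragraph above can be made concrete: once the depth coordinate zw is fixed, the two projection equations become linear in the remaining unknowns xw and yw and can be solved directly. A minimal sketch (parameter values are illustrative assumptions):

```python
import numpy as np

def backproject(u, v, zw, K, R, T):
    """Solve xw, yw from lambda*[u,v,1]^T = K[R|T]*[xw,yw,zw,1]^T given zw."""
    M = K @ np.hstack([R, T.reshape(3, 1)])   # 3x4 projection matrix
    # Eliminate lambda: row0 - u*row2 = 0 and row1 - v*row2 = 0
    r0 = M[0] - u * M[2]
    r1 = M[1] - v * M[2]
    A = np.array([[r0[0], r0[1]], [r1[0], r1[1]]])
    b = -np.array([r0[2] * zw + r0[3], r1[2] * zw + r1[3]])
    xw, yw = np.linalg.solve(A, b)
    return xw, yw

K = np.array([[800.0, 0.0, 512.0], [0.0, 800.0, 384.0], [0.0, 0.0, 1.0]])
R, T = np.eye(3), np.zeros(3)
# Pixel (552, 304) with depth 2.0 recovers the world point (0.1, -0.2)
xw, yw = backproject(552.0, 304.0, 2.0, K, R, T)
```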
Laser speckle is used to encode the space. The present invention uses subjective speckle, i.e. speckle formed through lens imaging, to encode a certain angular range of space, and an infrared camera captures the encoded image.
Features extracted from the encoded images at different distances serve as the feature vectors of the SVM training sets. Note that each image serves only as the training set of the SVM assigned to its own distance, so a group of SVMs is obtained. The precision of the finally solved depth is also determined here: within the range of the speckle encoding, the smaller the distance interval between images the better, but the number of SVMs grows accordingly, the subsequent depth computation takes longer, and the real-time performance of the system suffers considerably.
To compute the depth distance, features extracted from a speckle test image form the SVM test set; multi-class SVM encoding runs the multiple SVMs simultaneously to yield a binary code, which, multiplied by a precision coefficient, gives the concrete depth. When acquiring the depth of a target region, the distances of different regions and even different pixels differ. To obtain accurate depth, a window-extraction operation is applied to the test set when matching against the SVMs; for higher precision the windows may shrink to individual speckles or pixels, with features extracted per window.
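The multi-SVM encoding step can be sketched as follows. This is a toy model, not the invention's classifiers: each "SVM" here is a stand-in decision function that fires when a window's feature mean falls in its distance slot, and the one-bit-per-SVM reading is an assumption:

```python
import numpy as np

def classify_window(features, svms):
    """Run one window's feature vector through a bank of per-distance SVMs,
    collecting one bit per SVM (positive decision value -> 1)."""
    return np.array([1 if svm(features) > 0 else 0 for svm in svms])

# Toy bank of 10 "SVMs": SVM i fires when the feature mean lies in slot i
svms = [lambda f, i=i: (0.05 * i <= f.mean() < 0.05 * (i + 1)) - 0.5
        for i in range(10)]
bits = classify_window(np.full(5, 0.23), svms)   # mean 0.23 -> slot 4 fires
```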
After the depth information is obtained, the three-dimensional world coordinates of the object point are solved through the camera-model formula.
Brief Description of the Drawings
Fig. 1 is the main flow chart of the present invention.
Fig. 2 is a schematic diagram of the infrared SVM calibration process of the present invention.
Fig. 3 is a schematic diagram of the test set being fed into the SVMs for classification.
Fig. 4 is a schematic diagram of feature extraction and organization in the present invention.
Fig. 5 is the training flow chart of a single SVM in the present invention.
Fig. 6 is the SVM batch training flow chart of the present invention.
Fig. 7 is the SVM classification flow chart of the present invention.
Detailed Description
Basic hardware used in the present invention:
1. Infrared camera;
2. Laser emitter;
3. Three diffractive optical elements.
The embodiments of the present invention are set forth in detail below with reference to the accompanying drawings.
Fig. 1 shows the three-dimensional reconstruction flow of the present invention; the embodiment is described as follows.
1. The algorithm flow of the present invention is as follows:
(1) Initial calibration. The initial calibration determines the calibration accuracy during later use. Since the field of the invention is 3D reconstruction, the required calibration accuracy spans a wide range, and calibration methods of different precision can be chosen for different reconstruction scenarios.
(2) Selecting a suitable world coordinate system. When reconstructing a 3D scene, the camera's pose sometimes needs to be adjusted or the camera moved. If no movement is needed, the rotation and translation matrices do not change, and the later computation of 3D coordinates is unaffected. If movement is needed, then as long as a suitable world coordinate system is chosen, the rotation and translation matrices can be solved by analyzing the camera's trajectory as it moves.
(3) Laser speckle structured-light projection. The laser from the laser emitter is scattered by optical elements (such as diffractive optical elements) to produce the required speckle pattern. Since laser-safety regulations cap the zero-order energy at 0.4 mW, special optical elements are needed to reduce the zero-order energy of the scattered laser below this limit, for example the optical design for zero-order reduction patented in China by PrimeSense (patent application no. CN200880119911).
(4) As shown in Fig. 2, the scattered laser speckle covers a certain calibration range of space, obtainable from the scattering angle of the diffractive optics. Within this range, a dedicated calibration target marks the speckle at fixed distance intervals. For example, with a scattering range of −30° to +30° (both vertically and horizontally) and a depth range of 0.5–3.5 m, a calibration target larger than the scattering-angle range can capture a speckle image every 1 cm; this interval also determines the precision of the depth distances computed later.
(5) In step 4, a calibration speckle image is taken every 1 cm over the 3 m depth range, giving 300 images in total. Features are extracted from each image: the windowed speckle images are processed with PCA and, to compensate for cases where PCA fails to capture the relevant features, speckle brightness, diameter, and similar attributes are also extracted, forming 300 training sets. See Fig. 4.
(6) In step 5, the PCA algorithm proceeds as follows: for each of the 300 images, compute its normalized matrix Xi*; compute the covariance matrix C of Xi*; perform an eigendecomposition of C and select the eigenvectors corresponding to the p largest eigenvalues to form the projection matrix, the selection criterion being that the chosen eigenvalues sum to more than 90% of the total of all eigenvalues; finally, project the original sample matrix to obtain its principal components S*. The formulas used are:
(C − λi I) pi = 0
S* = X* P,  where P = [p1 … pp] is the projection matrix
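The PCA steps of step 6 can be sketched as below. The 90% energy criterion is from the text; the data, sizes, and the centering convention are illustrative assumptions:

```python
import numpy as np

def pca_project(X, energy=0.90):
    """Project samples X (n x d) onto the eigenvectors of the covariance
    matrix whose eigenvalues together exceed `energy` of the total."""
    Xc = X - X.mean(axis=0)                 # normalize (center) the samples
    C = np.cov(Xc, rowvar=False)            # covariance matrix C
    vals, vecs = np.linalg.eigh(C)          # eigendecomposition, ascending
    vals, vecs = vals[::-1], vecs[:, ::-1]  # reorder to descending
    ratio = np.cumsum(vals) / vals.sum()
    p = np.searchsorted(ratio, energy) + 1  # smallest p covering `energy`
    return Xc @ vecs[:, :p]                 # principal components S*

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
X[:, 0] *= 10.0                             # one dominant direction
S = pca_project(X)                          # reduces to a single component
```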
(7) Because of PCA's own limitations, under some conditions it extracts certain features poorly, so other features are added to the principal-component feature set as attributes to form a mixed feature. Concretely, speckle size and brightness join the principal-component features to form the mixed training set. The mixing rule is that the speckle attributes of the pixel block from which a principal component was extracted are treated as uniform: for example, if a speckle lies in a 115×109 pixel matrix and the principal-component features of that matrix are selected, then during mixing the speckle attribute features within that matrix are taken as identical.
(8) The training sets are fed into the SVMs for training. The SVMs here form a group equal in number to the training sets; each SVM in the group is trained on one training set, and the SVMs may be organized in parallel or in cascade. See Fig. 6.
(9) In step 8 every SVM is trained the same way. As shown in Fig. 5, the procedure is: first choose the SVM kernel, here the RBF kernel, and set an initial value of its parameter σ; compute the feature space using the feature set extracted in step 7 as the RBF input; solve for the Lagrange-duality factors α and from them compute the classification parameters w and b; evaluate the resulting classifier's average accuracy by cross-validation. If the average accuracy exceeds 90%, training ends; otherwise σ is reset and training repeats until the σ with the highest average accuracy is found. Some of the formulas used are:
Ei = ri − yi,  η = K(x1, x1) + K(x2, x2) − 2K(x1, x2), where r is the geometric margin and y the class label.
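The train-and-tune loop of step 9 can be sketched with scikit-learn (not named in the text): train an RBF-kernel SVM and search the kernel width by cross-validation, keeping the best-scoring setting (scikit-learn's gamma corresponds to 1/(2σ²)). The data below is synthetic:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Toy stand-ins for speckle features: class 1 = "at this calibration
# distance", class 0 = "at other distances"
X = np.vstack([rng.normal(0.0, 0.3, (40, 5)),
               rng.normal(1.0, 0.3, (40, 5))])
y = np.array([1] * 40 + [0] * 40)

best_gamma, best_acc = None, 0.0
for gamma in [0.01, 0.1, 1.0, 10.0]:      # candidate kernel widths
    acc = cross_val_score(SVC(kernel="rbf", gamma=gamma), X, y, cv=5).mean()
    if acc > best_acc:                     # keep the best-performing setting
        best_gamma, best_acc = gamma, acc
```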
(10) Through the projection of step 3, a speckle image of a particular space can likewise be obtained; the classification flow starts here. See Fig. 7.
(11) Suppose the speckle image is 1080×768 pixels. The image must be windowed; to obtain depth as precise as possible at every point, the window is shrunk as far as timeliness allows. For example, with a 6×6 window the speckle image is divided into 180×128 blocks.
(12) Features are extracted from every window, the same features as extracted in step 5. If the extracted feature is a 5-dimensional row vector, then after this processing the 1080×768 image becomes a test set of 180×128×5 elements.
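The windowing and per-window extraction of steps 11–12 can be sketched as follows. The five per-block statistics here are toy stand-ins for the mixed PCA/speckle features of steps 5–7:

```python
import numpy as np

def window_features(img, win=6):
    """Split an image into win x win blocks and extract a 5-dim feature
    vector (mean, std, min, max, range) per block."""
    h, w = img.shape
    blocks = img.reshape(h // win, win, w // win, win).swapaxes(1, 2)
    flat = blocks.reshape(h // win, w // win, win * win)
    return np.stack([flat.mean(-1), flat.std(-1), flat.min(-1),
                     flat.max(-1), flat.max(-1) - flat.min(-1)], axis=-1)

img = np.zeros((768, 1080))    # rows x cols for a 1080x768 frame
feats = window_features(img)   # -> shape (128, 180, 5), the test set
```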
(13) The test set from step 12 is fed into the SVMs for classification; each test vector yields a 300-bit binary code, as shown in Fig. 3.
(14) The binary code obtained in step 13 is converted through the formula D = Ls + B·dist, where D is the depth distance, Ls the starting depth distance from step 4, B the SVM binary code converted to decimal, and dist the interval distance. The formula gives the depth distance of each window, and the depth of every pixel is then obtained by applying an image interpolation algorithm between the windows.
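A minimal sketch of the decoding formula D = Ls + B·dist. Reading B as the index of the firing SVM (one SVM per calibration distance) is an assumption about the binary-to-decimal conversion; Ls = 0.5 m and dist = 1 cm follow the example numbers of steps 4–5:

```python
import numpy as np

def decode_depth(codes, Ls=0.5, dist=0.01):
    """Turn each window's SVM bit vector into a depth D = Ls + B * dist,
    taking B as the index of the set bit."""
    B = codes.argmax(axis=-1)      # index of the firing SVM per window
    return Ls + B * dist

# Two windows: SVM #0 fires for the first, SVM #250 for the second
codes = np.zeros((2, 300), dtype=int)
codes[0, 0] = 1
codes[1, 250] = 1
D = decode_depth(codes)            # -> [0.5, 3.0] metres
```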
(15) With the depth coordinate known, the remaining coordinates of each object point in the world coordinate system are solved through the camera-model formula, completing the three-dimensional reconstruction.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410190263.3ACN103971405A (en) | 2014-05-06 | 2014-05-06 | Method for three-dimensional reconstruction of laser speckle structured light and depth information |
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410190263.3ACN103971405A (en) | 2014-05-06 | 2014-05-06 | Method for three-dimensional reconstruction of laser speckle structured light and depth information |
| Publication Number | Publication Date |
|---|---|
| CN103971405Atrue CN103971405A (en) | 2014-08-06 |
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201410190263.3APendingCN103971405A (en) | 2014-05-06 | 2014-05-06 | Method for three-dimensional reconstruction of laser speckle structured light and depth information |
| Country | Link |
|---|---|
| CN (1) | CN103971405A (en) |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104360633A (en)* | 2014-10-10 | 2015-02-18 | 南开大学 | Human-computer interaction system for service robot |
| CN105468375A (en)* | 2015-11-30 | 2016-04-06 | 扬州大学 | Surface structure light point cloud data oriented corresponding point search structure construction method |
| CN105675549A (en)* | 2016-01-11 | 2016-06-15 | 武汉大学 | Portable crop parameter measurement and growth vigor intelligent analysis device and method |
| CN106352809A (en)* | 2016-08-24 | 2017-01-25 | 中国科学院上海光学精密机械研究所 | Method for detecting fogging depth of neodymium-doped laser phosphate glass surface and method for fogging removal |
| CN106576159A (en)* | 2015-06-23 | 2017-04-19 | 华为技术有限公司 | Photographing device and method for acquiring depth information |
| CN106643492A (en)* | 2016-11-18 | 2017-05-10 | 中国民航大学 | Aeroengine damaged blade three-dimensional digital speckle moulding method |
| CN107392874A (en)* | 2017-07-31 | 2017-11-24 | 广东欧珀移动通信有限公司 | Beauty treatment method, device and mobile device |
| CN107491302A (en)* | 2017-07-31 | 2017-12-19 | 广东欧珀移动通信有限公司 | terminal control method and device |
| CN107563304A (en)* | 2017-08-09 | 2018-01-09 | 广东欧珀移动通信有限公司 | Terminal equipment unlocking method and device, and terminal equipment |
| CN107833254A (en)* | 2017-10-11 | 2018-03-23 | 中国长光卫星技术有限公司 | A kind of camera calibration device based on diffraction optical element |
| CN108050955A (en)* | 2017-12-14 | 2018-05-18 | 合肥工业大学 | Based on structured light projection and the relevant high temperature air disturbance filtering method of digital picture |
| CN108645353A (en)* | 2018-05-14 | 2018-10-12 | 四川川大智胜软件股份有限公司 | Three dimensional data collection system and method based on the random binary coding light field of multiframe |
| CN108696682A (en)* | 2018-04-28 | 2018-10-23 | Oppo广东移动通信有限公司 | Data processing method and device, electronic equipment and computer readable storage medium |
| CN108921027A (en)* | 2018-06-01 | 2018-11-30 | 杭州荣跃科技有限公司 | A kind of running disorder object recognition methods based on laser speckle three-dimensional reconstruction |
| CN109087350A (en)* | 2018-08-07 | 2018-12-25 | 西安电子科技大学 | Fluid light intensity three-dimensional rebuilding method based on projective geometry |
| CN109100740A (en)* | 2018-04-24 | 2018-12-28 | 北京航空航天大学 | A kind of three-dimensional image imaging device, imaging method and system |
| CN109102559A (en)* | 2018-08-16 | 2018-12-28 | Oppo广东移动通信有限公司 | three-dimensional model processing method and device |
| CN109167904A (en)* | 2018-10-31 | 2019-01-08 | Oppo广东移动通信有限公司 | Image acquiring method, image acquiring device, structure optical assembly and electronic device |
| CN109405765A (en)* | 2018-10-23 | 2019-03-01 | 北京的卢深视科技有限公司 | A kind of high accuracy depth calculation method and system based on pattern light |
| CN109581327A (en)* | 2018-11-20 | 2019-04-05 | 天津大学 | Totally-enclosed Laser emission base station and its implementation |
| CN109798838A (en)* | 2018-12-19 | 2019-05-24 | 西安交通大学 | A kind of ToF depth transducer and its distance measuring method based on laser speckle projection |
| CN109887022A (en)* | 2019-02-25 | 2019-06-14 | 北京超维度计算科技有限公司 | A kind of characteristic point matching method of binocular depth camera |
| CN109978809A (en)* | 2017-12-26 | 2019-07-05 | 同方威视技术股份有限公司 | Image processing method, device and computer readable storage medium |
| CN110012206A (en)* | 2019-05-24 | 2019-07-12 | Oppo广东移动通信有限公司 | Image acquisition method, image acquisition device, electronic apparatus, and readable storage medium |
| CN110009673A (en)* | 2019-04-01 | 2019-07-12 | 四川深瑞视科技有限公司 | Depth information detection method, device and electronic equipment |
| CN110264573A (en)* | 2019-05-31 | 2019-09-20 | 中国科学院深圳先进技术研究院 | Three-dimensional rebuilding method, device, terminal device and storage medium based on structure light |
| CN110337674A (en)* | 2019-05-28 | 2019-10-15 | 深圳市汇顶科技股份有限公司 | Three-dimensional rebuilding method, device, equipment and storage medium |
| CN110415226A (en)* | 2019-07-23 | 2019-11-05 | Oppo广东移动通信有限公司 | Stray light measurement method, device, electronic equipment and storage medium |
| CN110969656A (en)* | 2019-12-10 | 2020-04-07 | 长春精仪光电技术有限公司 | Airborne equipment-based laser beam spot size detection method |
| CN112669362A (en)* | 2021-01-12 | 2021-04-16 | 四川深瑞视科技有限公司 | Depth information acquisition method, device and system based on speckles |
| US11050918B2 (en) | 2018-04-28 | 2021-06-29 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for performing image processing, and computer readable storage medium |
| CN113379816A (en)* | 2021-06-29 | 2021-09-10 | 北京的卢深视科技有限公司 | Structure change detection method, electronic device, and storage medium |
| US11126016B2 (en) | 2016-04-04 | 2021-09-21 | Carl Zeiss Vision International Gmbh | Method and device for determining parameters for spectacle fitting |
| CN113902819A (en)* | 2020-06-22 | 2022-01-07 | 深圳大学 | Method, apparatus, computer device and storage medium for imaging through scattering medium |
| CN118397201A (en)* | 2024-06-28 | 2024-07-26 | 中国人民解放军国防科技大学 | Image reconstruction method and device for raw light field data of focusing light field camera |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101984767A (en)* | 2008-01-21 | 2011-03-09 | 普莱姆森斯有限公司 | Optical design for zero-order reduction |
| US20120056982A1 (en)* | 2010-09-08 | 2012-03-08 | Microsoft Corporation | Depth camera based on structured light and stereo vision |
| CN103279987A (en)* | 2013-06-18 | 2013-09-04 | 厦门理工学院 | Object fast three-dimensional modeling method based on Kinect |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101984767A (en)* | 2008-01-21 | 2011-03-09 | 普莱姆森斯有限公司 | Optical design for zero-order reduction |
| US20120056982A1 (en)* | 2010-09-08 | 2012-03-08 | Microsoft Corporation | Depth camera based on structured light and stereo vision |
| CN103279987A (en)* | 2013-06-18 | 2013-09-04 | 厦门理工学院 | Object fast three-dimensional modeling method based on Kinect |
| Title |
|---|
| 牛连丁等: "基于支持向量机的图像深度提取方法", 《哈尔滨商业大学学报(自然科学版)》* |
| 范哲: "基于Kinect的三维重建", 《中国硕士学位论文全文数据库信息科技辑》* |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104360633A (en)* | 2014-10-10 | 2015-02-18 | 南开大学 | Human-computer interaction system for service robot |
| US10560686B2 (en) | 2015-06-23 | 2020-02-11 | Huawei Technologies Co., Ltd. | Photographing device and method for obtaining depth information |
| CN106576159A (en)* | 2015-06-23 | 2017-04-19 | 华为技术有限公司 | Photographing device and method for acquiring depth information |
| CN105468375A (en)* | 2015-11-30 | 2016-04-06 | 扬州大学 | Surface structure light point cloud data oriented corresponding point search structure construction method |
| CN105468375B (en)* | 2015-11-30 | 2019-03-05 | 扬州大学 | A kind of construction method of the corresponding points searching structure towards area-structure light point cloud data |
| CN105675549A (en)* | 2016-01-11 | 2016-06-15 | 武汉大学 | Portable crop parameter measurement and growth vigor intelligent analysis device and method |
| CN105675549B (en)* | 2016-01-11 | 2019-03-19 | 武汉大学 | A kind of Portable rural crop parameter measurement and growing way intellectual analysis device and method |
| US11867978B2 (en) | 2016-04-04 | 2024-01-09 | Carl Zeiss Vision International Gmbh | Method and device for determining parameters for spectacle fitting |
| US11126016B2 (en) | 2016-04-04 | 2021-09-21 | Carl Zeiss Vision International Gmbh | Method and device for determining parameters for spectacle fitting |
| CN106352809B (en)* | 2016-08-24 | 2018-11-20 | 中国科学院上海光学精密机械研究所 | The detection method and hair mist minimizing technology of phosphate laser neodymium glass surface steaminess degree |
| CN106352809A (en)* | 2016-08-24 | 2017-01-25 | 中国科学院上海光学精密机械研究所 | Method for detecting fogging depth of neodymium-doped laser phosphate glass surface and method for fogging removal |
| CN106643492B (en)* | 2016-11-18 | 2018-11-02 | 中国民航大学 | A kind of aero-engine damaged blade 3-dimensional digital speckle formative method |
| CN106643492A (en)* | 2016-11-18 | 2017-05-10 | 中国民航大学 | Aeroengine damaged blade three-dimensional digital speckle moulding method |
| CN107491302A (en)* | 2017-07-31 | 2017-12-19 | 广东欧珀移动通信有限公司 | terminal control method and device |
| CN107392874A (en)* | 2017-07-31 | 2017-11-24 | 广东欧珀移动通信有限公司 | Beauty treatment method, device and mobile device |
| CN107563304A (en)* | 2017-08-09 | 2018-01-09 | 广东欧珀移动通信有限公司 | Terminal equipment unlocking method and device, and terminal equipment |
| CN107563304B (en)* | 2017-08-09 | 2020-10-16 | Oppo广东移动通信有限公司 | Terminal device unlocking method and device, and terminal device |
| CN107833254A (en)* | 2017-10-11 | 2018-03-23 | 中国长光卫星技术有限公司 | A kind of camera calibration device based on diffraction optical element |
| CN108050955A (en)* | 2017-12-14 | 2018-05-18 | 合肥工业大学 | Based on structured light projection and the relevant high temperature air disturbance filtering method of digital picture |
| CN108050955B (en)* | 2017-12-14 | 2019-10-18 | 合肥工业大学 | High temperature air disturbance filtering method based on structured light projection and digital image correlation |
| CN109978809A (en)* | 2017-12-26 | 2019-07-05 | 同方威视技术股份有限公司 | Image processing method, device and computer readable storage medium |
| CN109100740A (en)* | 2018-04-24 | 2018-12-28 | 北京航空航天大学 | A kind of three-dimensional image imaging device, imaging method and system |
| US11050918B2 (en) | 2018-04-28 | 2021-06-29 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method and apparatus for performing image processing, and computer readable storage medium |
| CN108696682A (en)* | 2018-04-28 | 2018-10-23 | Oppo广东移动通信有限公司 | Data processing method and device, electronic equipment and computer readable storage medium |
| CN108645353A (en)* | 2018-05-14 | 2018-10-12 | 四川川大智胜软件股份有限公司 | Three dimensional data collection system and method based on the random binary coding light field of multiframe |
| CN108921027A (en)* | 2018-06-01 | 2018-11-30 | 杭州荣跃科技有限公司 | A kind of running disorder object recognition methods based on laser speckle three-dimensional reconstruction |
| CN109087350A (en)* | 2018-08-07 | 2018-12-25 | 西安电子科技大学 | Fluid light intensity three-dimensional rebuilding method based on projective geometry |
| CN109087350B (en)* | 2018-08-07 | 2020-06-26 | 西安电子科技大学 | A three-dimensional reconstruction method of fluid light intensity based on projective geometry |
| CN109102559A (en)* | 2018-08-16 | 2018-12-28 | Oppo广东移动通信有限公司 | three-dimensional model processing method and device |
| CN109405765A (en)* | 2018-10-23 | 2019-03-01 | 北京的卢深视科技有限公司 | A kind of high accuracy depth calculation method and system based on pattern light |
| CN109167904A (en)* | 2018-10-31 | 2019-01-08 | Oppo广东移动通信有限公司 | Image acquiring method, image acquiring device, structure optical assembly and electronic device |
| CN109167904B (en)* | 2018-10-31 | 2020-04-28 | Oppo广东移动通信有限公司 | Image acquisition method, image acquisition device, structured light assembly and electronic device |
| CN109581327A (en)* | 2018-11-20 | 2019-04-05 | 天津大学 | Totally-enclosed Laser emission base station and its implementation |
| CN109581327B (en)* | 2018-11-20 | 2023-07-18 | 天津大学 | Fully enclosed laser transmitting base station and its realization method |
| CN109798838A (en)* | 2018-12-19 | 2019-05-24 | 西安交通大学 | A kind of ToF depth transducer and its distance measuring method based on laser speckle projection |
| CN109798838B (en)* | 2018-12-19 | 2020-10-27 | 西安交通大学 | ToF depth sensor based on laser speckle projection and ranging method thereof |
| CN109887022A (en)* | 2019-02-25 | 2019-06-14 | 北京超维度计算科技有限公司 | A kind of characteristic point matching method of binocular depth camera |
| CN110009673A (en)* | 2019-04-01 | 2019-07-12 | 四川深瑞视科技有限公司 | Depth information detection method, device and electronic equipment |
| CN110012206A (en)* | 2019-05-24 | 2019-07-12 | Oppo广东移动通信有限公司 | Image acquisition method, image acquisition device, electronic apparatus, and readable storage medium |
| CN110337674A (en)* | 2019-05-28 | 2019-10-15 | 深圳市汇顶科技股份有限公司 | Three-dimensional rebuilding method, device, equipment and storage medium |
| WO2020237492A1 (en)* | 2019-05-28 | 2020-12-03 | 深圳市汇顶科技股份有限公司 | Three-dimensional reconstruction method, device, apparatus, and storage medium |
| CN110264573A (en)* | 2019-05-31 | 2019-09-20 | 中国科学院深圳先进技术研究院 | Three-dimensional rebuilding method, device, terminal device and storage medium based on structure light |
| CN110264573B (en)* | 2019-05-31 | 2022-02-18 | 中国科学院深圳先进技术研究院 | Three-dimensional reconstruction method and device based on structured light, terminal equipment and storage medium |
| CN110415226A (en)* | 2019-07-23 | 2019-11-05 | Oppo广东移动通信有限公司 | Stray light measurement method, device, electronic equipment and storage medium |
| CN110969656B (en)* | 2019-12-10 | 2023-05-12 | 长春精仪光电技术有限公司 | Detection method based on laser beam spot size of airborne equipment |
| CN110969656A (en)* | 2019-12-10 | 2020-04-07 | 长春精仪光电技术有限公司 | Airborne equipment-based laser beam spot size detection method |
| CN113902819A (en)* | 2020-06-22 | 2022-01-07 | 深圳大学 | Method, apparatus, computer device and storage medium for imaging through scattering medium |
| CN112669362A (en)* | 2021-01-12 | 2021-04-16 | 四川深瑞视科技有限公司 | Depth information acquisition method, device and system based on speckles |
| CN112669362B (en)* | 2021-01-12 | 2024-03-29 | 四川深瑞视科技有限公司 | Depth information acquisition method, device and system based on speckles |
| CN113379816A (en)* | 2021-06-29 | 2021-09-10 | 北京的卢深视科技有限公司 | Structure change detection method, electronic device, and storage medium |
| CN113379816B (en)* | 2021-06-29 | 2022-03-25 | 北京的卢深视科技有限公司 | Structure change detection method, electronic device, and storage medium |
| CN118397201A (en)* | 2024-06-28 | 2024-07-26 | 中国人民解放军国防科技大学 | Image reconstruction method and device for raw light field data of focusing light field camera |
| CN118397201B (en)* | 2024-06-28 | 2024-08-23 | 中国人民解放军国防科技大学 | Image reconstruction method and device for raw light field data of focusing light field camera |

| Publication | Publication Date | Title |
|---|---|---|
| CN103971405A (en) | Method for three-dimensional reconstruction of laser speckle structured light and depth information | |
| Smolyanskiy et al. | On the importance of stereo for accurate depth estimation: An efficient semi-supervised deep neural network approach | |
| JP6855587B2 (en) | Devices and methods for acquiring distance information from a viewpoint | |
| EP3343502B1 (en) | Depth sensor noise | |
| US9829309B2 (en) | Depth sensing method, device and system based on symbols array plane structured light | |
| US9454821B2 (en) | One method of depth perception based on binary laser speckle images | |
| US12260575B2 (en) | Scale-aware monocular localization and mapping | |
| CN112818925B (en) | Urban building and crown identification method | |
| CN105203034B (en) | Height and area measurement method based on a monocular-camera three-dimensional ranging model | |
| CN110443843A (en) | Unsupervised monocular depth estimation method based on a generative adversarial network | |
| KR20120071219A (en) | Apparatus and method for obtaining 3d depth information | |
| CN112132213A (en) | Sample image processing method and device, electronic equipment and storage medium | |
| CN113378760A (en) | Method and device for training a target detection model and detecting targets | |
| CN103761519A (en) | Non-contact gaze tracking method based on adaptive calibration | |
| CN104079827A (en) | Light field imaging automatic refocusing method | |
| US20180063506A1 (en) | Method for the 3d reconstruction of a scene | |
| CN114549548B (en) | Glass image segmentation method based on polarization clues | |
| CN103942802A (en) | Method for obtaining depth of a dynamic structured-light scene based on random templates | |
| CN117036442A (en) | Robust monocular depth completion method, system and storage medium | |
| Itu et al. | Automatic extrinsic camera parameters calibration using Convolutional Neural Networks | |
| CN116740665A (en) | Point cloud target detection method and device based on three-dimensional intersection-over-union | |
| CN104813217A (en) | Method for designing a passive single-channel imager capable of estimating depth of field | |
| Pereira et al. | Weather and meteorological optical range classification for autonomous driving | |
| Zhao et al. | Distance transform pooling neural network for LiDAR depth completion | |
| CN107529020A (en) | Image processing method and apparatus, electronic apparatus, and computer-readable storage medium |

| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2014-08-06 |