CN106651794B - A Projection Speckle Correction Method Based on Virtual Camera - Google Patents

A Projection Speckle Correction Method Based on Virtual Camera

Info

Publication number: CN106651794B
Application number: CN201611093813.5A
Authority: CN (China)
Prior art keywords: image, camera, speckle, virtual camera, correction
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN106651794A (en)
Inventors: 刘荣科, 杜秋晨, 潘宇
Current Assignee: Beihang University (the listed assignees may be inaccurate)
Original Assignee: Beijing University of Aeronautics and Astronautics
Application filed by Beijing University of Aeronautics and Astronautics
Priority to CN201611093813.5A
Publication of CN106651794A, followed by publication of CN106651794B upon grant
Status: Active; anticipated expiration tracked

Abstract

The invention discloses a projection speckle image correction method based on a virtual camera, belonging to the technical field of video research. The method constructs a virtual camera at the intersection of the camera image plane and the screen normal passing through the projector focal point, and computes the image offset in the virtual camera from the horizontal baseline distance between the camera and the projector. Feature points of the projection region are extracted from the image received by the camera, a transfer matrix is determined, and the projected speckle is transformed to the virtual camera plane to obtain the corrected projected speckle. The invention effectively solves the pixel-size mismatch between camera and projector images in structured-light depth extraction, avoids calibration of the camera and projector, is computationally simple and fast, and facilitates building a structured-light measurement system from a camera and a projector, thereby obtaining a higher-quality depth map.

Description

A Projection Speckle Correction Method Based on a Virtual Camera

Technical Field

The invention belongs to the field of structured-light-based sampling and reconstruction of complex scenes in stereoscopic video, and in particular relates to scene reconstruction when the camera and projector have different parameters and are placed at arbitrary positions.

Background

Humans perceive the external world primarily through vision; studies have shown that about 80% of all information humans acquire comes from the visual system. Objects appear three-dimensional because the human visual system can compute scene depth from the images seen by the two eyes and recover the scene's three-dimensional structure. In traditional 2D video, however, depth information is not recorded, so the visual system cannot recover three-dimensional information from the flat images, making it difficult for viewers to experience depth and presence. 3D video technology preserves scene depth so that viewers perceive the depth of the scene along with its texture, producing a stereoscopic, immersive experience. The most important prerequisite for 3D video is depth acquisition; research institutions in many countries, including MIT, Stanford University, HHI in Germany, and Nagoya University, have proposed various depth acquisition methods and built working systems.

Because traditional binocular vision cannot obtain depth in weakly textured regions, the most widely used depth acquisition method is active vision based on structured light. A basic structured-light measurement system consists of one camera and one projector: the projector casts an encoded speckle pattern onto the scene surface, the camera captures the scene image with the speckle, and the depth of the scene is obtained by computing the offset between the projected speckle and the captured speckle. Microsoft's Kinect is a representative structured-light depth sensor: it uses light-coding technology with an infrared projector and an infrared CMOS sensor to obtain depth values directly. As a mature commercial product, however, the Kinect offers little flexibility and has limited depth accuracy and detection range. In practice, structured-light depth measurement systems are therefore commonly built from an ordinary camera and projector.

Because the camera and projector have different parameters, and because they can hardly be aligned as strictly as the camera and infrared transceiver inside the Kinect, the projection region captured by the camera differs in shape from the designed speckle pattern. To enable matching along horizontal epipolar lines, the projected speckle must be corrected so that it has the same shape and pixel size as the projection region captured by the camera. In traditional binocular stereo vision, the camera and projector must be calibrated before image rectification to obtain their intrinsic and extrinsic parameters. Calibration is comparatively complicated: it requires capturing calibration-board images from many orientations and performing extensive computation, and the calibration accuracy strongly affects the rectification result. This complicates depth acquisition, degrades the stability of the results, and is ill-suited to convenient, fast use in practice.

Summary of the Invention

To address the complexity of building a structured-light measurement system, the present invention provides a projection speckle correction method that constructs a virtual camera. Because the virtual camera and the real camera have the same image pixel size, their images can be matched directly to compute the depth map; a horizontal offset exists between the two images, and this offset can be computed from the proportional relationship between image coordinates and world coordinates. Transforming the projected speckle into the virtual camera yields the corrected speckle image.

The projection speckle correction method based on a virtual camera provided by the invention comprises the following steps:

Step 1: design a random speckle pattern with a checkerboard border.

Step 2: the projector casts the speckle into the scene, the camera captures the scene image with the speckle, and the positions of the checkerboard corners on the speckle are determined.

Step 3: construct the imaging position of the virtual camera at the intersection of the camera image plane and the screen normal passing through the projector focal point. The virtual camera acquires the corrected speckle image at this position; this image has the same shape and pixel size as the speckle image captured by the real camera. Compute the image translation in the virtual camera and determine the position of the corrected image.

Step 4: compute the homography matrix between the projected speckle before correction and the corrected speckle image, and determine the gray values of the corrected image.

Step 5: combine the corrected-image position obtained in Step 3 with the corrected-image gray values obtained in Step 4 to generate the final corrected image.

The advantages and positive effects of the invention are:

(1) The calibration and rectification processes of the camera and projector are combined and simplified: image correction is performed directly from the determined corresponding points, avoiding the computation of the intrinsic and extrinsic parameters of the camera and projector required by calibration.

(2) It is applicable to various practical situations, such as distortion of the projected image, different fields of view of the camera and projector, and non-parallel camera and projector image planes.

(3) It is highly flexible: the resolution can easily be increased to improve depth accuracy, and the projected light intensity can be increased to extend the depth detection range, making it suitable for industrial and scientific applications.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of image correction and of matching to compute depth values using the virtual-camera-based projection speckle correction method of the invention;

Fig. 2 is a schematic diagram of the projected speckle image designed in the invention;

Fig. 3 is a schematic diagram of the position of the virtual camera constructed in the invention;

Fig. 4 is a schematic diagram of the computation of the virtual-camera image offset in the invention.

Detailed Description

The invention is described in further detail below with reference to the accompanying drawings and an embodiment.

As shown in Fig. 1, the invention provides a projection speckle correction method based on a virtual camera. A virtual camera is constructed at the intersection of the camera image plane and the screen normal passing through the projector focal point to receive the corrected speckle image. The translation between the corrected image and the camera image is then computed and combined with the speckle position information provided by the camera image to determine the position of the projected speckle region in the corrected image. The homography matrix between the projected speckle and the corrected image is computed from corresponding points, and the projected speckle image is transformed into the virtual camera to obtain the corrected image. Matching the camera image against the corrected image yields the scene depth map. The implementation steps are as follows.

Step 1: design a random speckle pattern with a checkerboard border.

As shown in Fig. 2, the designed pattern combines the advantages of random speckle and checkerboard speckle and both merges and simplifies the calibration and rectification processes. The checkerboard region is a simplified version of the calibration board used in camera calibration, retained only along the border of the speckle. Because the projected speckle fully covers the object during depth measurement, the border region is not affected by the object and the corresponding points can be extracted correctly. In addition, the image captured by the camera is most strongly distorted near the image border; placing the checkerboard speckle there makes the distortion features easier to extract, and thus the distortion easier to remove.

Step 2: the projector casts the speckle into the scene, the camera captures the scene image with the speckle, and the positions of the checkerboard corners on the speckle are determined.

Checkerboard corner detection in the image captured by the camera can be performed with the corresponding OpenCV function; the corner positions in the camera image are saved.
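As a rough illustration of what this step extracts (the patent relies on OpenCV's checkerboard corner functions, e.g. cv2.findChessboardCorners with cv2.cornerSubPix refinement; the numpy stand-in below and all its names are ours), black/white crossing points on an ideal binary pattern can be located by testing each 2x2 neighbourhood for a diagonal pattern:

```python
import numpy as np

def crossing_corners(img):
    """Find black/white crossing points: positions whose 2x2 neighbourhood
    is diagonal (white-black over black-white, or the reverse). A toy
    stand-in for OpenCV's sub-pixel checkerboard corner detector."""
    a = img[:-1, :-1].astype(int)   # top-left of each 2x2 block
    b = img[:-1, 1:].astype(int)    # top-right
    c = img[1:, :-1].astype(int)    # bottom-left
    d = img[1:, 1:].astype(int)     # bottom-right
    diag = (a == d) & (b == c) & (a != b)
    ys, xs = np.nonzero(diag)
    return list(zip(ys.tolist(), xs.tolist()))

# synthetic 4x4 checkerboard with 2x2 cells; its single interior crossing
# sits at block position (1, 1)
tile = np.kron(np.array([[255, 0], [0, 255]], np.uint8),
               np.ones((2, 2), np.uint8))
print(crossing_corners(tile))
```

On a real photograph this naive test would be defeated by blur and noise, which is why sub-pixel corner detectors are used in practice.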

The speckle projected by the projector is the pattern designed in Step 1, so its checkerboard corner positions are known directly. Saving the corner positions in the projected speckle yields the corresponding point pairs between the speckle before and after projection.

Step 3: construct the imaging position of the virtual camera at the intersection of the camera image plane and the screen normal passing through the projector focal point, compute the image translation in the virtual camera, and determine the position of the corrected image.

The camera image plane is the plane in which the image captured by the real camera lies. The virtual camera is an imagined imaging position: an image acquired there has the same shape and pixel size as the speckle region captured by the real camera, so the two can be matched directly to obtain depth; the virtual camera's image is therefore the corrected speckle image. The center of the virtual camera's image plane is located at the intersection of the camera image plane and the screen normal through the projector focal point, mainly for the following two reasons:

(1) For the corrected image to have the same shape and pixel size as the camera image, the virtual camera must have the same field of view and viewing direction as the real camera. Only a position coplanar with the camera image plane guarantees that the two images can be matched.

(2) For the corrected image to be matched against the camera image to obtain scene depth, the position of the virtual camera must reflect the parallax correctly. Since parallax depends on the baseline distance between camera and projector, the virtual camera is placed on the camera image plane along the screen normal through the projector focal point: its vertical position equals the projector's, and its horizontal baseline distance to the camera is unchanged.

In summary, Fig. 3 depicts the practical situation in which the camera and projector have different fields of view and non-parallel image planes. V is the focal point of the constructed virtual camera, C the focal point of the real camera, and P the focal point of the projector. To better illustrate the captured image positions, a projection screen is placed in front of the camera; the two oblique lines from P mark the projector's field of view, and the triangle they enclose with the projection screen is the projection region, which occupies only part of the real camera's image. As shown in Fig. 1, there is a horizontal offset between the real camera image and the corrected image; this offset reflects the distance from the camera focal point to the projection screen.

Because a two-dimensional image transformation cannot preserve distance information, the distance from the camera to the projection screen must be measured the first time the method is used. If only relative rather than accurate distance values are required, this distance can be set to an estimate.

The horizontal offset between the real camera image and the corrected image is computed from the proportional relationship between the world coordinate system and the image coordinate system. As shown in Fig. 4, Z denotes the distance from the camera focal point to the projection screen, f the focal length of the camera, and dw the horizontal baseline distance between the camera and the projector, i.e. the translation in the world coordinate system, which can be measured on the hardware system; di denotes the horizontal offset in the image coordinate system, computed as in formula (1):

di = f * dw / Z (1)

The di computed by formula (1) is expressed in world-coordinate units of length, while the image coordinate system is measured in pixels. The camera pixel size, found in the camera's reference manual and denoted s, converts between the two: the offset in the image plane is di / s pixels.

Combining the shape of the projection region determined in the real camera image with the number of pixels of translation in the image plane, the position of the corrected image is uniquely determined.

Step 4: compute, from the feature points, the homography matrix between the projected speckle before correction and the corrected speckle image, and determine the gray values of the corrected image.

Using the results of the two preceding steps (Step 2 provided the corresponding points before and after correction, and Step 3 the position of the corrected image), the correspondence between the uncorrected projected speckle and every point of the corrected image, i.e. the homography matrix, must be computed. The homography matrix between two-dimensional images is a 3x3 matrix whose bottom-right element is 1, as in expression (2):

H = [a b c; d e f; g h 1] (2)

Let a point in the projected speckle before correction be pi = (xi, yi) and the corresponding corrected point be pi' = (xi', yi'); the two satisfy the following relation:

[xri yri zri]^T = H * [xi yi 1]^T (3)

In formula (3), xri, yri and zri are the coordinates of the corrected point in homogeneous (three-dimensional) form. Formula (4) restores them to two-dimensional image coordinates:

xi' = xri / zri, yi' = yri / zri (4)

Let v be the vector of the unknowns of H concatenated row by row, i.e. v = [a b c d e f g h]^T. Formulas (3) and (4) can then be written as a matrix equation; for a single corresponding point:

[xi yi 1 0 0 0 -xi*xi' -yi*xi' ; 0 0 0 xi yi 1 -xi*yi' -yi*yi'] * v = [xi' ; yi'] (5)

To determine the 8 unknowns of H, at least 4 corresponding points are needed, each contributing two equations (one for the abscissa and one for the ordinate). In practice, to suppress the influence of noise on corresponding-point extraction by least squares, more than 4 corresponding points are usually used. With n corresponding points determined, the matrix equation in v is constructed as follows:

A * v = b (6)

A in formula (6) is formula (5) extended to multiple rows: the two rows contributed by each of the n corresponding points are stacked into a 2n x 8 matrix, as in formula (7).

b in formula (6) is the vector of the feature-point coordinates in the corrected image, b = [x1' y1' … xn' yn']^T.

Since formula (6) is an overdetermined system, it is solved by least squares:

v = (A^T A)^-1 A^T b (8)

Since the entries of v are the unknowns of H, the homography matrix is determined, and every point of the uncorrected projected speckle can be transformed into the corrected image.
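The construction in formulas (5) to (8) can be sketched as a minimal numpy least-squares fit with the bottom-right entry of H fixed to 1 (function names are ours; OpenCV's cv2.findHomography performs an equivalent fit):

```python
import numpy as np

def fit_homography(src, dst):
    """Solve A*v = b (formula (6)) for v = [a b c d e f g h]^T by least
    squares (formula (8)), then reshape v into H with H[2,2] = 1."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        # two rows per correspondence, one for x' and one for y' (formula (5))
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    v, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(v, 1.0).reshape(3, 3)

def apply_h(H, x, y):
    """Formulas (3) and (4): transform a point and normalize."""
    xr, yr, zr = H @ [x, y, 1.0]
    return xr / zr, yr / zr
```

With four non-degenerate correspondences the solution is exact; with more, the pseudo-inverse of formula (8) averages out noise in the extracted corners.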

Step 5: combine the corrected-image position obtained in Step 3 with the corrected-image gray values obtained in Step 4 to generate the final corrected image.

As shown in the lower-right corner of Fig. 1, the generated corrected image is matched row by row directly against the image captured by the real camera to obtain the depth map of the scene.

Embodiment

In this embodiment, one camera and one projector are used. The projected speckle is corrected with the projection speckle correction method provided by the invention and matched against the speckle captured by the camera to obtain a high-quality scene depth map. The specific steps are as follows:

Step 1: design a random speckle pattern with a checkerboard border.

The size of the random speckle is designed according to the projector's input image size, set in this system to 912*1140 pixels. As shown in Fig. 2, a random 0-1 matrix is generated with MATLAB or C++; positions holding 1 are white and positions holding 0 are black. The checkerboard border region is 20 pixels wide.
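A sketch of this pattern generation in numpy rather than MATLAB or C++ (the 20-pixel checkerboard cell size and the 1140-row by 912-column orientation are our assumptions; the patent only fixes the 912*1140 resolution and the 20-pixel border width):

```python
import numpy as np

def make_speckle(h=1140, w=912, border=20, cell=20, seed=0):
    """Random 0/255 speckle with a checkerboard frame along the border."""
    rng = np.random.default_rng(seed)
    img = (rng.integers(0, 2, size=(h, w)) * 255).astype(np.uint8)
    # checkerboard of `cell`-sized black/white squares over the whole image
    yy, xx = np.mgrid[0:h, 0:w]
    checker = (((yy // cell) + (xx // cell)) % 2 * 255).astype(np.uint8)
    # keep the checkerboard only in the border frame
    mask = np.zeros((h, w), bool)
    mask[:border, :] = mask[-border:, :] = True
    mask[:, :border] = mask[:, -border:] = True
    img[mask] = checker[mask]
    return img

pattern = make_speckle()
```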

Step 2: the projector casts the speckle into the scene, the camera captures the scene image with the speckle, and the positions of the checkerboard corners on the speckle are determined.

The four corners of the checkerboard and the black-white crossing points at the midpoints of its four sides are selected as feature points. In clockwise order (top-left, top-middle, top-right, middle-right, bottom-right, bottom-middle, bottom-left, middle-left), their coordinates in this embodiment are: (10,10), (10,449), (10,906), (549,906), (1134,906), (1134,449), (1134,10), (549,10).

The camera resolution is 1280*960, and the coordinates of these 8 points in the captured image are: (25,182), (32,619), (34,1069), (297,1074), (590,1081), (588,617), (586,166), (289,176).

Step 3: construct the virtual camera at the intersection of the camera image plane and the screen normal passing through the projector focal point, compute the image translation in the virtual camera, and determine the position of the corrected image.

A projection screen is placed 120 cm in front of the camera; the projected image captured by the camera has the same shape as the corrected image, differing only in horizontal position. The wide-angle camera's focal length is 5 mm, the measured horizontal baseline distance between camera and projector is 5.7 cm, and the camera pixel size is 3.75 um. Formula (1) gives a translation of 0.2375 mm, i.e. a translation of 63 pixels in the image plane. Since the projector is to the left of the camera, the corrected image is shifted 63 pixels to the right; the position of the corrected image is thus determined.
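The arithmetic of this paragraph can be checked numerically (formula (1) taken as di = f*dw/Z, with all quantities converted to millimetres by us):

```python
# Embodiment values: f = 5 mm, baseline dw = 5.7 cm, screen distance
# Z = 120 cm, camera pixel size s = 3.75 um (all converted to mm).
f, dw, Z, s = 5.0, 57.0, 1200.0, 3.75e-3
di = f * dw / Z        # formula (1): offset on the image plane, in mm
shift_px = di / s      # mm -> pixels using the pixel pitch
print(di, round(shift_px))  # 0.2375 63
```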

Step 4: compute the homography matrix between the projected speckle before correction and the corrected speckle image, and determine the gray values of the corrected image.

Substituting the 8 pairs of corresponding points obtained in Step 2 into formula (6) and solving with formula (8) gives the unknown vector:

v = [0.4822 0.0111 20.5249 -0.0190 1.0005 172.7887 0 0]^T

The homography matrix then follows:

H = [0.4822 0.0111 20.5249; -0.0190 1.0005 172.7887; 0 0 1] (9)

Substituting (9) into formulas (3) and (4) yields the corrected speckle image.

Since transforming the points of the original image into the corrected image does not necessarily cover the entire corrected image region, holes may remain in the corrected image, requiring an additional inpainting step. In practice, the inverse homography matrix is used instead: for each pixel of the corrected image, the corresponding pixel value in the original (uncorrected) image is computed and used to fill it, as follows:

[xri yri zri]T=H-1*[xi′ yi′ zi′]T (10)[xri yri zri ]T =H-1 *[xi ′ yi ′ zi ′]T (10)

The horizontal and vertical coordinates obtained from formula (10) after normalization (formula (11), analogous to formula (4)) generally do not fall at integer positions. Since the pixel gray level produced by bilinear interpolation is close to the blur between two pixels in a real camera, bilinear interpolation is used to obtain sub-pixel gray values, giving the final corrected image.
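A compact numpy sketch of this inverse mapping with bilinear interpolation (names are ours; in OpenCV, cv2.warpPerspective with the WARP_INVERSE_MAP flag performs the same resampling):

```python
import numpy as np

def warp_inverse(src, H, out_shape):
    """Fill each output pixel by mapping it back through H^-1
    (formula (10)) and bilinearly interpolating `src`, which avoids the
    holes left by forward mapping. Out-of-range samples clamp to the edge."""
    h, w = out_shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    x, y, z = Hinv @ pts
    x, y = x / z, y / z                      # normalize homogeneous coords
    x0 = np.clip(np.floor(x).astype(int), 0, src.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, src.shape[0] - 2)
    fx = np.clip(x - x0, 0.0, 1.0)
    fy = np.clip(y - y0, 0.0, 1.0)
    v = (src[y0, x0] * (1 - fx) * (1 - fy)
         + src[y0, x0 + 1] * fx * (1 - fy)
         + src[y0 + 1, x0] * (1 - fx) * fy
         + src[y0 + 1, x0 + 1] * fx * fy)
    return v.reshape(h, w)
```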

Step 5: combine the corrected-image position obtained in Step 3 with the corrected-image gray values obtained in Step 4 to generate the final corrected image.

Matching the corrected speckle image row by row against the image captured by the real camera yields the depth values of the scene.
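The patent does not specify the row matcher itself; as a toy stand-in under that caveat, a per-row sum-of-absolute-differences (SAD) disparity search over one corrected/camera row pair might look like:

```python
import numpy as np

def sad_disparity_row(ref, tgt, win=3, max_d=8):
    """For each pixel of `ref`, find the horizontal shift d minimizing the
    SAD cost against `tgt` over a (2*win+1)-pixel window. A toy matcher:
    the patent only states that matching proceeds row by row."""
    n = len(ref)
    disp = np.zeros(n, dtype=int)
    for i in range(win, n - win):
        costs = [np.abs(ref[i - win:i + win + 1]
                        - tgt[i - d - win:i - d + win + 1]).sum()
                 for d in range(min(max_d, i - win) + 1)]
        disp[i] = int(np.argmin(costs))
    return disp

row = np.arange(64.0)   # stand-in corrected-speckle row (distinct values)
cam = np.roll(row, 5)   # camera row simulated with a disparity of 5 px
disp = sad_disparity_row(cam, row)
```

Real systems use windowed 2-D block matching with consistency checks; this 1-D version only illustrates that, after correction, the search can be confined to a horizontal scanline.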

Claims (3)

CN201611093813.5A, 2016-12-01: A Projection Speckle Correction Method Based on Virtual Camera. Active. Granted as CN106651794B (en).

Priority Applications (1)
CN201611093813.5A (en), priority and filing date 2016-12-01: A Projection Speckle Correction Method Based on Virtual Camera

Publications (2)
CN106651794A (en), published 2017-05-10
CN106651794B (en), granted 2019-12-03

Family ID: 58814175

Country Status (1)
CN: CN106651794B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN108198219B (en)* | 2017-11-21 | 2022-05-13 | Hefei University of Technology | Error compensation method for camera calibration parameters for photogrammetry
CN108399596B (en) | 2018-02-07 | 2020-12-18 | Shenzhen Orbbec Co., Ltd. | Depth image engine and depth image calculation method
CN108847109A (en)* | 2018-06-26 | 2018-11-20 | Tianjin Huiyigu Technology Co., Ltd. | Human-body acupoint selection practice and examination method and system based on three-dimensional modeling
CN110689581B (en)* | 2018-07-06 | 2022-05-13 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Structured light module calibration method, electronic device, and computer-readable storage medium
CN110770794A (en)* | 2018-08-22 | 2020-02-07 | SZ DJI Technology Co., Ltd. | Image depth estimation method and device, readable storage medium and electronic equipment
US11176694B2 (en)* | 2018-10-19 | 2021-11-16 | Samsung Electronics Co., Ltd. | Method and apparatus for active depth sensing and calibration method thereof
TWI680436B (en) | 2018-12-07 | 2019-12-21 | Industrial Technology Research Institute | Depth camera calibration device and method thereof
CN110751656B (en)* | 2019-09-24 | 2022-11-22 | Zhejiang Dahua Technology Co., Ltd. | Automatic crack parameter extraction method and device and storage device
CN111061421B (en)* | 2019-12-19 | 2021-07-20 | Beijing Lanjing Technology Co., Ltd. | Picture projection method and device and computer storage medium
CN111243002A (en)* | 2020-01-15 | 2020-06-05 | National University of Defense Technology | Monocular laser speckle projection system calibration and depth estimation method applied to high-precision three-dimensional measurement
CN111540004B (en)* | 2020-04-16 | 2023-07-14 | Beijing Tsingmicro Intelligent Technology Co., Ltd. | Single-camera epipolar correction method and device
CN111540022B (en)* | 2020-05-14 | 2024-04-19 | Shenzhen Aiwei Intelligent Co., Ltd. | Image unification method based on virtual camera
CN111710000B (en)* | 2020-05-25 | 2023-09-05 | Hefei Dilusense Technology Co., Ltd. | Camera row deviation self-checking method and system
CN111768450B (en)* | 2020-06-10 | 2023-10-13 | Hefei Dilusense Technology Co., Ltd. | Automatic detection method and device for structured-light camera row deviation based on speckle pattern
CN112053446B (en)* | 2020-07-11 | 2024-02-02 | Nanjing Guotu Information Industry Co., Ltd. | Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS
CN112070844B (en)* | 2020-08-27 | 2024-07-19 | Hefei Dilusense Technology Co., Ltd. | Calibration method, device, equipment and medium of structured light system
CN111815693B (en)* | 2020-09-04 | 2021-01-12 | Beijing Tsingmicro Intelligent Technology Co., Ltd. | Depth image generation method and device
JP7686662B2 (en)* | 2020-10-13 | 2025-06-02 | Panasonic Holdings Corporation | Installation information acquisition method, correction method, program, and installation information acquisition system
CN115008089B (en)* | 2021-03-04 | 2024-05-03 | Shenzhen Xinyichang Kaijiu Automation Equipment Co., Ltd. | Coarse aluminum wire welding device, debugging method, and coarse aluminum wire welding equipment
CN113158924A (en)* | 2021-04-27 | 2021-07-23 | Shenzhen Funeng Software Co., Ltd. | Speckle image correction method, face recognition method, face correction device and face recognition equipment
CN113793387A (en)* | 2021-08-06 | 2021-12-14 | Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences | Calibration method, device and terminal of monocular speckle structured light system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9053573B2 (en)* | 2010-04-29 | 2015-06-09 | Personify, Inc. | Systems and methods for generating a virtual camera viewpoint for an image
JP2013012045A (en)* | 2011-06-29 | 2013-01-17 | Nippon Telegraph & Telephone Corp. (NTT) | Image processing method, image processing system, and computer program
JP6016061B2 (en)* | 2012-04-20 | 2016-10-26 | NLT Technologies, Ltd. | Image generation apparatus, image display apparatus, image generation method, and image generation program
CN102710951B (en)* | 2012-05-09 | 2014-06-25 | Tianjin University | Multi-viewpoint computing and imaging method based on speckle structured-light depth camera
US20140104394A1 (en)* | 2012-10-15 | 2014-04-17 | Intel Corporation | System and method for combining data from multiple depth cameras
CN103268608B (en)* | 2013-05-17 | 2015-12-02 | Tsinghua University | Depth estimation method and device based on near-infrared laser speckle
CN103561257B (en)* | 2013-11-01 | 2015-05-13 | Beihang University | Interference-free light-encoded depth extraction method based on depth reference planes
CN103607584B (en)* | 2013-11-27 | 2015-05-27 | Zhejiang University | Real-time registration method for depth maps shot by Kinect and video shot by color camera
TW201605247A (en)* | 2014-07-30 | 2016-02-01 | National Taiwan University | Image processing system and method
CN104268871A (en)* | 2014-09-23 | 2015-01-07 | Tsinghua University | Method and device for depth estimation based on near-infrared laser speckles
CN104899882A (en)* | 2015-05-28 | 2015-09-09 | Beijing University of Technology | Depth acquisition method for complex scene
CN104902246B (en)* | 2015-06-17 | 2020-07-28 | Zhejiang Dahua Technology Co., Ltd. | Video monitoring method and device

Also Published As

Publication number | Publication date
CN106651794A (en) | 2017-05-10

Similar Documents

Publication | Title
CN106651794B (en) | A Projection Speckle Correction Method Based on Virtual Camera
JP6722323B2 (en) | System and method for imaging device modeling and calibration
CN101630406B (en) | Camera calibration method and camera calibration device
CN106254854B (en) | Preparation method, apparatus and system of three-dimensional image
US8897502B2 (en) | Calibration for stereoscopic capture system
CN109816731B (en) | Method for accurately registering RGB and depth information
CN109147027B (en) | Method, system and device for three-dimensional reconstruction of monocular image based on reference plane
CN103607584B (en) | Real-time registration method for depth maps shot by Kinect and video shot by color camera
CN102072706B (en) | Multi-camera positioning and tracking method and system
CN106170086B (en) | Method, device and system for drawing three-dimensional image
CN112132906A (en) | Method and system for calibrating external parameters between a depth camera and a visible-light camera
US10560683B2 (en) | System, method and software for producing three-dimensional images that appear to project forward of or vertically above a display medium using a virtual 3D model made from the simultaneous localization and depth-mapping of the physical features of real objects
CN111009030A (en) | Multi-view high-resolution texture image and binocular three-dimensional point cloud mapping method
TWI581051B (en) | Three-dimensional panoramic image generation method
TWI820246B (en) | Apparatus with disparity estimation, method and computer program product of estimating disparity from a wide-angle image
CN110060304B (en) | Method for acquiring three-dimensional information of organism
CN113821107B (en) | Indoor and outdoor naked-eye 3D system with real-time and free viewpoint
TWI731430B (en) | Information display method and information display system
CN108038887A (en) | Depth profile estimation method based on binocular RGB-D camera
JP2008249431A (en) | Three-dimensional image correction method and apparatus
Deng et al. | Registration of multiple RGBD cameras via local rigid transformations
CN115760560A (en) | Depth information acquisition method and device, equipment and storage medium
WO2013044642A1 (en) | Brightness function obtaining method and related apparatus
CN109990756B (en) | Binocular ranging method and system
JP2019032660A (en) | Imaging system and imaging method

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
