Technical Field
The present invention relates to the fields of image processing and three-dimensional data acquisition, and in particular to a system and method for generating a three-dimensional face model.
Background Art
In recent years, with the development of hardware technology and the improvement of computer processing power, the demand for three-dimensional data acquisition has been growing steadily. In particular, using low-cost equipment to acquire high-precision three-dimensional face model data conveniently and quickly has become a popular direction for research and application.
Existing methods for acquiring three-dimensional face model data mainly rely on active range sensors, which obtain distance information about the scene by emitting energy such as electromagnetic waves into the environment and analyzing the reflected signals. Laser scanners are the mainstream devices of this kind, but they are bulky and very expensive (on the order of millions of RMB), which limits their widespread application. Structured-light methods can also produce three-dimensional reconstruction results, but they require additional projection equipment, so the overall device is not compact. Moreover, such devices emit high-brightness visible light during scanning, which is uncomfortable to look at directly; users are therefore generally asked to close their eyes during acquisition, degrading the user experience. Since the beginning of the 21st century, cameras that acquire the three-dimensional structure of a scene by measuring the time of flight (TOF) of light have gradually appeared, but the drawback of such devices is that the spatial resolution of the acquired data is very low, typically only about 20,000 pixels per 3D frame, which is far from sufficient for acquiring high-precision three-dimensional face models.
Summary of the Invention
In view of this, it is necessary to provide a three-dimensional face model generation system and method that is low-cost, high-precision, and offers a good user experience.
A three-dimensional face model generation system includes a three-dimensional data acquisition unit and a three-dimensional model generation unit connected to the three-dimensional data acquisition unit. The three-dimensional data acquisition unit includes a first digital image acquisition device, a second digital image acquisition device, and an infrared structured-light projection device. The first digital image acquisition device and the second digital image acquisition device are used to acquire digital images of a face from two different angles. The infrared structured-light projection device is used to project infrared structured light onto the face and obtain an image containing depth data of the face. The three-dimensional model generation unit is used to reconstruct a three-dimensional model of the face from the digital images and the image containing the face depth data.
Preferably, the first digital image acquisition device, the second digital image acquisition device, and the infrared structured-light projection device are arranged along a horizontal straight line, with the infrared structured-light projection device placed between the first digital image acquisition device and the second digital image acquisition device.
Preferably, the first digital image acquisition device and the second digital image acquisition device are digital cameras, and the two devices have the same focal length.
Preferably, the three-dimensional model generation unit includes a system calibration module, a stereo pair rectification module, a geometric and super-resolution transformation module, a texture segmentation module, a seed pixel extraction module, a disparity map generation module, and a model building module;
wherein the system calibration module is used to establish a system coordinate frame and determine the positions of the first digital image acquisition device, the second digital image acquisition device, and the infrared structured-light projection device within that frame;
wherein the stereo pair rectification module is used to rectify the images acquired by the first digital image acquisition device and the second digital image acquisition device so as to eliminate vertical parallax;
wherein the geometric and super-resolution transformation module is used to apply geometric transformation and super-resolution transformation to the image containing face depth data obtained by the infrared structured-light projection device;
wherein the texture segmentation module is used to perform texture segmentation on the image acquired by the first digital image acquisition device, yielding a binary mask image;
wherein the seed pixel extraction module is used to extract seed pixels;
wherein the disparity map generation module is used to obtain, from the disparity map produced by the geometric and super-resolution transformation module, the binary mask image produced by the texture segmentation module, and the seed pixels extracted by the seed pixel extraction module, a disparity map based on seed pixel expansion whose resolution is the same as that of the first digital image acquisition device;
wherein the model building module is used to build a three-dimensional model of the face from the disparity map obtained by the disparity map generation module.
Preferably, the system calibration module is used to obtain the intrinsic parameter matrices of the first digital image acquisition device, the second digital image acquisition device, and the infrared structured-light projection device.
Preferably, the system calibration module selects the camera coordinate system of the first digital image acquisition device as the reference coordinate system, and determines the positions of the second digital image acquisition device and the infrared structured-light projection device relative to the reference coordinate system from the relative positional relationships of the first digital image acquisition device, the second digital image acquisition device, and the infrared structured-light projection device.
Another object of the present invention is to provide a method for generating a three-dimensional face model, comprising the following steps:
providing a three-dimensional data acquisition unit, the three-dimensional data acquisition unit including a first digital image acquisition device, a second digital image acquisition device, and an infrared structured-light projection device;
establishing a system coordinate frame to determine the positions of the first digital image acquisition device, the second digital image acquisition device, and the infrared structured-light projection device within that frame;
acquiring digital images of the face from two different angles and an infrared structured-light image containing face depth data;
reconstructing a three-dimensional model of the face from the digital images of the face and the infrared structured-light image containing the face depth data.
Preferably, the step of reconstructing the three-dimensional face model includes:
rectifying the digital images of the face taken from the two different angles as a stereo pair, so as to eliminate vertical parallax;
applying geometric transformation and super-resolution transformation to the infrared structured-light image;
performing texture segmentation on the digital image to obtain a binary mask image;
extracting seed pixels;
obtaining, from the disparity map produced by the geometric and super-resolution transformation of the infrared structured-light image, the binary mask image, and the seed pixels, a disparity map based on seed pixel expansion whose resolution is the same as that of the first digital image acquisition device;
building a three-dimensional model of the face from the disparity map based on seed pixel expansion whose resolution is the same as that of the first digital image acquisition device.
Preferably, the position of the second digital image acquisition device relative to the system coordinate frame is expressed by a rotation matrix Rs and a translation vector Ts, and the position of the infrared structured-light projection device relative to the system coordinate frame is expressed by a rotation matrix Ra and a translation vector Ta.
Preferably, for each pixel i of the infrared structured-light image, its corresponding three-dimensional coordinate Pi = [xi, yi, zi]^T is computed, and the projection of pixel i onto the first digital image acquisition device is then calculated with the formula:

pi = Proj(Rr·(Ra·Pi + Ta) + Tr).

Let: P'i = [x'i, y'i, z'i]^T = Rr·(Ra·Pi + Ta) + Tr.

The disparity value at pi is computed as:

d(pi) = b·f/z'i

where Proj() is the projection transformation; the rotation matrix Rr and translation vector Tr express the position of the first digital image acquisition device relative to the system coordinate frame; and b and f are, respectively, the baseline distance between the first and second digital image acquisition devices and the focal length of the first and second digital image acquisition devices. From the disparity values of the individual pixels, a sparse disparity map with the same resolution as the color image obtained by the first digital image acquisition device is obtained.
Preferably, a dense disparity map is obtained from the sparse disparity map by linear interpolation.
Preferably, the color image acquired by the first digital image acquisition device is converted into a grayscale image, and the variance Vari of the gray values is computed at each pixel i; the binary mask image is then obtained as

mask(i) = 1 if Vari > t, and mask(i) = 0 otherwise,

where t is a threshold.
Preferably, pixels whose mask value is 1 correspond to texture-rich regions, and the remaining pixels correspond to weakly textured regions.
Preferably, all pixels of the infrared structured-light image are extracted as seed pixels.
Preferably, a disparity map based on seed pixel expansion, with the same resolution as that of the first digital image acquisition device, is obtained from the disparity map, the binary mask image, and the seed pixels.
Preferably, a three-dimensional point cloud is obtained from the disparity map based on seed pixel expansion. The computation is as follows:

Z(u,v) = B·f / d(u,v),
X(u,v) = (u − u0)·B / d(u,v),
Y(u,v) = (v − v0)·B / d(u,v),

where X(u,v), Y(u,v) and Z(u,v) are the X, Y and Z coordinates of the three-dimensional point corresponding to the pixel at (u,v); B is the distance between the first digital image acquisition device and the second digital image acquisition device; (u0, v0) are the coordinates of the optical center of the first digital image acquisition device; and d(u,v) is the disparity value at (u,v) in the optimized disparity map.
Compared with the prior art, the three-dimensional face model generation system and the three-dimensional face model generation method of the present invention use an infrared structured-light projection device together with digital image acquisition devices to collect three-dimensional data of the face, and generate a three-dimensional face model from that data. Because no visible light is emitted during acquisition, the user experience is good, and because no special lighting setup is required, the system is highly stable. In addition, the overall cost of the three-dimensional face model generation system is low and a high spatial resolution can be guaranteed.
Description of the Drawings
FIG. 1 is a schematic diagram of a three-dimensional face model generation system according to an embodiment of the present invention.
FIG. 2 is a flowchart of a three-dimensional face model generation method according to an embodiment of the present invention.
Detailed Description
The present invention is further described below with reference to the accompanying drawings.
Referring to FIG. 1, a three-dimensional face model generation system 100 according to an embodiment of the present invention includes a three-dimensional data acquisition unit 10 and a three-dimensional model generation unit 20 connected to the three-dimensional data acquisition unit 10.
The three-dimensional data acquisition unit 10 includes a first digital image acquisition device 11, a second digital image acquisition device 12, and an infrared structured-light projection device 13. The first digital image acquisition device 11, the second digital image acquisition device 12, and the infrared structured-light projection device 13 are arranged along a horizontal straight line, with the infrared structured-light projection device 13 placed between the first digital image acquisition device 11 and the second digital image acquisition device 12. The first digital image acquisition device 11 and the second digital image acquisition device 12 are used to acquire digital images of the face; in this embodiment, both are digital cameras and have the same focal length. The infrared structured-light projection device 13 is used to project infrared structured light onto the face and obtain an image containing depth data of the face.
In this embodiment, the three-dimensional data acquisition unit 10 further includes a bracket 14, which supports the first digital image acquisition device 11, the second digital image acquisition device 12, and the infrared structured-light projection device 13.
The three-dimensional model generation unit 20 is used to generate a three-dimensional face model from the data acquired by the three-dimensional data acquisition unit 10. The three-dimensional model generation unit 20 includes a system calibration module 21, a stereo pair rectification module 22, a geometric and super-resolution transformation module 23, a texture segmentation module 24, a seed pixel extraction module 25, a disparity map generation module 26, and a model building module 27.
The system calibration module 21 is used to establish a system coordinate frame and determine the positions of the first digital image acquisition device 11, the second digital image acquisition device 12, and the infrared structured-light projection device 13 within that frame.
In this embodiment, the camera coordinate system of the first digital image acquisition device 11 is selected as the reference coordinate system, and the positions of the second digital image acquisition device 12 and the infrared structured-light projection device 13 relative to the reference coordinate system are determined from the relative positional relationships of the first digital image acquisition device 11, the second digital image acquisition device 12, and the infrared structured-light projection device 13. In this embodiment, the position of the second digital image acquisition device 12 relative to the reference coordinate system is expressed by a rotation matrix Rs and a translation vector Ts, and the position of the infrared structured-light projection device 13 relative to the reference coordinate system is expressed by a rotation matrix Ra and a translation vector Ta. The rotation matrices Rs and Ra and the translation vectors Ts and Ta can be obtained with a calibration technique based on a planar checkerboard template. In addition, the system calibration module 21 can also obtain the intrinsic parameter matrices of the first digital image acquisition device 11, the second digital image acquisition device 12, and the infrared structured-light projection device 13.
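As an illustrative sketch only, the checkerboard-based calibration step could be carried out with OpenCV; the function and variable names below (calibrate_pair, img_points_ref, the board geometry and image size) are assumptions made for the example, and cv2.stereoCalibrate stands in for whichever planar-template technique is actually used.

```python
import cv2
import numpy as np

def calibrate_pair(img_points_ref, img_points_other, board_size=(9, 6),
                   square=0.025, image_size=(1920, 1080)):
    """Estimate the intrinsics of two cameras and the rotation/translation (R, T)
    of the second camera relative to the first (reference) camera, from
    checkerboard corners detected in both views (lists of Nx2 float32 arrays)."""
    # 3D coordinates of the checkerboard corners in the board frame
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square
    obj_points = [objp] * len(img_points_ref)

    # Per-camera intrinsic calibration
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_points, img_points_ref, image_size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_points, img_points_other, image_size, None, None)

    # Stereo calibration: R, T map points from the reference camera frame into
    # the second camera frame (analogous to Rs/Ts or Ra/Ta in the text)
    _, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
        obj_points, img_points_ref, img_points_other, K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K1, d1, K2, d2, R, T
```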
The stereo pair rectification module 22 is used to rectify the images acquired by the first digital image acquisition device 11 and the second digital image acquisition device 12 so as to eliminate vertical parallax.
In this embodiment, the stereo pair rectification module 22 resamples the images acquired by the first digital image acquisition device 11 and the second digital image acquisition device 12 so that the epipolar lines become parallel. Specifically, the rectification can be implemented with the method proposed by Fusiello et al. (Fusiello A, Trucco E, Verri A. A Compact Algorithm for Rectification of Stereo Pairs. Machine Vision and Applications, 2000, 12:16-22.). In the stereo pair rectification module 22, Rr and Tr denote the rotation matrix and translation vector required to rectify the first digital image acquisition device 11.
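A minimal sketch of this rectification step, assuming OpenCV is available; cv2.stereoRectify is used here as a stand-in for the Fusiello algorithm cited above (both resample the pair so that epipolar lines become horizontal), and all variable names are illustrative.

```python
import cv2

def rectify_pair(img_left, img_right, K1, d1, K2, d2, R, T):
    """Resample a stereo pair so that epipolar lines are horizontal and vertical
    parallax is removed. K1/d1 and K2/d2 are the intrinsics and distortion of the
    two cameras; R, T is their relative pose from calibration."""
    size = (img_left.shape[1], img_left.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T,
                                                alpha=0)  # crop to valid pixels
    map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    rect_left = cv2.remap(img_left, map1x, map1y, cv2.INTER_LINEAR)
    rect_right = cv2.remap(img_right, map2x, map2y, cv2.INTER_LINEAR)
    return rect_left, rect_right, Q
```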
The geometric and super-resolution transformation module 23 is used to apply geometric transformation and super-resolution transformation to the image obtained by the infrared structured-light projection device 13.
In this embodiment, for each pixel i of the image obtained by the infrared structured-light projection device 13, the geometric and super-resolution transformation module 23 computes the corresponding three-dimensional coordinate Pi = [xi, yi, zi]^T, and then uses formula (1) to compute the projection of that point onto the first digital image acquisition device 11:

pi = Proj(Rr·(Ra·Pi + Ta) + Tr)    (1)

where Proj() is the projection transformation, whose parameters can be obtained from the intrinsic parameter matrices of the first digital image acquisition device 11, the second digital image acquisition device 12, and the infrared structured-light projection device 13 provided by the system calibration module 21. Let:

P'i = [x'i, y'i, z'i]^T = Rr·(Ra·Pi + Ta) + Tr.

The disparity value at pi can then be computed as:

d(pi) = b·f/z'i

where b and f are, respectively, the baseline distance between the first digital image acquisition device 11 and the second digital image acquisition device 12 and the focal length of the two devices; the specific values of b and f can be obtained from the calibration parameters determined by the system calibration module 21.
Using the above procedure, a sparse disparity map with the same resolution as the color image acquired by the first digital image acquisition device 11 can be obtained, and a dense disparity map dispo can then be obtained from the sparse disparity map by linear interpolation.
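The projection and disparity formulas above translate fairly directly into code. The following sketch (NumPy/SciPy, with assumed variable names such as points_ir for the structured-light points and K_rgb for the intrinsic matrix of the first camera) shows one way to build the sparse disparity map d(pi) = b·f/z'i and densify it by linear interpolation; it is an illustration under those assumptions, not the patented implementation.

```python
import numpy as np
from scipy.interpolate import griddata

def depth_to_disparity(points_ir, Ra, Ta, Rr, Tr, K_rgb, rgb_shape, b, f):
    """Project each 3D point P_i from the structured-light camera frame into the
    rectified first color camera and write the disparity b*f/z' at the projected
    pixel p_i. points_ir: (N, 3) array of P_i = [x_i, y_i, z_i].
    Returns the sparse disparity map and the seed pixel coordinates (row, col)."""
    P = Rr @ (Ra @ points_ir.T + Ta.reshape(3, 1)) + Tr.reshape(3, 1)  # 3xN, = P'_i
    uvw = K_rgb @ P                                   # pinhole projection Proj(.)
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    disp_sparse = np.zeros(rgb_shape[:2], np.float32)
    ok = (u >= 0) & (u < rgb_shape[1]) & (v >= 0) & (v < rgb_shape[0]) & (P[2] > 0)
    disp_sparse[v[ok], u[ok]] = b * f / P[2, ok]      # d(p_i) = b*f / z'_i
    seeds = np.stack([v[ok], u[ok]], axis=1)
    return disp_sparse, seeds

def densify(disp_sparse):
    """Linearly interpolate the sparse disparity map to obtain the dense map dispo."""
    vv, uu = np.nonzero(disp_sparse)
    grid_v, grid_u = np.mgrid[0:disp_sparse.shape[0], 0:disp_sparse.shape[1]]
    dispo = griddata((vv, uu), disp_sparse[vv, uu], (grid_v, grid_u),
                     method='linear', fill_value=0.0)
    return dispo.astype(np.float32)
```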
The texture segmentation module 24 is used to perform texture segmentation on the image acquired by the first digital image acquisition device 11, yielding a binary mask image mask.
In this embodiment, the texture segmentation module 24 first converts the color image acquired by the first digital image acquisition device 11 into a grayscale image, and then computes the variance Vari of the gray values at each pixel i, so that:

mask(i) = 1 if Vari > t, and mask(i) = 0 otherwise,

where t is a threshold, which can generally be taken as the mean of the gray-value variances over all pixels.
The mask is then filtered with morphological filtering to remove noise.
In the above, pixels whose mask value is 1 correspond to texture-rich regions, and the remaining pixels correspond to weakly textured regions.
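A short sketch of this texture segmentation, assuming the gray-value variance is computed over a local window (the window size win is an assumption for the example; the text does not specify it):

```python
import cv2
import numpy as np

def texture_mask(color_img, win=7):
    """Binary texture mask: 1 where the local gray-value variance exceeds the
    threshold t (here the mean variance over all pixels), 0 elsewhere, followed
    by morphological filtering to suppress noise."""
    gray = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    mean = cv2.blur(gray, (win, win))
    mean_sq = cv2.blur(gray * gray, (win, win))
    var = mean_sq - mean * mean          # local variance Var_i in a win x win window
    t = var.mean()                       # threshold: mean variance over all pixels
    mask = (var > t).astype(np.uint8)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove isolated speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask
```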
The seed pixel extraction module 25 is used to extract seed pixels. In this embodiment, the seed pixel extraction module 25 takes the points pi computed by the geometric and super-resolution transformation module 23 as the seed pixels.
The disparity map generation module 26 is used to obtain, from the disparity map dispo produced by the geometric and super-resolution transformation module 23, the binary mask image mask produced by the texture segmentation module 24, and the seed pixels extracted by the seed pixel extraction module 25, a disparity map dispf based on seed pixel expansion whose resolution is the same as that of the first digital image acquisition device 11.
In this embodiment, the stereo matching cost function can be any of various window-based functions, such as normalized cross-correlation (NCC). The seed pixels extracted by the seed pixel extraction module 25 are sorted in descending order of their matching cost to form a queue. The pixel p1 at the head of the queue is taken, with corresponding disparity d(p1). For each pixel pk in the four-neighborhood of p1, if mask(pk) = 1 and the disparity at pk has not yet been updated, the matching costs corresponding to the disparity values d(p1) − 1, d(p1), and d(p1) + 1 are computed at pk; the disparity with the largest matching cost is taken as the new disparity of pk, and pk is inserted into the seed pixel queue at the position determined by its matching cost. The head pixel of the seed queue is processed in this way repeatedly until the queue is empty.
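The seed-expansion procedure can be sketched with a priority queue, as below. The NCC window size, the heapq-based queue, and the array names are assumptions made for illustration: left and right are the rectified grayscale images, dispo the dense interpolated disparity map, mask the texture mask, and seeds the projected pixels pi.

```python
import heapq
import numpy as np

def ncc(left, right, y, x, d, w=5):
    """Normalized cross-correlation between a (2w+1)^2 window in the left image
    at (y, x) and the window in the right image shifted by disparity d."""
    h, wd = left.shape
    if not (w <= y < h - w and w <= x < wd - w and w <= x - d < wd - w):
        return -1.0
    a = left[y - w:y + w + 1, x - w:x + w + 1].astype(np.float32)
    b = right[y - w:y + w + 1, x - d - w:x - d + w + 1].astype(np.float32)
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-6
    return float((a * b).sum() / denom)

def expand_seeds(left, right, dispo, mask, seeds):
    """Grow the disparity map dispf from the seed pixels: pop the best seed, try
    disparities d-1, d, d+1 at its untouched 4-neighbours inside the mask, keep
    the disparity with the highest NCC, and push the neighbour as a new seed."""
    dispf = np.zeros_like(dispo)
    done = np.zeros(dispo.shape, bool)
    heap = []
    for y, x in seeds:
        d = int(round(dispo[y, x]))
        heapq.heappush(heap, (-ncc(left, right, y, x, d), y, x, d))  # max-heap via negation
        dispf[y, x] = d
        done[y, x] = True
    while heap:
        _, y, x, d = heapq.heappop(heap)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < dispo.shape[0] and 0 <= nx < dispo.shape[1] \
                    and mask[ny, nx] == 1 and not done[ny, nx]:
                costs = [(ncc(left, right, ny, nx, dd), dd) for dd in (d - 1, d, d + 1)]
                best_cost, best_d = max(costs)
                dispf[ny, nx] = best_d
                done[ny, nx] = True
                heapq.heappush(heap, (-best_cost, ny, nx, best_d))
    return dispf
```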
The model building module 27 is used to build a three-dimensional model of the face from the disparity map obtained by the disparity map generation module 26.
In this embodiment, the model building module 27 first computes a three-dimensional point cloud from the disparity map obtained by the disparity map generation module 26, as follows:

Z(u,v) = B·f / d(u,v),
X(u,v) = (u − u0)·B / d(u,v),
Y(u,v) = (v − v0)·B / d(u,v),

where X(u,v), Y(u,v) and Z(u,v) are the X, Y and Z coordinates of the three-dimensional point corresponding to the pixel at (u,v); B is the distance between the first digital image acquisition device 11 and the second digital image acquisition device 12; (u0, v0) are the coordinates of the optical center of the first digital image acquisition device 11; and d(u,v) is the disparity value at (u,v) in the optimized disparity map.
Then, Poisson surface reconstruction is used to obtain the final smooth three-dimensional face model.
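A hedged sketch of this final step: the disparity map is converted into a point cloud with the formulas above and a smooth surface is then fitted by Poisson reconstruction. Open3D is assumed here purely as one available implementation of Poisson reconstruction, and the normal-estimation parameters are illustrative rather than part of the disclosure.

```python
import numpy as np
import open3d as o3d

def disparity_to_mesh(dispf, B, f, u0, v0, depth=9):
    """Triangulate each valid disparity into a 3D point (X, Y, Z) using the
    formulas above, then fit a smooth surface with Poisson reconstruction."""
    v, u = np.nonzero(dispf > 0)
    d = dispf[v, u]
    Z = B * f / d
    X = (u - u0) * B / d
    Y = (v - v0) * B / d
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.stack([X, Y, Z], axis=1))
    # Poisson reconstruction needs oriented normals
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamKNN(knn=30))
    pcd.orient_normals_towards_camera_location(np.zeros(3))
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=depth)
    return mesh
```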
Referring to FIG. 2, a three-dimensional face generation method according to an embodiment of the present invention includes the following steps:
A three-dimensional data acquisition unit 10 is provided. The three-dimensional data acquisition unit 10 includes a first digital image acquisition device 11, a second digital image acquisition device 12, and an infrared structured-light projection device 13. The first digital image acquisition device 11, the second digital image acquisition device 12, and the infrared structured-light projection device 13 are arranged along a horizontal straight line, with the infrared structured-light projection device 13 placed between the first digital image acquisition device 11 and the second digital image acquisition device 12.
A system coordinate frame is established to determine the positions of the first digital image acquisition device 11, the second digital image acquisition device 12, and the infrared structured-light projection device 13 within that frame.
Digital images of the face from two different angles and an infrared structured-light image containing face depth data are acquired.
The two digital images are rectified as a stereo pair to eliminate vertical parallax;
geometric transformation and super-resolution transformation are applied to the infrared structured-light image;
texture segmentation is performed on the digital image to obtain a binary mask image;
seed pixels are extracted;
from the disparity map dispo produced by the geometric and super-resolution transformation of the infrared structured-light image, the binary mask image mask, and the seed pixels, a disparity map dispf based on seed pixel expansion with the same resolution as that of the first digital image acquisition device 11 is obtained;
a three-dimensional model of the face is built from the disparity map dispf.
For the specific algorithms used in each step of the above three-dimensional face generation method, reference may be made to the description of the three-dimensional face model generation system 100 above; details are not repeated here.
The three-dimensional face model generation system 100 and the three-dimensional face model generation method use an infrared structured-light projection device together with digital image acquisition devices to collect three-dimensional data of the face and generate a three-dimensional face model from that data. Because no visible light is emitted during acquisition, the user experience is good, and because no special lighting setup is required, the system is highly stable. In addition, the overall cost of the three-dimensional face model generation system 100 is low and a high spatial resolution can be guaranteed.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention in any form. Any simple modification, equivalent change, or refinement made to the above embodiments in accordance with the technical essence of the present invention still falls within the scope of protection of the technical solution of the present invention.