CN103530907B - Complicated three-dimensional model drawing method based on images - Google Patents

Complicated three-dimensional model drawing method based on images

Info

Publication number
CN103530907B
CN103530907B · CN201310497271.8A · CN201310497271A · CN 103530907 B · CN 201310497271 A
Authority
CN
China
Prior art keywords
visual angle
virtual
image
virtual visual
under
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310497271.8A
Other languages
Chinese (zh)
Other versions
CN103530907A (en)
Inventor
向开兵
郝爱民
吴伟和
李帅
王德志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN ESUN DISPLAY CO Ltd
Beihang University
Original Assignee
SHENZHEN ESUN DISPLAY CO Ltd
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN ESUN DISPLAY CO Ltd, Beihang University
Priority to CN201310497271.8A
Publication of CN103530907A
Application granted
Publication of CN103530907B
Legal status: Active (current)
Anticipated expiration


Abstract

The invention provides an image-based rendering method for complicated three-dimensional models. Vertices are uniformly selected on a spherical surface surrounding the model as camera positions and, taking the sphere center as the camera's target point, a color image and a depth image of the model are obtained at each sampling viewing angle. The method comprises triangulating the spherical surface according to the sampling-point coordinates to determine the triangle in which a virtual viewing angle lies, taking the viewing angles whose sampling points are the vertices of that triangle as reference viewing angles, and rendering the model at the virtual viewing angle with the depth images and color images of the reference viewing angles: the mapping relationships between the virtual viewing angle and the pixels of the three reference viewing angles are calculated from the parameters of the reference viewing angles, the image at the virtual viewing angle is drawn with the appropriate reference-viewing-angle pixels or a background pixel, taking the depth images as references, and finally the drawn color image is optimized. The method meets real-time requirements and achieves a highly lifelike rendering effect.

Description

Translated from Chinese

Image-Based Rendering Method for Complicated Three-Dimensional Models

Technical Field

The invention relates to an image-based photorealistic rendering method, mainly used for the realistic rendering of complex models at virtual viewing angles.

Background Art

The general process of photorealistic rendering in traditional computer graphics is as follows: the user inputs the geometric characteristics of the object and performs geometric modeling; then, from the lighting information of the environment in which the model is placed, the physical properties of the model such as smoothness, transparency, reflectivity and refractive index, and the surface texture, the color value of every pixel of the object at a specific viewing angle is computed through spatial transformation, perspective transformation and so on. However, the modeling process of this method is complex, the computation and display overhead is large, the time complexity is strongly coupled to the model complexity, the method is not suited to rendering complex models, and it is difficult to obtain photorealistic results.

Image-based rendering (IBR) takes images as the basic input and synthesizes images at virtual viewing angles without reconstructing a geometric model. It has very broad application prospects in video games, virtual touring, e-commerce, industrial inspection and other fields, and is therefore also a research hotspot in the field of photorealistic rendering of 3D graphics. The image-based rendering method for complicated three-dimensional models proposed by the present invention is an IBR method.

The main image-based rendering methods are as follows:

1. Hybrid geometry- and image-based modeling and rendering (Hybrid Geometry and Image-based Approach)

Paul E. Debevec (Reference 1: Paul E. Debevec, Camillo J. Taylor, Jitendra Malik, "Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach", in SIGGRAPH '96: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 11-20) proposed a hybrid geometry- and image-based modeling and rendering method whose main steps are as follows:

a. Photograph the scene and interactively specify the edges of the model;

b. Generate a rough model of the object;

c. Refine the model with a model-based stereo vision algorithm;

d. Synthesize new views using view-dependent texture mapping.

An example of this process is shown in Figure 5. The advantage of this method is that it is simple and fast, and new views can be obtained from a small number of photographs; the disadvantage is that the process requires the model contour to be specified manually, so it is only applicable to scenes with regular shapes, such as ordinary buildings, and not to complex models.

2. View interpolation and view transformation methods (View Interpolation, View Transformation)

View interpolation and transformation methods generate the image at a virtual viewing angle directly from photographs taken at reference points. These methods (Reference 2: Geetha Ramachandran, Markus Rupp, "MultiView Synthesis From Stereo Views", IWSSIP 2012, 11-13 April 2012, pp. 341-345; Reference 3: Ankit K. Jain, Lam C. Tran, Ramsin Khoshabeh, Truong Q. Nguyen, "Efficient Stereo-to-Multiview Synthesis", ICASSP 2011, pp. 889-892; Reference 4: S. C. Chan, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image-Based Rendering and Synthesis", IEEE Signal Processing Magazine, vol. 24, no. 6, pp. 22-33) take as input rectified color images and depth images from two viewing angles, and output the image at a virtual viewing angle lying on the straight line (the baseline) determined by the two reference points. The specific process is as follows:

a. Stereo matching, generating the initial synthesized-view image;

b. Optimization, finding possible hole points by edge detection;

c. Filling in and generating the depth map corresponding to the synthesized view;

d. Image reconstruction, filling the holes of the color image according to the depth image.

An example of this method is shown in Figure 6. The advantages of this method are that the process is simple, the peak signal-to-noise ratio (PSNR) of the generated image is high (PSNR characterizes the similarity between the processed image and the original image; the higher the PSNR, the more faithful the synthesized image), and holes are filled well. However, this method can only produce images of virtual viewing angles lying on the baseline, the rectification of the pictures introduces projection errors, and only approximate intermediate images can be generated.
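As a concrete illustration of the quality measure just mentioned, the following is a minimal sketch of how PSNR can be computed between a synthesized view and a reference image. It assumes 8-bit images; the function name is illustrative and not part of the patent.

```python
import numpy as np

def psnr(reference, synthesized, max_value=255.0):
    """Peak signal-to-noise ratio between a reference image and a synthesized image.

    Both inputs are arrays of the same shape (e.g. H x W x 3, 8-bit values).
    A higher PSNR means the synthesized image is closer to the reference.
    """
    diff = reference.astype(np.float64) - synthesized.astype(np.float64)
    mse = np.mean(diff ** 2)           # mean squared error over all pixels and channels
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)
```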

Summary of the Invention

Existing photorealistic rendering methods for three-dimensional models have the following shortcomings. Geometry-based methods involve complex model acquisition and reconstruction, the rendering process is strongly affected by model complexity and by the lighting properties of the model, the rendering result is not very realistic, and they are not suitable for rendering complex models. In existing image-based rendering methods, the synthesized viewing angle is restricted to the baseline between two reference viewing angles, so an image of the object at an arbitrary viewing angle cannot be generated.

In view of the shortcomings of the prior art, the present invention proposes an image-based rendering method for complicated three-dimensional models, which comprises the following steps:

(1) Calibration of the virtual viewing angle: triangulate the spherical surface surrounding the model according to the camera position coordinates of the sampling points, determine the triangular patch in which the virtual viewing angle lies, and take the viewing angles corresponding to the three vertices of this triangular patch as the reference viewing angles; the virtual viewing angle can then be expressed as a linear combination of the reference viewing angles;

(2) Computation and rendering: from the positions of the virtual viewing angle and the reference viewing angles and from the camera parameters, calculate the mapping relationship between the coordinates of each pixel in the three reference-viewing-angle images and the pixel coordinates at the virtual viewing angle; according to this mapping relationship, map every pixel of the color image at a reference viewing angle into the image at the virtual viewing angle and calculate the coordinate and depth value of that pixel in the virtual-viewing-angle image; when pixels from several reference viewing angles map to the same position, take the pixel value with the smaller depth value; at the same time, mark all pixels of the virtual-viewing-angle image that have been filled by reference-viewing-angle pixels, and construct a grayscale map reflecting the mapping from the reference viewing angles to the virtual viewing angle;

(3) Image optimization: for the holes in the virtual-viewing-angle image, i.e. the pixels onto which no reference image is mapped in step (2), extract the edge contour information from the grayscale map reflecting the mapping from the reference viewing angles to the virtual viewing angle, apply median filtering to the generated color image along the edge contour, and fill the holes with the values of neighboring pixels; the median filtering also removes noise pixels.

Step (1) calibrates the virtual viewing angle and determines the reference viewing angles used for rendering it; step (2) establishes the mapping from pixels at the reference viewing angles to pixels at the virtual viewing angle, realizing the rendering of the complex model at the virtual viewing angle.

In steps (2) and (3), CUDA (Compute Unified Device Architecture) parallel computing is used to accelerate the rendering and optimization at the virtual viewing angle and meet the requirement of real-time interaction.

The principle of the present invention is as follows:

The invention provides an image-based rendering method for complicated three-dimensional models. Vertices are uniformly selected on the spherical surface surrounding the model as camera positions and, with the sphere center as the camera's target point, the color image and depth image of the model at each sampling viewing angle are obtained. The spherical surface is triangulated according to the sampling-point coordinates, the triangle in which the virtual viewing angle lies is determined, the viewing angles whose sampling points are the vertices of this triangle are taken as the reference viewing angles, and the model at the virtual viewing angle is rendered with the depth images and color images of the reference viewing angles: first, the parameters of the reference viewing angles are used to calculate the mapping relationships between pixels of the virtual viewing angle and pixels of each of the three reference viewing angles; second, taking the depth images as references, suitable reference-viewing-angle pixels or background pixels are selected to draw the image at the virtual viewing angle; finally, the drawn color image is optimized. CUDA acceleration is used throughout the rendering process, so the images are processed in parallel and quickly. The invention meets real-time requirements while producing a very realistic rendering result.

Compared with the prior art, the present invention has the following advantages:

(1) The virtual viewing angle of the present invention can move arbitrarily in space. The model at the virtual viewing angle is rendered from three reference points, which both ensures that the virtual viewing angle can move freely in the horizontal and vertical dimensions and minimizes the input volume and storage overhead of the algorithm;

(2) The rendering method proposed by the present invention is very stable; the time complexity of the algorithm is only weakly coupled to the complexity of the scene, which makes it especially suitable for rendering complex models.

Brief Description of the Drawings

Figure 1 is the flowchart adopted by the present invention;

Figure 2 is a schematic diagram of the principle of the algorithm;

Figure 3 shows the positional relationship between the virtual viewing angle and the reference-point viewing angles when the virtual viewing angle lies inside a triangular patch;

Figure 4 shows the forward transformation matrix and the inverse transformation matrix;

Figure 5 shows the hybrid geometry- and image-based modeling process, in which (a) the contour of the object to be reconstructed is specified interactively, (b) a rough model of the object is obtained, (c) the model is refined with a model-based stereo vision algorithm, and (d) the final scene is obtained by view-dependent texture mapping;

Figure 6 is an example of the disparity-map method: the initial input is a rectified color image (shown as a grayscale rendition), the initially generated image is the color image obtained after the pixel mapping and filling process (also shown in grayscale); note the holes inside the box. The final output of the algorithm is the color image after hole filling (shown in grayscale), in which the holes inside the box have been eliminated;

Figure 7 is the color image at the virtual viewing angle after preliminary mapping and filling (shown as a grayscale rendition); note the noise around the model and the holes inside it;

Figure 8 is the grayscale map at the virtual viewing angle after preliminary mapping and filling; note the noise around the model and the internal holes;

Figure 9 is the edge map after the dilation and difference operations;

Figure 10 is the result of simply median-filtering all pixels;

Figure 11 is the result of median-filtering only the edge pixels; compared with Figure 10, Figure 11 preserves detail better and looks more realistic;

Figure 12 shows part of the input images of the embodiment: the upper three pictures are schematic views of the input images (the inputs may be color images, not shown here), and the lower three pictures are the corresponding depth maps;

Figure 13 shows part of the output images of the embodiment; the output pictures are synthesized by the rendering method proposed by the present invention.

Detailed Description

The present invention is further described below with reference to the accompanying drawings and a specific embodiment.

1. A photorealistic rendering method based on depth images:

For a three-dimensional model represented by sampling points that are uniformly distributed on the spherical surface surrounding the model, each comprising the camera parameters and the depth map and color map at that camera viewing angle, the spherical surface is triangulated according to the sampling-point coordinates, the triangle in which the virtual viewing angle lies is determined, and the viewing angles whose sampling points are the vertices of this triangle are taken as the reference viewing angles. The model at the virtual viewing angle is then rendered with the depth images and color images of the reference viewing angles: first, the parameters of the reference viewing angles are used to calculate the mapping relationships between pixels of the virtual viewing angle and pixels of each of the three reference viewing angles; second, taking the depth images as references, suitable reference-viewing-angle pixels or background pixels are selected to draw the image at the virtual viewing angle; finally, the drawn color image is optimized. CUDA acceleration is used throughout the rendering process to achieve fast parallel processing of the images. The specific implementation is as follows:

1. Virtual viewing angle calibration module

In the present invention, the three-dimensional model of the object is represented by the depth images, color images and camera parameters of the different sampling points. The three-dimensional model M is represented by a tuple <K, V>, where K is a simplicial complex expressing the connection relations of the sampling points, and V is the set of sampling points, V = (vi | i = 1, 2, 3, ..., |V|), with |V| the number of sampling points. vi = (ci, di, pi) denotes the i-th sampling point, where ci and di are its color image and depth image and pi = (pci, poi, aspi, fovi, zni, zfi) are its camera parameters: pci is the camera position, poi the camera target position, aspi the aspect ratio of the camera's field of view, fovi the extent of the camera's field of view, and zni and zfi the minimum and maximum of the camera's effective depth, respectively.
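To make the data layout concrete, the following is a minimal sketch of how the <K, V> representation could be held in code; the class and field names mirror the symbols above but are otherwise illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraParams:
    pc: np.ndarray    # camera position (3,)
    po: np.ndarray    # camera target position (3,)
    asp: float        # aspect ratio of the field of view
    fov: float        # extent of the field of view
    zn: float         # minimum effective depth
    zf: float         # maximum effective depth

@dataclass
class SamplePoint:
    color: np.ndarray       # c_i, H x W x 3 color image
    depth: np.ndarray       # d_i, H x W depth image
    params: CameraParams    # p_i

@dataclass
class Model:
    K: list[tuple[int, int, int]]   # simplicial complex: triangles as index triples
    V: list[SamplePoint]            # the set of sampling points
```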

Before rendering the model at the virtual viewing angle, the three sampling points closest to the virtual viewing angle must be found, i.e. the virtual viewing angle must be calibrated. Because all sampling points are uniformly distributed on the spherical surface surrounding the object, once the sphere has been triangulated according to the sampling-point coordinates it is only necessary to determine the triangular patch in which the virtual viewing angle lies; the three vertices of this patch are the required nearest sampling points, and they are called the reference points of the virtual viewing angle.

$$\langle v_1, v_2, v_3 \rangle = f(v) \qquad (1)$$

Here v is the virtual viewing angle, <v1, v2, v3> is the triangular patch in which the virtual viewing angle lies, and v1, v2, v3 are the reference points of v. In the present invention, the intersection of the vector pointing from the virtual-viewing-angle camera position toward the sphere center with the polyhedron approximating the sphere surrounding the object is computed, and the three nearest reference points are determined from the triangular patch in which this intersection lies.

The virtual viewing angle and the reference points have the positional relationship shown in Figure 3. From analytic geometry it follows that, under the positional relationship of Figure 3, the virtual viewing angle can be linearly synthesized from the reference points and satisfies:

$$\vec{v} = \alpha\,\vec{v}_1 + \beta\,\vec{v}_2 + (1-\alpha-\beta)\,\vec{v}_3, \qquad 0 \le \alpha \le 1,\;\; 0 \le \beta \le 1,\;\; 0 \le 1-\alpha-\beta \le 1 \qquad (2)$$

where $\vec{v}$, $\vec{v}_1$, $\vec{v}_2$, $\vec{v}_3$ denote the vectors pointing toward the sphere center from the coordinate of the virtual viewing angle and from the coordinates of the three reference points, respectively.

As can be seen from Figure 3, the coordinates of the three reference points and the sphere center form a tetrahedron, and the coordinate of the virtual viewing angle lies inside the triangular patch bounded by the reference-point coordinates. Let $\vec{e}_1$ and $\vec{e}_2$ denote edge vectors of this triangular patch; the volume of the tetrahedron can then be written as the mixed product of three edges that do not lie in the same plane:

$$u = \frac{1}{6}\,\vec{v}\times\vec{e}_1\cdot\vec{e}_2 = -\frac{1}{6}\,\vec{v}_1\cdot\vec{v}_2\times\vec{e}_2 \qquad (3)$$

Rearranging formula (2) gives:

$$\vec{v}_1 = \frac{\vec{v} - \beta\,\vec{v}_2 - (1-\alpha-\beta)\,\vec{v}_3}{\alpha} \qquad (4)$$

Substituting formula (4) into the second half of formula (3) yields:

$$\alpha = -\frac{\vec{v}\cdot\vec{v}_2\times\vec{e}_2}{\vec{v}\times\vec{e}_1\cdot\vec{e}_2} \qquad (5)$$

Similarly, we obtain:

$$\beta = -\frac{\vec{v}\cdot\vec{v}_1\times\vec{e}_2}{\vec{v}\times\vec{e}_1\cdot\vec{e}_2} \qquad (6)$$

In summary, it is only necessary to traverse all the patches and use formulas (5) and (6) to compute α and β for each triangular patch; if α and β satisfy the constraints of formula (2), the virtual viewing angle is enclosed by that patch and can be expressed linearly by its reference points according to formula (2).
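The following is a minimal sketch of this calibration step, assuming the sampling-point positions and their spherical triangulation are given. Since the extracted text does not spell out how $\vec{e}_1$ and $\vec{e}_2$ are defined, the sketch finds the containing patch and the weights of formula (2) with a standard ray-triangle intersection (Moller-Trumbore), which yields the same α and β; vertex order (p1, p2, p3) corresponds to (v1, v2, v3).

```python
import numpy as np

def locate_virtual_view(cam_pos, triangles, vertices, eps=1e-9):
    """Find the spherical triangle crossed by the ray from the virtual camera
    position toward the sphere center (the origin), plus the alpha/beta weights
    of formula (2).

    triangles : (T, 3) integer array of sampling-point indices
    vertices  : (N, 3) float array of sampling-point positions
    Returns (triangle_index, alpha, beta) or None if no patch is hit.
    """
    orig = np.asarray(cam_pos, dtype=np.float64)
    d = -orig                                    # ray direction toward the sphere center
    for t_idx, (i1, i2, i3) in enumerate(triangles):
        p1, p2, p3 = vertices[i1], vertices[i2], vertices[i3]
        e1, e2 = p2 - p1, p3 - p1                # edge vectors of the patch
        h = np.cross(d, e2)
        a = np.dot(e1, h)
        if abs(a) < eps:                         # ray parallel to this patch
            continue
        f = 1.0 / a
        s = orig - p1
        u = f * np.dot(s, h)
        if u < 0.0 or u > 1.0:
            continue
        q = np.cross(s, e1)
        v = f * np.dot(d, q)
        if v < 0.0 or u + v > 1.0:
            continue
        if f * np.dot(e2, q) > eps:              # intersection lies in front of the camera
            # hit point = (1-u-v)*p1 + u*p2 + v*p3, so alpha = 1-u-v, beta = u
            return t_idx, 1.0 - u - v, u
    return None
```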

2. Computation and rendering module

This module contains two sub-processes, the computation process and the rendering process. The computation process determines the mapping from pixels at each reference-point viewing angle to pixels at the virtual viewing angle; the rendering process, for every pixel at the virtual viewing angle, selects one to three pixels from the reference-point viewing angles, or a background pixel, according to the mapping already obtained.

2.1 Computation process:

Given the pixel coordinates and pixel value of the depth image at a reference-point viewing angle, combined with the camera parameters of that reference point, the coordinate in the three-dimensional world coordinate system of the point corresponding to this pixel can be computed, and this process is reversible. That is, there is a bijection between the pixel coordinates and pixel values of the depth image at any reference-point viewing angle and the coordinates of the three-dimensional object in the world coordinate system:

$$\vec{pixel} = M\cdot\vec{object}, \qquad \vec{object} = M^{-1}\cdot\vec{pixel}, \qquad \vec{pixel} = (i, j, l, depth), \qquad \vec{object} = (x, y, z, depth) \qquad (7)$$

where i, j are the pixel coordinates, x, y, z are the coordinates in the world coordinate system, and depth is the pixel value of the depth image. M is an invertible matrix determined by the camera parameters of the sampling point. In the present invention, the matrix M converting world coordinates into pixel coordinates is called the forward transformation matrix, and M⁻¹ is the inverse transformation matrix. The computation of the forward and inverse transformation matrices is shown in Figure 4; the forward transformation matrix is taken as the example.

The coordinates of the object in the world coordinate system are first converted into camera-space coordinates by the camera view transformation, and then into pixel coordinates by the perspective projection transformation, i.e.:

$$M = mProject \cdot mLookAt \qquad (8)$$

Combining formulas (7) and (8) gives:

$$\vec{pixel} = mProject \cdot mLookAt \cdot \vec{object} \qquad (9)$$

$$\vec{object} = (mProject \cdot mLookAt)^{-1} \cdot \vec{pixel}$$

Here mLookAt is the transformation matrix from world coordinates to camera-space coordinates; it is determined by the camera position pc, the target-point coordinate po and the up-direction vector up, and has the following form:

$$mLookAt = \begin{pmatrix} \vec{xaxis}.x & \vec{yaxis}.x & \vec{zaxis}.x & 0 \\ \vec{xaxis}.y & \vec{yaxis}.y & \vec{zaxis}.y & 0 \\ \vec{xaxis}.z & \vec{yaxis}.z & \vec{zaxis}.z & 0 \\ -\vec{xaxis}\cdot po & -\vec{yaxis}\cdot po & -\vec{zaxis}\cdot po & 1 \end{pmatrix}$$

$$\vec{zaxis} = \frac{pc - po}{|pc - po|}, \qquad \vec{xaxis} = \frac{up \times \vec{zaxis}}{|up \times \vec{zaxis}|}, \qquad \vec{yaxis} = \vec{zaxis} \times \vec{xaxis} \qquad (10)$$

mProject is the perspective projection matrix; it is determined by the camera's field-of-view angle (fov), aspect ratio (asp), nearest depth (zn) and farthest depth (zf), and has the following form:

$$mProject = \begin{pmatrix} xScale & 0 & 0 & 0 \\ 0 & yScale & 0 & 0 \\ 0 & 0 & \dfrac{zf}{zf - zn} & 1 \\ 0 & 0 & \dfrac{-zn \cdot zf}{zf - zn} & 0 \end{pmatrix} \qquad (11)$$

$$yScale = \cot\!\left(\frac{fov}{2}\right), \qquad xScale = \frac{yScale}{asp}$$
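For reference, the two matrices can be assembled as in the following numpy sketch; this is not the patent's own code. It assumes the row-vector (Direct3D-style) convention suggested by the matrix layouts above, and it uses the camera position pc in the translation row of the look-at matrix (formula (10) as extracted writes po there, so adjust if reproducing the text exactly).

```python
import numpy as np

def look_at_matrix(pc, po, up=(0.0, 1.0, 0.0)):
    """mLookAt of formula (10): world coordinates -> camera-space coordinates,
    written for row vectors (point_row @ matrix)."""
    pc, po, up = (np.asarray(v, dtype=np.float64) for v in (pc, po, up))
    zaxis = (pc - po) / np.linalg.norm(pc - po)
    xaxis = np.cross(up, zaxis)
    xaxis /= np.linalg.norm(xaxis)
    yaxis = np.cross(zaxis, xaxis)
    m = np.eye(4)
    m[0, :3] = [xaxis[0], yaxis[0], zaxis[0]]
    m[1, :3] = [xaxis[1], yaxis[1], zaxis[1]]
    m[2, :3] = [xaxis[2], yaxis[2], zaxis[2]]
    m[3, :3] = [-np.dot(xaxis, pc), -np.dot(yaxis, pc), -np.dot(zaxis, pc)]
    return m

def projection_matrix(fov, asp, zn, zf):
    """mProject of formula (11): perspective projection from camera space."""
    y_scale = 1.0 / np.tan(fov / 2.0)      # cot(fov/2)
    x_scale = y_scale / asp
    m = np.zeros((4, 4))
    m[0, 0] = x_scale
    m[1, 1] = y_scale
    m[2, 2] = zf / (zf - zn)
    m[2, 3] = 1.0
    m[3, 2] = -zn * zf / (zf - zn)
    return m
```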

Therefore, the mapping from a pixel at the viewing angle of a reference point to a pixel at the virtual viewing angle is:

$$\vec{pixel}_v = mProject_v \cdot mLookAt_v \cdot \left( mProject_{v_i} \cdot mLookAt_{v_i} \right)^{-1} \cdot \vec{pixel}_{v_i} \qquad (12)$$

where $\vec{pixel}_v$ is the pixel coordinate and depth value at the virtual viewing angle, $\vec{pixel}_{v_i}$ is the pixel coordinate and depth value at reference point $v_i$, $mLookAt_v$ and $mProject_v$ are the transformation matrix and perspective projection matrix from world coordinates to the camera coordinates of the virtual viewing angle, and $mLookAt_{v_i}$ and $mProject_{v_i}$ are the transformation matrix and perspective projection matrix from world coordinates to the camera coordinates of reference point $v_i$.
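A compact sketch of formula (12) follows: a reference-view pixel is back-projected to world coordinates and re-projected into the virtual view. The row-vector multiplication order matches the matrices built above (formula (9) writes the same product in column-vector order), and the exact packing of (i, j, depth) into the homogeneous pixel vector is an assumption, since the text does not spell it out.

```python
import numpy as np

def remap_pixel(pixel_ref, m_ref, m_virt):
    """Carry one reference-view pixel into the virtual view, as in formula (12).

    pixel_ref : 4-component homogeneous pixel vector of the reference view, as in formula (7)
    m_ref     : combined world->pixel matrix of reference view v_i (look_at @ projection)
    m_virt    : combined world->pixel matrix of the virtual view  (look_at @ projection)
    """
    world = np.asarray(pixel_ref, dtype=np.float64) @ np.linalg.inv(m_ref)  # back-project to world
    out = world @ m_virt                                                    # re-project into the virtual view
    return out   # a homogeneous normalization follows, depending on how (i, j, depth) are packed
```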

2.2 Rendering process:

The image at the virtual viewing angle is stored in a color image. Starting from formula (12), a pixel at the virtual viewing angle may correspond to pixels at the reference-point viewing angles in several ways:

Case 1: no pixel of any reference-point viewing angle corresponds to the pixel of the virtual-viewing-angle image; this pixel is then part of a hole;

Case 2: exactly one pixel of a reference-point viewing angle corresponds to the pixel of the virtual-viewing-angle image; that reference-point pixel is then used directly to fill the virtual-viewing-angle image;

Case 3: two or three pixels of reference-point viewing angles correspond to a pixel of the virtual-viewing-angle image; the virtual-viewing-angle image is then drawn according to formula (13), where p is the pixel at the virtual viewing angle and p1, p2, p3 are the corresponding pixels at the three reference viewing angles. In principle the pixel of the reference-point viewing angle with the smallest depth value is chosen to draw the image of the virtual viewing angle; αi is the weight of the reference-point viewing angle obtained in formula (2).

$$p = \sum_{i=1}^{|Q|} \frac{\alpha_i}{\alpha}\, p_i, \quad p_i \in Q, \qquad Q = \{\, q \mid q - p' < thresh,\ p' = \min(p_1 \cdots p_n) \,\}, \qquad \alpha = \sum_i^{|Q|} \alpha_i, \qquad n = 2, 3 \qquad (13)$$

At the same time, all pixels belonging to Case 2 and Case 3 are marked during rendering, and the marking information is stored in a grayscale map. CUDA parallel computing is used during rendering, so many pixels of the virtual-viewing-angle image are synthesized at the same time, which greatly accelerates rendering.
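The following is a minimal sketch of the Case 3 blend of formula (13), assuming the candidate colors, their depths in the virtual view, and the weights αi from formula (2) have already been gathered for one virtual-view pixel (Cases 1 and 2 are handled before this point); the function name and the default thresh value are illustrative.

```python
import numpy as np

def blend_candidates(colors, depths, weights, thresh=15.0):
    """Blend the 2-3 reference-view pixels that map to one virtual-view pixel.

    colors  : list of RGB values of the candidate reference pixels
    depths  : list of their depth values in the virtual view
    weights : the alpha_i weights of the reference views from formula (2)
    Keeps the candidates whose depth is within `thresh` of the smallest depth
    and blends them with weights alpha_i / alpha, as in formula (13).
    """
    depths = np.asarray(depths, dtype=np.float64)
    keep = depths - depths.min() < thresh          # the set Q of formula (13)
    alpha = sum(w for w, k in zip(weights, keep) if k)
    out = np.zeros(3)
    for c, w, k in zip(colors, weights, keep):
        if k:
            out += (w / alpha) * np.asarray(c, dtype=np.float64)
    return out
```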

3. Image optimization module

The color image output by the rendering process above contains noise and holes. The cause of the holes has been explained in the computation and rendering module; the noise arises because there are errors in the depth-information extraction and because the mapping obtained from formula (12) is not necessarily a mapping between integer pixels, so noise appears during rendering, as shown in Figures 7 and 8. A simple remedy is to median-filter the whole image at the virtual viewing angle, which removes most of the noise and holes; although simple, however, this blurs and over-smooths the image, as shown in Figure 10. The present invention therefore first applies dilation and difference operations to the grayscale map to extract the edge contour of the image, and then performs median filtering along the edge contour, filling the holes with the surrounding pixels while removing the noise at the image edges. The image optimization module contains the following sub-processes:

3.1 Extracting edge pixels

Dilation is a local-maximum operation: the image is convolved with a kernel, i.e. the maximum pixel value within the region covered by the kernel is computed and assigned to the pixel designated by the kernel's anchor point. The color image and the grayscale map generated by the computation and rendering module both contain holes, and dilating the grayscale map removes the holes. Taking the difference between the dilated grayscale map and the original grayscale map yields the edge map, which stores the edge contour information; the holes lie within this edge contour, as shown in Figure 9.
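A minimal OpenCV sketch of this step follows, assuming `gray_mask` is the grayscale coverage map produced by the rendering module; the 3x3 structuring element is an assumption, since the text does not give the kernel size.

```python
import cv2
import numpy as np

def edge_band(gray_mask, kernel_size=3):
    """Dilate the grayscale coverage map and subtract the original, keeping only
    the contour/hole band (the edge map of Figure 9)."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    dilated = cv2.dilate(gray_mask, kernel, iterations=1)   # closes the small holes
    return cv2.absdiff(dilated, gray_mask)                  # nonzero only near edges and holes
```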

3.2 Median filtering

In the present invention, median filtering is used to smooth the hole pixels: the center pixel of the filter template is replaced by the median of the pixels in its square neighborhood. Because median filtering distorts the image, the pixels that are replaced are strictly limited to the edge contour; this minimizes the number of filtering operations and preserves detail. At the same time, since noise signals are mostly isolated points in space, most edge noise points are covered by background pixels during the median filtering, so the noise is removed. The final filtered result is shown in Figure 11.
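As a sketch of this restricted filtering, the code below median-filters the whole 8-bit color image once and then copies the filtered values back only at the pixels flagged by the edge map, leaving all other pixels untouched; the window size is an assumption.

```python
import cv2
import numpy as np

def filter_edges(color, edge_mask, ksize=3):
    """Median-filter only the pixels flagged in edge_mask (holes and contour noise),
    preserving detail elsewhere (the effect shown in Figure 11)."""
    filtered = cv2.medianBlur(color, ksize)    # full-image median filter (8-bit image assumed)
    out = color.copy()
    ys, xs = np.nonzero(edge_mask)
    out[ys, xs] = filtered[ys, xs]             # replace only the edge/hole pixels
    return out
```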

As in the rendering process, CUDA parallel computing is used to accelerate the image operations and increase the running speed of the rendering method.

2. Implementation process

The specific implementation of the present invention is described below, taking the rendering of a Jade Buddha as an example.

(1) 162 sampling points are selected uniformly on the outer sphere surrounding the Jade Buddha, with the sphere center as the coordinate origin and r = 430; the coordinates of the i-th sampling point are vi = (xi, yi, zi) and satisfy the following formula:

$$x_i = r\left(1 - \frac{2i-1}{n}\right)\cos\!\left(\arcsin\!\left(1 - \frac{2i-1}{n}\right) n\pi\right), \quad
y_i = r\left(1 - \frac{2i-1}{n}\right)\sin\!\left(\arcsin\!\left(1 - \frac{2i-1}{n}\right) n\pi\right), \quad
z_i = r\cos\!\left(\arcsin\!\left(1 - \frac{2i-1}{n}\right)\right) \qquad (14)$$

where n = 162.

(2) The outer sphere is triangulated according to the sampling-point coordinates, and the model data are grouped according to the triangulation result. The present invention uses triangle approximation to construct a polyhedron inscribed in the sphere with triangles as the basic primitive. After the division, there are 162 sampling points and 320 triangular patches on the sphere.
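For orientation, the vertex and face counts above are exactly those of an icosahedron subdivided twice (12 to 42 to 162 vertices, 20 to 80 to 320 triangles). The sketch below builds such a triangulation; it is one standard way to obtain these counts and is not necessarily the patent's own construction (the sampling positions in this embodiment come from formula (14)).

```python
import numpy as np

def icosphere(subdivisions=2, radius=430.0):
    """Subdivide an icosahedron and push the vertices onto a sphere of the given radius."""
    t = (1.0 + np.sqrt(5.0)) / 2.0
    raw = [(-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
           (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
           (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]
    verts = [np.array(v, dtype=np.float64) / np.linalg.norm(v) for v in raw]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]

    def midpoint(cache, i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = verts[i] + verts[j]
            verts.append(m / np.linalg.norm(m))   # new vertex pushed onto the unit sphere
            cache[key] = len(verts) - 1
        return cache[key]

    for _ in range(subdivisions):
        cache, new_faces = {}, []
        for a, b, c in faces:                     # split each triangle into four
            ab, bc, ca = midpoint(cache, a, b), midpoint(cache, b, c), midpoint(cache, c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces

    return radius * np.array(verts), np.array(faces)
```

With the default arguments, `pts, tris = icosphere()` yields `len(pts) == 162` and `len(tris) == 320`, matching the counts stated in the text.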

(3) The sampling-point parameters are set, and the depth images and color images are captured. The parameters of a sampling point are pi = (pci, poi, aspi, fovi, zni, zfi); pci is the coordinate of the sampling point, computed from formula (14), and the remaining parameters are set as shown in the table below:

Table 1

poi         fovi    aspi     zni    zfi
(0, 0, 0)   47°     1.333    350    550

After the parameters are set, the color images are captured first and then the depth images. Three sampling points v1, v2, v3 are selected, where:

pc_v1 = (-182.8899, 365.7798, 132.8773)

pc_v2 = (-220.4835, 217.4600, -298.3256)

pc_v3 = (295.9221, 226.0644, 215.0000)

The captured color images and depth images are shown in Figure 12.

(4) The virtual viewing angle is calibrated and the mapping relationship is computed. In this embodiment the user controls the virtual viewing angle with the keyboard and mouse to roam through the three-dimensional scene; the initial virtual viewing angle is (0, 430, 0), and the change between adjacent virtual viewing angles is specified interactively by the user. Given the current virtual viewing angle, the triangular patches obtained in step (2) are traversed, the triangular patch in which the virtual viewing angle lies is determined according to formulas (2), (5) and (6), and the virtual viewing angle is calibrated with the vectors pointing from the reference points toward the sphere center. The mapping relationship of formula (12) is then computed from the parameters of the virtual viewing angle and of the sampling points.

(5) The image at the virtual viewing angle is synthesized and median filtering is applied.

a. When a pixel of the virtual viewing angle has no corresponding reference-point pixel, a background pixel is used to fill the virtual-viewing-angle image;

b. When a pixel of the virtual viewing angle corresponds to only one reference-point pixel, that corresponding pixel is used to fill the virtual-viewing-angle image;

c. When a pixel of the virtual viewing angle corresponds to the pixels of two or three reference points, thresh = 15 is set and one or more reference-point pixels are selected according to formula (13) to synthesize the virtual-viewing-angle image.

At the same time, all pixels belonging to case b and case c are marked during rendering, and the marking information is stored in a grayscale map. The grayscale map is dilated and subtracted from the original grayscale map to obtain the contour information of the model; median filtering is performed along the contour of the model to obtain the final color image at the virtual viewing angle.

The output of this embodiment is the color image at the virtual viewing angle under interactive user control; Figure 13 shows part of the output.

3. Implementation results

The implementation results are described mainly in terms of real-time performance and realism. The main configuration of the test computer is shown in the table below:

Table 2

Operating system    64-bit Windows 7 Ultimate Edition
Processor           Intel Core 2 Q9400 (4 CPUs), 2.66 GHz
Graphics card       NVIDIA GeForce GTX 470
Memory              4 GB

2.1 Real-time performance

On the test computer, with the Jade Buddha as the test object, the Jade Buddha was modeled with 3ds Max. Because of the optical characteristics of jade, light entering the material is scattered inside it before finally leaving the object and entering the field of view; this phenomenon is called subsurface scattering, and computing it is very time-consuming: rendering a single viewing angle with 3ds Max takes about 40 minutes. With the image-based three-dimensional model rendering method, the material of the model does not need to be considered during modeling, and rendering proceeds pixel by pixel; the real-time rendering rate on the test computer is about 25 frames per second, which responds quickly to user input, lets the virtual viewing angle roam freely in space, and fully meets the requirement of real-time interaction.

2.2 Realism

In this application 162 sampling points were selected, the three sampling points closest to the virtual viewing angle were chosen as reference points, and rendering was performed pixel by pixel; after the preliminary rendering, the scene was optimized to repair the holes and remove the noise in the image. The final result is shown in Figure 13. Compared with the original input of Figure 12, Figure 13 contains essentially no holes or noise pixels, and a realistic rendering result is obtained without complex subsurface-scattering computation.

Parts of the present invention that are not described in detail are techniques well known to those skilled in the art.

Claims (1)

representing the three-dimensional model of the object by the depth images and color images of different sampling points and by the camera parameters of the sampling points, wherein the three-dimensional model M is expressed by a tuple <K, V>, in which K is a simplicial complex expressing the connection relations of the sampling points; V denotes the set of sampling points, V = (vi | i = 1, 2, 3, ..., |V|), |V| representing the number of sampling points; vi = (ci, di, pi) denotes the i-th sampling point, ci and di respectively representing the color image and the depth image of the i-th sampling point, and pi representing the camera parameters of the i-th sampling point, pi = (pci, poi, aspi, fovi, zni, zfi), where pci indicates the camera position, poi the target position of the camera, aspi the aspect ratio of the camera's field of view, fovi the extent of the camera's field of view, and zni, zfi the minimum and maximum values of the effective depth of the camera, respectively;
(2) calculating and rendering: calculating, from the virtual viewing angle, the positions of the reference viewing angles and the camera parameters, the mapping relation between the coordinates of each pixel in the three reference-viewing-angle images and the pixel coordinates at the virtual viewing angle; mapping each pixel of the color image at a reference viewing angle into the image at the virtual viewing angle according to the mapping relation, calculating the coordinate and the depth value of that pixel in the image at the virtual viewing angle, and taking the pixel value with the smaller depth value when the pixels of several reference viewing angles are mapped to the same position; simultaneously marking all pixels of the image at the virtual viewing angle that are filled by reference-viewing-angle pixels, and constructing a grayscale map reflecting the mapping from the reference viewing angles to the virtual viewing angle;
CN201310497271.8A | 2013-10-21 | 2013-10-21 | Complicated three-dimensional model drawing method based on images | Active | CN103530907B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201310497271.8A (CN103530907B) | 2013-10-21 | 2013-10-21 | Complicated three-dimensional model drawing method based on images

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201310497271.8A | 2013-10-21 | 2013-10-21 | Complicated three-dimensional model drawing method based on images

Publications (2)

Publication Number | Publication Date
CN103530907A | 2014-01-22
CN103530907B | 2017-02-01

Family

ID=49932885

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201310497271.8A (Active, CN103530907B) | Complicated three-dimensional model drawing method based on images | 2013-10-21 | 2013-10-21

Country Status (1)

Country | Link
CN | CN103530907B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104331918B (en) * | 2014-10-21 | 2017-09-29 | 无锡梵天信息技术股份有限公司 | Based on earth's surface occlusion culling and accelerated method outside depth map real-time rendering room
CN105509671B (en) * | 2015-12-01 | 2018-01-09 | 中南大学 | A kind of robot tooling center points scaling method using plane reference plate
CN107169924B (en) * | 2017-06-14 | 2020-10-09 | 歌尔科技有限公司 | Method and system for establishing three-dimensional panoramic image
CN107464278B (en) * | 2017-09-01 | 2020-01-24 | 叠境数字科技(上海)有限公司 | Full-view sphere light field rendering method
CN108520342B (en) * | 2018-03-23 | 2021-12-17 | 中建三局第一建设工程有限责任公司 | BIM-based Internet of things platform management method and system
CN111402404B (en) * | 2020-03-16 | 2021-03-23 | 北京房江湖科技有限公司 | Panorama complementing method and device, computer readable storage medium and electronic equipment
CN111651055A (en) * | 2020-06-09 | 2020-09-11 | 浙江商汤科技开发有限公司 | City virtual sand table display method and device, computer equipment and storage medium
CN112199756A (en) * | 2020-10-30 | 2021-01-08 | 久瓴(江苏)数字智能科技有限公司 | Method and device for automatically determining distance between straight lines
CN114543816B (en) * | 2022-04-25 | 2022-07-12 | 深圳市赛特标识牌设计制作有限公司 | Guiding method, device and system based on Internet of things
CN115272523B (en) * | 2022-09-22 | 2022-12-09 | 中科三清科技有限公司 | Method and device for drawing air quality distribution map, electronic equipment and storage medium
CN116502371B (en) * | 2023-06-25 | 2023-09-08 | 厦门蒙友互联软件有限公司 | A method for generating ship-shaped diamond cutting models

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6697062B1 * | 1999-08-06 | 2004-02-24 | Microsoft Corporation | Reflection space image based rendering
CN102945565A (en) * | 2012-10-18 | 2013-02-27 | 深圳大学 | Three-dimensional photorealistic reconstruction method and system for objects and electronic device
CN103116897A (en) * | 2013-01-22 | 2013-05-22 | 北京航空航天大学 | Three-dimensional dynamic data compression and smoothing method based on image space

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20060017720A1 * | 2004-07-15 | 2006-01-26 | Li You F | System and method for 3D measurement and surface reconstruction
US8643701B2 * | 2009-11-18 | 2014-02-04 | University Of Illinois At Urbana-Champaign | System for executing 3D propagation for depth image-based rendering

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6697062B1 * | 1999-08-06 | 2004-02-24 | Microsoft Corporation | Reflection space image based rendering
CN102945565A (en) * | 2012-10-18 | 2013-02-27 | 深圳大学 | Three-dimensional photorealistic reconstruction method and system for objects and electronic device
CN103116897A (en) * | 2013-01-22 | 2013-05-22 | 北京航空航天大学 | Three-dimensional dynamic data compression and smoothing method based on image space

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
An image-based rendering (IBR) approach for realistic stereo view synthesis of TV broadcast based on structure from motion; Knorr S et al.; Image Processing, IEEE International Conference on; 2009-09-30; vol. 6; VI-572 to VI-575 *

Also Published As

Publication number | Publication date
CN103530907A (en) | 2014-01-22

Similar Documents

Publication | Publication Date | Title
CN103530907B (en) | Complicated three-dimensional model drawing method based on images
CN109003325B (en) | Three-dimensional reconstruction method, medium, device and computing equipment
CN102592275B (en) | Virtual viewpoint rendering method
US9098930B2 (en) | Stereo-aware image editing
CN103021017B (en) | Three-dimensional scene rebuilding method based on GPU acceleration
CN108335352B (en) | A texture mapping method for multi-view large-scale 3D scene reconstruction
US8711143B2 (en) | System and method for interactive image-based modeling of curved surfaces using single-view and multi-view feature curves
CN118657888A (en) | A sparse view 3D reconstruction method based on depth prior information
CN103500467B (en) | Threedimensional model constructive method based on image
US20050140670A1 (en) | Photogrammetric reconstruction of free-form objects with curvilinear structures
CN110223370B (en) | Method for generating complete human texture map from single-view picture
CN116543117B (en) | A high-precision three-dimensional modeling method for large scenes from drone images
CN116071278A (en) | UAV aerial image synthesis method, system, computer equipment and storage medium
CN110349247A (en) | A kind of indoor scene CAD 3D method for reconstructing based on semantic understanding
CN105205861B (en) | Tree three-dimensional Visualization Model implementation method based on Sphere Board
CN103077552B (en) | A kind of three-dimensional display method based on multi-view point video
CN111462030A (en) | Multi-image fused stereoscopic set vision new angle construction drawing method
CN104318605B (en) | Parallel lamination rendering method of vector solid line and three-dimensional terrain
Gu et al. | Ue4-nerf: Neural radiance field for real-time rendering of large-scale scene
CN116416376A (en) | Three-dimensional hair reconstruction method, system, electronic equipment and storage medium
CN119313828B (en) | 3D Gaussian reconstruction method for large-scene unmanned aerial vehicle image
CN116721210A (en) | Real-time efficient three-dimensional reconstruction method and device based on neurosigned distance field
CN118710846A (en) | Digital twin scene geometric modeling method and device
CN119888133A (en) | Three-dimensional scene reconstruction method and device for structure perception
CN119478173B (en) | Three-dimensional scene novel view synthesis method based on matched rays

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
CP02 | Change in the address of a patent holder

Address after: 518133 23rd floor, Yishang science and technology creative building, Jiaan South Road, Haiwang community Central District, Xin'an street, Bao'an District, Shenzhen City, Guangdong Province

Patentee after: SHENZHEN ESUN DISPLAY Co.,Ltd.

Patentee after: BEIHANG University

Address before: No. 4001, Fuqiang Road, Futian District, Shenzhen, Guangdong 518048 (B301, Shenzhen cultural and Creative Park)

Patentee before: SHENZHEN ESUN DISPLAY Co.,Ltd.

Patentee before: BEIHANG University

CP02 | Change in the address of a patent holder
