

Technical Field
The present invention relates to a method for rendering light field projection images of realistic virtual 3D scenes based on spatial multiplexing, and belongs to the technical field of 3D scene rendering and display.
Background Art
True three-dimensional (3D) display technology has attracted wide attention. Light field 3D display is a new type of true 3D display technology that has emerged in recent years. The multilayer-LCD 3D display system introduced in the paper "3D Display System and Algorithm Design Based on Liquid Crystal Multilayer Screens", published in the Chinese Journal of Liquid Crystals and Displays, Vol. 32, No. 4, 2017, is one concrete implementation of light field 3D display technology. At present there are two main methods for acquiring the 3D data shown by a light field 3D display system. The first is 360-degree multi-view image acquisition of a real 3D scene: a CCD camera array distributed around the circumference of the scene photographs its different sides from multiple viewpoints to obtain the corresponding images. The second is to photograph a virtual 3D scene with a virtual camera array, obtaining projection images of the light field of the virtual 3D scene.
When light field 3D display technology is applied to film and entertainment, the second method offers greater flexibility for content production, because the 3D scene can be created in software as needed. The process of generating light field projection images of a virtual 3D scene has been discussed in the literature, for example in the paper cited above, in the doctoral dissertation "Research on the Mechanism and Implementation of Horizontal Light Field 3D Display" completed by Xia Xinxing at Zhejiang University in 2014, and in the master's thesis "Research on Near-Eye 3D Display Based on Multilayer Liquid Crystal" completed by Ding Jun at Zhejiang University in 2016. For a near-eye light field display system, viewpoint samples are usually required over the entire circular pupil area. Each viewpoint sample corresponds to the viewpoint position of one virtual camera, and all the virtual cameras together form a virtual camera array. In addition, the viewport orientation, field of view, and resolution of each virtual camera must be determined; a concrete procedure for this is given in "Research on Near-Eye 3D Display Based on Multilayer Liquid Crystal". Once the viewpoint position, viewport orientation, field of view, and resolution of every virtual camera in the array are determined, 3D scene rendering techniques can render the virtual 3D scene picture captured by each virtual camera, and each picture is one light field projection image of the virtual 3D scene. To give the virtual 3D scene pictures a good sense of realism, the global illumination effects of the scene must be rendered. For a complex virtual 3D scene, rendering pictures that account for global illumination is very time-consuming.
When generating the light field projection images of a realistic virtual 3D scene, if a full scene rendering operation is executed independently for every camera in the virtual camera array, the total time grows in proportion to the number of virtual cameras. For a series of light field projection images intended for near-eye light field display, although there are subtle differences between the individual images, they usually remain clearly similar to one another. This similarity provides the physical basis for using spatial multiplexing to reduce rendering time when rendering realistic virtual 3D scene pictures.
After light emitted by a light source strikes a 3D scene point, it is scattered by that point and transported in other directions. The radiance that enters a virtual camera directly after being scattered by a 3D scene point can be divided into radiance originating from direct illumination and radiance originating from indirect illumination. Radiance from direct illumination can be estimated with Monte Carlo integration, which requires generating a number of sample points on the area light source (as shown in Figure 1) and computing the visibility between each sample point and the visible scene point. The paper published in ACM Transactions on Graphics, Vol. 15, No. 1, 1996, pp. 1-36, describes the Monte Carlo estimation of direct illumination in detail. Photon mapping is often used to compute radiance from indirect illumination: a photon map is first created with photon tracing, and ray casting is then used to find the visible scene points. For each visible scene point of diffuse type, photons near that point can be retrieved from the photon map and used to compute the radiance from indirect illumination scattered by the point into the virtual camera. However, computing this indirect radiance directly from the photons in the photon map produces noticeable low-frequency noise in the rendered 3D scene picture. The solution is the final gathering technique, which is treated in many references, for example the book Physically Based Rendering: From Theory to Implementation, 2nd Edition, by M. Pharr and G. Humphreys (Morgan Kaufmann, 2010), and the master's thesis "Research and Implementation of a Photon Mapping Algorithm Based on RenderMan" by Cui Yunpeng (Shandong University, 2014).
Ray casting is a common technique in 3D graphics rendering. Its key step is to shoot, from the viewpoint position of the virtual camera, a ray through the center of each pixel on the virtual pixel plane, and to compute the intersection of the ray with the scene geometry that lies closest to the viewpoint position. As shown in Figure 2, point E is the viewpoint position of the virtual camera and rectangle ABCD is the virtual pixel plane. The line joining E and the center point G of rectangle ABCD is perpendicular to the plane of ABCD, and the length of segment EG may be taken as 1. The vector from E to G corresponds to the viewport orientation of the virtual camera. K is the midpoint of segment BC, H the midpoint of AD, R the midpoint of AB, and S the midpoint of CD. The angle between segments EH and EK corresponds to the horizontal field of view of the virtual camera, and the angle between segments ER and ES corresponds to its vertical field of view. Each small square in rectangle ABCD represents one pixel of the virtual pixel plane. The numbers of pixel rows and columns on the virtual pixel plane are determined by the camera resolution; for example, a virtual camera resolution of 1024×768 corresponds to a virtual pixel plane containing 1024 rows and 768 columns of pixels. In 3D graphics rendering, the kd-tree spatial data structure is commonly used to organize a data set so that elements satisfying a given condition can be found quickly by key value; the kd-tree is described in detail in Computer Graphics: Principles and Practice (3rd Edition) (Pearson Education, 2014). The present invention discloses a method that spatially reuses illumination computation results to accelerate the rendering of light field projection images of realistic virtual 3D scenes, thereby supporting the fast generation of such images.
Summary of the Invention
The object of the present invention is to provide a method for rendering light field projection images of realistic virtual 3D scenes based on spatial multiplexing, which renders the virtual 3D scene pictures captured by a virtual camera array containing Ncamr × Ncamc virtual cameras and thereby provides 3D data for near-eye light field display application systems.
The technical solution of the present invention is realized as follows. In a method for rendering light field projection images of realistic virtual 3D scenes based on spatial multiplexing, a photon map is first created with photon tracing; next, the visible scene points corresponding to all virtual cameras in the virtual camera array are computed and stored in a single list; then the global illumination values of all visible scene points in the list are computed. The concrete implementation steps are as follows:
A data structure TVSPT is provided for storing data related to a visible scene point. TVSPT contains eight member variables: the position vsPos of the visible scene point; the normal vector vsNrm of the geometric object surface at the position of the visible scene point; the number nCam of the virtual camera corresponding to the visible scene point; the row number vnRow of the pixel on the virtual pixel plane of the corresponding virtual camera; the column number vnCol of the pixel on the virtual pixel plane of the corresponding virtual camera; the radiance vsL of the light scattered from the visible scene point into the corresponding virtual camera; the light source sample point position vsQ corresponding to the visible scene point; and the light source visibility vsV corresponding to the visible scene point.
A data structure TALSPT is provided for storing data related to a light source sample point. TALSPT contains two member variables: the light source sample point position lsPos and the light source visibility lsV.
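As an illustration, the two data structures can be sketched as plain record types. This is a minimal Python sketch: the member names follow the text, while the tuple representation of positions and vectors and the default values are assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class TVSPT:
    """Per-visible-scene-point record (eight members, as in the text)."""
    vsPos: Vec3        # position of the visible scene point
    vsNrm: Vec3        # surface normal at the visible scene point
    nCam: int          # number of the corresponding virtual camera
    vnRow: int         # pixel row on that camera's virtual pixel plane
    vnCol: int         # pixel column on that camera's virtual pixel plane
    vsL: float = 0.0   # radiance scattered from the point into the camera
    vsQ: Vec3 = (0.0, 0.0, 0.0)  # associated light source sample position
    vsV: int = 0       # visibility of that light source sample (0 or 1)

@dataclass
class TALSPT:
    """Per-light-source-sample record (two members)."""
    lsPos: Vec3        # light source sample position
    lsV: int           # visibility of the sample (0 or 1)
```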
1) Create a photon map using photon tracing. The concrete method is as follows:
First create, in the memory of the computer, a photon map PMap that contains no photon records. Using photon tracing, emit Npt photons from the area light source into the 3D scene, and track the process in which these Npt photons collide with geometric objects and are scattered while propagating through the 3D scene. For each photon A002, while tracking this process, add one photon record to the photon map PMap at every collision, starting from the second collision of photon A002 with the scene geometry. Each photon record comprises three components: the collision position PPos of the photon with the scene geometry, the normalized incident direction vector PVi of the photon at PPos, and the incident power PW of the photon at PPos.
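The photon-map construction above can be sketched as follows, under strong simplifying assumptions that are not part of the text: the scene is reduced to two parallel diffuse planes (floor y=0, ceiling y=1), photons are emitted straight down from a rectangular ceiling light, and termination uses Russian roulette with a fixed albedo. The names PMap, PPos, PVi, PW and Npt come from the text; everything else is invented for the sketch. Records begin at the second collision, as the text specifies.

```python
import math
import random

def trace_photons(Npt, albedo=0.6, max_bounces=8, seed=42):
    """Emit Npt photons from a ceiling area light; append a record to PMap
    at every collision starting from the SECOND collision of each photon."""
    rng = random.Random(seed)
    PMap = []
    for _ in range(Npt):
        # emission point on the rectangular area light (ceiling, y = 1)
        pos = [rng.uniform(-0.5, 0.5), 1.0, rng.uniform(-0.5, 0.5)]
        d = [0.0, -1.0, 0.0]            # emitted straight down (simplified)
        PW = 1.0 / Npt                  # each photon carries equal power
        for bounce in range(1, max_bounces + 1):
            # the only geometry: floor plane y=0 and ceiling plane y=1
            plane_y = 0.0 if d[1] < 0.0 else 1.0
            t = (plane_y - pos[1]) / d[1]
            hit = [pos[0] + t * d[0], plane_y, pos[2] + t * d[2]]
            if bounce >= 2:             # skip the first (direct) collision
                PMap.append({"PPos": tuple(hit), "PVi": tuple(d), "PW": PW})
            if rng.random() > albedo:   # Russian roulette termination
                break
            # diffuse bounce: cosine-weighted hemisphere above the hit plane
            up = 1.0 if plane_y == 0.0 else -1.0
            phi = rng.uniform(0.0, 2.0 * math.pi)
            r = math.sqrt(rng.random())
            d = [r * math.cos(phi), up * math.sqrt(max(0.0, 1.0 - r * r)),
                 r * math.sin(phi)]
            pos = hit
    return PMap
```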
2) Compute the visible scene point corresponding to each pixel on the virtual pixel plane of each virtual camera in the virtual camera array containing Ncamr × Ncamc virtual cameras. The concrete method is as follows:
Step 201: Create a list Ltvspt in the memory of the computer; each element of Ltvspt stores one variable of type TVSPT. Initialize Ltvspt to be empty.
Step 202: For each virtual camera A003 in the virtual camera array containing Ncamr × Ncamc virtual cameras, do the following:
According to the viewpoint position, viewport orientation, field of view, and resolution of virtual camera A003, use ray casting to shoot, from the viewpoint position of A003, a ray A004 through the center of each pixel on the virtual pixel plane of A003; the rays A004 correspond one-to-one to the pixels on the virtual pixel plane of A003. For the ray A004 corresponding to each pixel on the virtual pixel plane of A003, perform the following operations:
Determine whether ray A004 intersects the geometric objects of the 3D scene. If it does, further execute the following two sub-steps:
Step 202-1: Compute the intersection point A005 of ray A004 with the scene geometry that lies closest to the viewpoint position of virtual camera A003; A005 is a visible scene point. Create a variable A006 of type TVSPT; A006 corresponds to the unique ray A004. Assign the position of intersection point A005 to the member variable vsPos of A006; assign the surface normal vector of the geometry at A005 to the member variable vsNrm of A006; assign the number of virtual camera A003 in the virtual camera array to the member variable nCam of A006; assign the row number of the pixel corresponding to ray A004 on the virtual pixel plane of A003 to the member variable vnRow of A006; assign the column number of the pixel corresponding to ray A004 on the virtual pixel plane of A003 to the member variable vnCol of A006; and set the member variable vsL of A006 to 0.
Step 202-2: Add the variable A006 to the list Ltvspt.
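Steps 201-202 can be sketched as follows, with the TVSPT record reduced to a plain dict and the scene reduced to a single sphere. The camera follows the Figure 2 construction (eye at E, pixel plane at unit distance along the view direction), simplified here to an axis-aligned view looking down −z; all names other than the TVSPT member names are invented for the sketch.

```python
import math

def ray_sphere(origin, d, center, radius):
    """Smallest t > 0 where origin + t*d hits the sphere, else None."""
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(v * v for v in d)
    b = 2.0 * sum(oc[i] * d[i] for i in range(3))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    sq = math.sqrt(disc)
    for t in ((-b - sq) / (2.0 * a), (-b + sq) / (2.0 * a)):
        if t > 1e-6:
            return t
    return None

def visible_scene_points(nCam, eye, rows, cols, fov_deg, center, radius):
    """Step 202 sketch: one ray per pixel center; each hit yields a TVSPT dict."""
    Ltvspt = []
    half = math.tan(math.radians(fov_deg) / 2.0)
    for vnRow in range(rows):
        for vnCol in range(cols):
            # pixel center on a plane at unit distance along -z
            px = (2.0 * (vnCol + 0.5) / cols - 1.0) * half
            py = (1.0 - 2.0 * (vnRow + 0.5) / rows) * half
            d = [px, py, -1.0]
            t = ray_sphere(eye, d, center, radius)
            if t is None:
                continue        # ray misses the scene geometry
            hit = [eye[i] + t * d[i] for i in range(3)]
            n = [(hit[i] - center[i]) / radius for i in range(3)]
            Ltvspt.append({"vsPos": tuple(hit), "vsNrm": tuple(n),
                           "nCam": nCam, "vnRow": vnRow, "vnCol": vnCol,
                           "vsL": 0.0, "vsQ": None, "vsV": 0})
    return Ltvspt
```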
3) Compute the radiance of the light scattered through each visible scene point into the corresponding virtual camera. The concrete method is as follows:
Step 301: For each element B001 in the list Ltvspt, do the following:
Generate a random light source sample point B002 on the area light source according to a uniform distribution. Assign the position of B002 to the member variable vsQ of the TVSPT variable stored in element B001. Determine whether the segment B003, from the position of B002 to the position indicated by the member variable vsPos of the TVSPT variable stored in B001, intersects the geometric objects of the 3D scene. If it does, set the member variable vsV of the TVSPT variable stored in B001 to 0; otherwise set it to 1.
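Step 301 can be sketched as follows, with the occluding scene geometry reduced to a single sphere for the shadow test. A rectangular area light lying in a horizontal plane is assumed; all names other than the TVSPT members are invented for the sketch.

```python
import random

def segment_hits_sphere(a, b, center, radius):
    """True if the open segment a -> b intersects the sphere (occlusion)."""
    d = [b[i] - a[i] for i in range(3)]
    oc = [a[i] - center[i] for i in range(3)]
    A = sum(v * v for v in d)
    B = 2.0 * sum(oc[i] * d[i] for i in range(3))
    C = sum(v * v for v in oc) - radius * radius
    disc = B * B - 4.0 * A * C
    if disc < 0.0:
        return False
    sq = disc ** 0.5
    for t in ((-B - sq) / (2.0 * A), (-B + sq) / (2.0 * A)):
        if 1e-6 < t < 1.0 - 1e-6:   # hit strictly inside the segment
            return True
    return False

def sample_light_visibility(B001, light_rect, occluder, rng):
    """Step 301 sketch: pick B002 uniformly on the light; set vsQ and vsV."""
    (x0, x1), y, (z0, z1) = light_rect      # rectangle in the plane y = const
    B002 = (rng.uniform(x0, x1), y, rng.uniform(z0, z1))
    B001["vsQ"] = B002
    blocked = segment_hits_sphere(B002, B001["vsPos"], *occluder)
    B001["vsV"] = 0 if blocked else 1
```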
Step 302: Using the value of the member variable vsPos of each TVSPT variable as the key, store the TVSPT variables held by all elements of the list Ltvspt in a kd-tree spatial data structure C001.
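Step 302 and the COND1 query of step 303-1 rely on a kd-tree radius search over the vsPos keys. A minimal kd-tree sketch (build plus fixed-radius search, verified against brute force in the usage below) might look like this; a production renderer would use an optimized implementation, and the interface here is an invention of the sketch.

```python
def kd_build(points, depth=0):
    """points: list of (pos, payload) pairs. Returns nested tuples or None."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[0][axis])
    m = len(points) // 2
    return (points[m], axis,
            kd_build(points[:m], depth + 1),
            kd_build(points[m + 1:], depth + 1))

def kd_radius(node, q, r, out):
    """Collect payloads of all stored points with |pos - q| < r (COND1)."""
    if node is None:
        return out
    (pos, payload), axis, left, right = node
    if sum((pos[i] - q[i]) ** 2 for i in range(3)) < r * r:
        out.append(payload)
    diff = q[axis] - pos[axis]
    near, far = (left, right) if diff < 0 else (right, left)
    kd_radius(near, q, r, out)
    if abs(diff) < r:               # splitting plane is within query range
        kd_radius(far, q, r, out)
    return out
```

As a check, the radius query can be compared with a brute-force scan over the same points; both must return the same element set.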
Step 303: For each element B001 in the list Ltvspt, execute the following sub-steps:
Step 303-1: Create a list C002 in computer memory; each element of C002 stores one variable of type TVSPT. Initialize C002 to be empty. From the kd-tree spatial data structure C001, find all TVSPT variables satisfying condition COND1, and add them to the list C002. Condition COND1 is: the distance from the position indicated by the member variable vsPos of a TVSPT variable stored in C001 to the position indicated by the member variable vsPos of the TVSPT variable stored in B001 is less than Td.
Step 303-2: For each element C003 in the list C002, do the following:
Step 303-2-1: Let Vs denote the vector indicated by the member variable vsNrm of the TVSPT variable stored in C003, and let Vr denote the vector indicated by the member variable vsNrm of the TVSPT variable stored in B001. If (Vs·Vr)/(|Vs|·|Vr|) is less than Tv, delete element C003 from the list C002 and go to Step 303-2-2; here |Vs| denotes the length of Vs and |Vr| the length of Vr. Let P1 denote the position indicated by the member variable vsPos of the TVSPT variable stored in C003, let P2 denote the position indicated by the member variable vsPos of the TVSPT variable stored in B001, and let Ql denote the position indicated by the member variable vsQ of the TVSPT variable stored in C003. Let Vl1 denote the vector from Ql to P1 and Vl2 the vector from Ql to P2. If (Vl1·Vl2)/(|Vl1|·|Vl2|) is less than Tl, delete element C003 from the list C002; here |Vl1| denotes the length of Vl1 and |Vl2| the length of Vl2.
Step 303-2-2: The operations on element C003 end.
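The two similarity tests of step 303-2-1 can be sketched as a filter over the neighbor list C002: a neighbor survives only if both its surface normal and its light direction are close enough to those of the point B001 being shaded (cosines at least Tv and Tl respectively). Names other than the TVSPT members are invented for the sketch.

```python
def _cos(u, v):
    """Cosine of the angle between two 3D vectors."""
    dot = sum(u[i] * v[i] for i in range(3))
    lu = sum(x * x for x in u) ** 0.5
    lv = sum(x * x for x in v) ** 0.5
    return dot / (lu * lv)

def filter_neighbors(C002, B001, Tv, Tl):
    """Keep only neighbors whose normal and light direction agree with B001."""
    kept = []
    P2 = B001["vsPos"]
    Vr = B001["vsNrm"]
    for C003 in C002:
        if _cos(C003["vsNrm"], Vr) < Tv:        # normal similarity test
            continue
        Ql, P1 = C003["vsQ"], C003["vsPos"]
        Vl1 = [P1[i] - Ql[i] for i in range(3)]
        Vl2 = [P2[i] - Ql[i] for i in range(3)]
        if _cos(Vl1, Vl2) < Tl:                 # light-direction test
            continue
        kept.append(C003)
    return kept
```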
Step 303-3: Create a list C004 in the memory of the computer; each element of C004 stores one variable of type TALSPT. Initialize C004 to be empty. For each element C005 in the list C002, do the following:
Create a variable C006 of type TALSPT. Assign the value of the member variable vsQ of the TVSPT variable stored in C005 to the member variable lsPos of C006, and assign the value of the member variable vsV of the TVSPT variable stored in C005 to the member variable lsV of C006. Add the variable C006 to the list C004.
Step 303-4: Let NC004 denote the number of elements of the list C004. If NC004 is less than Nals, generate Nals − NC004 random light source sample points C007 on the area light source according to a uniform distribution, and create Nals − NC004 variables C008 of type TALSPT in the memory of the computer, with the sample points C007 and the variables C008 in one-to-one correspondence. Assign the position of each sample point C007 to the member variable lsPos of the corresponding variable C008. Determine whether the segment from each sample point C007 to the position indicated by the member variable vsPos of the TVSPT variable stored in B001 intersects the geometric objects of the 3D scene; if it does, set the member variable lsV of the variable C008 corresponding to C007 to 0, otherwise set it to 1.
Step 303-5: Let VSPOINT denote the position indicated by the member variable vsPos of the TVSPT variable stored in B001, and let NCam denote the number indicated by the member variable nCam of that variable. According to the photon map PMap and the values of the member variables vsPos and vsNrm of the TVSPT variable stored in B001, use the final gathering technique to compute the radiance D001 of the illumination emitted by the area light source that is first scattered by other 3D scene points to the position VSPOINT and then scattered at VSPOINT into the NCam-th virtual camera. Use the values of the member variable lsPos of all TALSPT variables stored in the list C004 to determine the light source sample points required for the Monte Carlo estimation of direct illumination, and use the values of the member variable lsV of those variables as approximate visibilities of the corresponding light source sample points from VSPOINT; with these, use Monte Carlo direct illumination estimation to compute the radiance D002 of the illumination emitted by the area light source that is scattered directly at VSPOINT into the NCam-th virtual camera. Assign the sum of radiance D001 and radiance D002 to the member variable vsL of the TVSPT variable stored in B001.
4) Generate the light field projection images from the elements of the list Ltvspt. The concrete method is as follows:
For each virtual camera A003 in the virtual camera array containing Ncamr × Ncamc virtual cameras, do the following:
Step 401: Create, in the memory of the computer, a two-dimensional array ILLUMIN with Npixr rows and Npixc columns, where Npixr is the number of pixel rows and Npixc the number of pixel columns on the virtual pixel plane of virtual camera A003. The elements of ILLUMIN correspond one-to-one to the pixels on the virtual pixel plane of A003; ILLUMIN stores the radiance scattered into A003 through the visible scene points corresponding to those pixels. Set every element of ILLUMIN to 0. Compute the number IDCam of virtual camera A003 in the virtual camera array. Create a list D003 in the memory of the computer and initialize it to be empty. Put into D003 all elements D004 of the list Ltvspt that satisfy condition COND2, where COND2 is: the member variable nCam of the TVSPT variable stored in D004 equals the number IDCam. For each element D005 of the list D003, do the following:
Let IdR denote the row number indicated by the member variable vnRow of the TVSPT variable stored in D005, and let IdC denote the column number indicated by the member variable vnCol of that variable. Assign the value of the member variable vsL of the TVSPT variable stored in D005 to the element in row IdR, column IdC of the array ILLUMIN.
Step 402: Convert the radiance value stored in each element of array ILLUMIN into a pixel color value of the image of the 3D scene captured by virtual camera A003, and save the pixel color values to the image file corresponding to virtual camera A003; this image file stores one light field projection image.
The positive effect of the invention is that illumination computation results can be reused spatially while rendering the light field projection images of a realistic virtual 3D scene, which increases the speed at which the light field projection images captured by the virtual camera array are rendered.
Description of Drawings
FIG. 1 is a schematic diagram of a three-dimensional scene illuminated by an area light source.
FIG. 2 is a schematic diagram of the virtual pixel plane.
Detailed Description
To make the features and advantages of the method clearer, the method is described further below with reference to a specific embodiment. The embodiment considers a virtual 3D scene of a plaster statue placed in a closed room, with an area light source on the ceiling; all geometric object surfaces in the scene are diffuse. The computer system uses an Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20 GHz, Kingston 8 GB DDR3 1333 memory, a Buffalo HD-CE 1.5TU2 disk, and an NVIDIA Quadro K2000 graphics card; the operating system is Windows 7, and the programming tool is VC++ 2010.
First, a photon map is created with photon tracing; next, the visible scene points corresponding to all virtual cameras in the virtual camera array are computed and stored in a single list; then the global illumination values of all visible scene points in the list are computed. The concrete implementation steps are as follows:
A data structure TVSPT is provided for storing the data related to a visible scene point. TVSPT contains eight member variables: vsPos, the position of the visible scene point; vsNrm, the normal vector of the geometric object surface at that position; nCam, the index of the virtual camera corresponding to the visible scene point; vnRow, the row number of the pixel on the virtual pixel plane of that virtual camera; vnCol, the column number of that pixel; vsL, the radiance of the light scattered from the visible scene point into the corresponding virtual camera; vsQ, the position of the light source sample point corresponding to the visible scene point; and vsV, the light source visibility corresponding to the visible scene point.
A data structure TALSPT is provided for storing the data related to a light source sample point. TALSPT contains two member variables: lsPos, the position of the light source sample point, and lsV, the light source visibility.
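As a concrete illustration, the two record types defined above can be sketched as Python data classes. This is a sketch only — the patent's own implementation used VC++ 2010 — and the field names simply mirror the member variables listed above:

```python
from dataclasses import dataclass

Vec3 = tuple  # a 3D point or vector represented as an (x, y, z) tuple


@dataclass
class TVSPT:
    """Record for one visible scene point (eight members, as defined above)."""
    vsPos: Vec3 = (0.0, 0.0, 0.0)  # position of the visible scene point
    vsNrm: Vec3 = (0.0, 0.0, 1.0)  # surface normal at that position
    nCam: int = 0                  # index of the corresponding virtual camera
    vnRow: int = 0                 # pixel row on that camera's virtual pixel plane
    vnCol: int = 0                 # pixel column on that plane
    vsL: float = 0.0               # radiance scattered from the point into the camera
    vsQ: Vec3 = (0.0, 0.0, 0.0)    # associated light source sample point position
    vsV: int = 0                   # light source visibility (1 visible, 0 occluded)


@dataclass
class TALSPT:
    """Record for one light source sample point (two members)."""
    lsPos: Vec3 = (0.0, 0.0, 0.0)  # sample point position on the area light
    lsV: int = 0                   # visibility of the sample point (1 or 0)
```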
1) Create a photon map using photon tracing, as follows:
First create in the computer's memory a photon map PMap that contains no photon records. Using photon tracing, emit Npt photons from the area light source into the 3D scene and trace the process in which these Npt photons collide with geometric objects and are scattered while propagating through the scene. For each photon A002, starting from its second collision with a geometric object of the 3D scene, add one photon record to PMap for every collision. Each photon record has three components: the collision position PPos of the photon with the scene geometry, the normalized incident direction vector PVi of the photon at PPos, and the incident power PW of the photon at PPos.
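The photon tracing step can be sketched as below. The functions `emit`, `intersect`, and `scatter` are hypothetical scene callbacks (the patent does not prescribe an interface), and path termination is capped at a fixed bounce depth rather than decided by Russian roulette:

```python
def trace_photons(n_pt, emit, intersect, scatter, max_bounces=5):
    """Build a photon map PMap as described above: emit n_pt photons from the
    area light, follow their collisions with scene geometry, and record a
    (PPos, PVi, PW) entry for every collision from the second one onward
    (first hits carry direct light, which Step 303-5 estimates separately).

    emit()               -> (origin, direction, power) of a new photon
    intersect(o, d)      -> (hit_position, hit_normal) or None if the ray escapes
    scatter(pos, nrm, d) -> new outgoing direction, or None if absorbed
    """
    pmap = []
    for _ in range(n_pt):
        origin, direction, power = emit()
        for bounce in range(max_bounces):
            hit = intersect(origin, direction)
            if hit is None:
                break
            pos, nrm = hit
            if bounce >= 1:  # record only from the second collision onward
                pmap.append({"PPos": pos, "PVi": direction, "PW": power})
            new_dir = scatter(pos, nrm, direction)
            if new_dir is None:
                break
            origin, direction = pos, new_dir
    return pmap
```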
2) Compute the visible scene point corresponding to each pixel on the virtual pixel plane of each virtual camera in the virtual camera array of Ncamr × Ncamc virtual cameras, as follows:
Step 201: Create a list Ltvspt in the computer's memory, each element of which stores a TVSPT variable; let Ltvspt be empty.
Step 202: For each virtual camera A003 in the virtual camera array of Ncamr × Ncamc virtual cameras, do the following:
According to the viewpoint position, viewport orientation, field of view, and resolution of virtual camera A003, use ray casting to emit from the viewpoint of A003 a ray A004 through the center of each pixel on the virtual pixel plane of A003; the rays A004 correspond one-to-one with the pixels on that plane. For the ray A004 corresponding to each pixel on the virtual pixel plane of A003, execute the following operations:
Determine whether ray A004 intersects a geometric object of the 3D scene; if it does, further execute the following two sub-steps:
Step 202-1: Compute the intersection point A005 of ray A004 with the geometric objects of the 3D scene that lies closest to the viewpoint of virtual camera A003; A005 is a visible scene point. Create a TVSPT variable A006; A006 corresponds to one unique ray A004. Assign the position of intersection point A005 to the vsPos member of A006; assign the surface normal of the geometric object at A005 to the vsNrm member of A006; assign the index of virtual camera A003 in the virtual camera array to the nCam member of A006; assign the row number of the pixel on the virtual pixel plane of A003 corresponding to ray A004 to the vnRow member of A006; assign the column number of that pixel to the vnCol member of A006; and assign 0 to the vsL member of A006 (the radiance scattered from the visible scene point into the corresponding virtual camera);
Step 202-2: Add variable A006 to list Ltvspt.
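Steps 201–202 can be sketched as below. The camera interface (`cam.ray`, `cam.rows`, `cam.cols`) and the nearest-hit query `intersect_nearest` are assumed names, and plain dictionaries carrying the TVSPT member names stand in for the TVSPT variables to keep the sketch self-contained:

```python
def compute_visible_scene_points(cameras, intersect_nearest):
    """Steps 201-202: for every camera in the array, cast a ray through each
    pixel center and store the nearest hit as one visible-scene-point record.

    cameras           : list of objects with .rows, .cols and
                        .ray(row, col) -> (origin, direction)
    intersect_nearest : (origin, direction) -> (position, normal) or None
    """
    ltvspt = []                                            # the list Ltvspt
    for n_cam, cam in enumerate(cameras):                  # camera index nCam
        for row in range(cam.rows):
            for col in range(cam.cols):
                origin, direction = cam.ray(row, col)      # ray A004
                hit = intersect_nearest(origin, direction)
                if hit is None:
                    continue                               # ray misses the scene
                pos, nrm = hit                             # intersection A005
                ltvspt.append({                            # variable A006
                    "vsPos": pos, "vsNrm": nrm,
                    "nCam": n_cam, "vnRow": row, "vnCol": col,
                    "vsL": 0.0,
                })
    return ltvspt
```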
3) Compute the radiance of the light scattered through each visible scene point into the corresponding virtual camera, as follows:
Step 301: For each element B001 in list Ltvspt, execute the following operations:
Generate a random light source sample point B002, uniformly distributed on the area light source. Assign the position of B002 to the vsQ member of the TVSPT variable stored in element B001. Determine whether the line segment B003, running from the position of B002 to the position given by the vsPos member of the TVSPT variable stored in B001, intersects a geometric object of the 3D scene; if it does, assign 0 to the vsV member of the TVSPT variable stored in B001, otherwise assign 1 to it;
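A minimal sketch of Step 301, assuming a rectangular area light (the patent only requires a uniform distribution on the area light source) and a scene-supplied occlusion query `segment_blocked`:

```python
import random

def sample_area_light(corner, edge_u, edge_v):
    """Uniform sample point B002 on a rectangular area light spanned by two
    edge vectors. (A rectangle is an assumption for the sketch; the patent
    only requires uniform sampling of the area light.)"""
    a, b = random.random(), random.random()
    return tuple(corner[i] + a * edge_u[i] + b * edge_v[i] for i in range(3))

def light_visibility(vs_pos, q, segment_blocked):
    """Step 301: vsV is 0 if the segment B003 from the light sample q to the
    visible scene point vs_pos intersects scene geometry, else 1.
    segment_blocked(p, q) -> bool is an assumed scene occlusion query."""
    return 0 if segment_blocked(vs_pos, q) else 1
```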
Step 302: Taking the value of the vsPos member (the position of the visible scene point) as the key, store the TVSPT variables held by all elements of list Ltvspt in a kd-tree spatial data structure C001;
Step 303: For each element B001 in list Ltvspt, execute the following sub-steps:
Step 303-1: Create a list C002 in the computer's memory, each element of which stores a TVSPT variable; let C002 be empty. Find in the kd-tree C001 all TVSPT variables that satisfy condition COND1, and add them to list C002. Condition COND1 is: the distance from the position given by the vsPos member of a TVSPT variable stored in C001 to the position given by the vsPos member of the TVSPT variable stored in element B001 is less than Td;
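The range query of Step 303-1 can be illustrated with a brute-force scan that has the same semantics as the kd-tree query; the result set is identical, only the query cost differs (O(n) here versus roughly O(log n + k) with the kd-tree C001):

```python
def radius_neighbors(ltvspt, b001, t_d):
    """Step 303-1: gather into C002 every record whose vsPos lies within
    distance Td of B001's vsPos (condition COND1). Comparison is done on
    squared distances to avoid the square root."""
    px, py, pz = b001["vsPos"]
    c002 = []
    for rec in ltvspt:
        qx, qy, qz = rec["vsPos"]
        if (qx - px) ** 2 + (qy - py) ** 2 + (qz - pz) ** 2 < t_d ** 2:
            c002.append(rec)
    return c002
```

Note that B001 itself satisfies COND1 (distance 0 < Td), matching the patent's definition, which does not exclude it.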
Step 303-2: For each element C003 in list C002, execute the following operations:
Step 303-2-1: Let Vs be the vector given by the vsNrm member (the normal of the geometric object surface at the visible scene point) of the TVSPT variable stored in element C003, and let Vr be the vector given by the vsNrm member of the TVSPT variable stored in element B001. If (Vs·Vr)/(|Vs|·|Vr|) is less than Tv, delete element C003 from list C002 and go to Step 303-2-2; |Vs| denotes the length of Vs and |Vr| the length of Vr. Let P1 be the position given by the vsPos member of the TVSPT variable stored in element C003, let P2 be the position given by the vsPos member of the TVSPT variable stored in element B001, and let Ql be the position given by the vsQ member (the light source sample point of the visible scene point) of the TVSPT variable stored in element C003. Let Vl1 be the vector from Ql to P1 and Vl2 the vector from Ql to P2. If (Vl1·Vl2)/(|Vl1|·|Vl2|) is less than Tl, delete element C003 from list C002; |Vl1| denotes the length of Vl1 and |Vl2| the length of Vl2;
Step 303-2-2: The operations on element C003 end here.
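The filtering in Step 303-2 amounts to two cosine tests against the thresholds Tv and Tl; a sketch, again with dictionaries standing in for the TVSPT variables:

```python
def filter_similar(c002, b001, t_v, t_l):
    """Steps 303-2-1 / 303-2-2: keep a neighbour C003 only if (a) the cosine
    between its normal and B001's normal is at least Tv, and (b) the cosine
    between the directions from C003's light sample Ql to the two points is
    at least Tl. Neighbours failing either test are dropped, since their
    stored light-source visibility is unlikely to transfer to B001."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cosine(a, b): return dot(a, b) / (dot(a, a) ** 0.5 * dot(b, b) ** 0.5)
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))

    kept = []
    for c003 in c002:
        if cosine(c003["vsNrm"], b001["vsNrm"]) < t_v:
            continue                       # normals differ too much
        ql = c003["vsQ"]
        vl1 = sub(c003["vsPos"], ql)       # vector Ql -> P1
        vl2 = sub(b001["vsPos"], ql)       # vector Ql -> P2
        if cosine(vl1, vl2) < t_l:
            continue                       # light directions differ too much
        kept.append(c003)
    return kept
```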
Step 303-3: Create a list C004 in the computer's memory, each element of which stores a TALSPT variable; let C004 be empty. For each element C005 in list C002, execute the following operations:
Create a TALSPT variable C006. Assign the value of the vsQ member of the TVSPT variable stored in element C005 to the lsPos member of C006, and the value of the vsV member of that TVSPT variable to the lsV member of C006. Add C006 to list C004;
Step 303-4: Let NC004 be the number of elements in list C004. If NC004 is less than Nals, generate Nals − NC004 random light source sample points C007, uniformly distributed on the area light source, and at the same time create in the computer's memory Nals − NC004 TALSPT variables C008, in one-to-one correspondence with the sample points C007. Assign the position of each sample point C007 to the lsPos member of its corresponding variable C008. Determine whether the line segment from each sample point C007 to the position given by the vsPos member of the TVSPT variable stored in element B001 intersects a geometric object of the 3D scene; if it does, assign 0 to the lsV member of the variable C008 corresponding to that sample point C007, otherwise assign 1 to it. The variables C008 are then added to list C004, so that Step 303-5 always operates on Nals samples;
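A sketch of the padding in Step 303-4; `sample_light` and `segment_blocked` are assumed callbacks, and appending the new records to C004 is made explicit here because Step 303-5 consumes all of C004:

```python
def pad_light_samples(c004, b001, n_als, sample_light, segment_blocked):
    """Step 303-4: if fewer than Nals reusable samples survived the filter,
    top list C004 up with fresh uniform samples C007 on the area light, each
    with a freshly traced shadow segment to B001's position.

    sample_light()        -> a uniform random point on the area light
    segment_blocked(p, q) -> True if the segment p-q hits scene geometry
    """
    while len(c004) < n_als:
        q = sample_light()                                 # sample point C007
        vis = 0 if segment_blocked(b001["vsPos"], q) else 1
        c004.append({"lsPos": q, "lsV": vis})              # variable C008
    return c004
```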
Step 303-5: Let VSPOINT be the position given by the vsPos member of the TVSPT variable stored in element B001, and let NCam be the index given by the nCam member of that variable. According to the photon map PMap and the values of the vsPos and vsNrm members of the TVSPT variable stored in B001, use the final gathering technique to compute the radiance D001 of the illumination emitted by the area light source that is first scattered by other 3D scene points to the position VSPOINT and then scattered at VSPOINT into the NCam-th virtual camera. Use the lsPos member values of all TALSPT variables stored in list C004 to determine the light source sample points required by the Monte Carlo direct illumination estimate, and use the lsV member values of those variables as approximations of the visibility of the corresponding sample points from VSPOINT; with the Monte Carlo direct illumination estimation technique, compute the radiance D002 of the illumination emitted by the area light source that is scattered directly at VSPOINT into the NCam-th virtual camera. Assign the sum of radiance D001 and radiance D002 to the vsL member (the radiance scattered from the visible scene point into the corresponding virtual camera) of the TVSPT variable stored in element B001;
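Of the two terms combined in Step 303-5, the direct term D002 can be sketched with the standard Monte Carlo area-light estimator for a diffuse surface, using the reused (or freshly traced) visibilities stored in C004. The diffuse albedo `rho`, constant light radiance `le`, light area, and light normal are assumed parameters; the patent itself only prescribes "the Monte Carlo direct illumination estimation technique", and the final-gathering term D001 is not shown:

```python
import math

def direct_illumination(b001, c004, rho, le, area, light_normal):
    """Step 303-5, direct term D002, estimated as

        D002 ~ (1/N) * sum_q (rho/pi) * le * lsV(q) * cos_p * cos_q * A / d^2

    over the N = len(c004) light samples, where lsV(q) is the stored
    visibility, cos_p/cos_q are the cosines at the surface and at the light,
    A the light area, and d the sample-to-point distance."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))

    if not c004:
        return 0.0
    p, n = b001["vsPos"], b001["vsNrm"]
    total = 0.0
    for s in c004:
        if s["lsV"] == 0:
            continue                               # sample occluded from p
        w = sub(s["lsPos"], p)                     # p -> light sample
        d2 = dot(w, w)
        d = math.sqrt(d2)
        wn = tuple(x / d for x in w)               # normalized direction
        cos_p = max(0.0, dot(n, wn))               # cosine at the surface
        cos_q = max(0.0, dot(light_normal, tuple(-x for x in wn)))  # at light
        total += (rho / math.pi) * le * cos_p * cos_q * area / d2
    return total / len(c004)                       # D002; D001 is added to it
```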
4) Generate the light field projection images from the elements of list Ltvspt, as follows:
For each virtual camera A003 in the virtual camera array of Ncamr × Ncamc virtual cameras, do the following:
Step 401: Create in the computer's memory a two-dimensional array ILLUMIN of Npixr rows and Npixc columns, where Npixr is the number of pixel rows and Npixc the number of pixel columns on the virtual pixel plane of virtual camera A003. The elements of ILLUMIN correspond one-to-one with the pixels on the virtual pixel plane of virtual camera A003; ILLUMIN stores the radiance scattered into virtual camera A003 by the visible scene points corresponding to those pixels. Assign 0 to every element of ILLUMIN. Compute the index IDCam of virtual camera A003 in the virtual camera array. Create a list D003 in the computer's memory and let it be empty. Put into list D003 every element D004 of list Ltvspt that satisfies condition COND2; condition COND2 is: the virtual camera index nCam of the visible scene point of the TVSPT variable stored in element D004 equals IDCam. For each element D005 of list D003, do the following:
Let IdR be the row number given by the vnRow member variable of the TVSPT variable stored in element D005 (the row of the pixel, on the virtual pixel plane of the corresponding virtual camera, associated with the visible scene point), and let IdC be the column number given by its vnCol member variable. Assign the value of the vsL member variable of the TVSPT variable stored in element D005 (the radiance scattered from the visible scene point into the corresponding virtual camera) to the element in row IdR, column IdC of array ILLUMIN.
Step 402: Convert the radiance value stored in each element of array ILLUMIN into a pixel color value of the image of the 3D scene captured by virtual camera A003, and save the pixel color values to the image file corresponding to virtual camera A003; this image file stores one light field projection image.
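Steps 401–402 can be sketched as follows; the conversion of radiance values to pixel colors in an image file (tone mapping and quantization) is left to the caller, since the patent does not prescribe it:

```python
def assemble_projection_image(ltvspt, id_cam, n_rows, n_cols):
    """Steps 401-402: scatter the per-point radiances belonging to camera
    id_cam (condition COND2) into the Npixr x Npixc array ILLUMIN; pixels
    whose ray hit no geometry keep the initial value 0."""
    illumin = [[0.0] * n_cols for _ in range(n_rows)]      # array ILLUMIN
    for rec in ltvspt:                                     # list D003 via COND2
        if rec["nCam"] != id_cam:
            continue
        illumin[rec["vnRow"]][rec["vnCol"]] = rec["vsL"]
    return illumin
```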
In this embodiment, Npt = 1000, Ncamr = 5, Ncamc = 5, Tv = 0.92, Tl = 0.92, and Nals = 20; Td is one-twentieth of the radius of the smallest sphere that encloses all geometric objects of the 3D scene; the virtual pixel plane of every virtual camera has 1080 pixel rows and 1440 pixel columns.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201711175280.XA (granted as CN107909647B) | 2017-11-22 | 2017-11-22 | Light field projection image rendering method for realistic virtual 3D scene based on spatial multiplexing |
| Publication Number | Publication Date |
|---|---|
| CN107909647A | 2018-04-13 |
| CN107909647B | 2020-09-15 |
| Publication number | Publication date |
|---|---|
| CN107909647A (en) | 2018-04-13 |
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| 2022-03-25 | TR01 | Transfer of patent right | Patentee after: Jilin Kasite Technology Co., Ltd. (Changchun, Jilin). Patentee before: CHANGCHUN University OF SCIENCE AND TECHNOLOGY (Changchun, Jilin). |