CN100594519C - A Method of Real-time Generating Augmented Reality Environment Illumination Model Using Spherical Panoramic Camera - Google Patents

A Method of Real-time Generating Augmented Reality Environment Illumination Model Using Spherical Panoramic Camera

Info

Publication number
CN100594519C
Authority
CN
China
Prior art keywords
model
augmented reality
light source
point
panoramic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200810101290A
Other languages
Chinese (zh)
Other versions
CN101246600A (en)
Inventor
吴威
赵旭
张淑军
周忠
李艳丽
赵沁平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN200810101290A
Publication of CN101246600A
Application granted
Publication of CN100594519C
Legal status: Expired - Fee Related
Anticipated expiration

Abstract

The present invention provides a method of generating an augmented reality environment illumination model in real time with a spherical panoramic camera, comprising: (1) placing the spherical panoramic camera in the augmented reality environment at the observation viewpoint of the virtual world, and collecting panoramic images in real time; (2) after the panoramic image of step (1) is collected, applying texture maps carrying light attributes to the 3D reconstruction models of real-world objects; (3) after the panoramic image of step (1) is collected, computing, for virtual-world objects, the light intensity values of the collected panorama, performing illumination-model pattern matching, and generating the virtual-world illumination model by calculation; (4) computing the illumination model of the augmented reality scene while accounting for the mutual local influence of the reconstruction models and the virtual object models. The invention generates the illumination model in real time in complex light environments and supports real-time interaction in augmented reality scenes.

Description

A Method of Real-time Generating Augmented Reality Environment Illumination Model Using Spherical Panoramic Camera

Technical Field

The invention belongs to the field of augmented reality, and in particular relates to a method for the real-time generation of photorealistic computer graphics.

Background Art

An augmented reality system is a human-computer interaction system that superimposes or fuses a virtual world onto the real world. Since the objects the system acts upon include real-world objects, the system must generate photorealistic geometric models in real time. The illumination model is critical to generating photorealistic graphics.

Building an illumination model means using a computer to simulate the physical process of natural illumination according to the laws of optics. Depending on whether the reflective properties of objects other than the light source are considered, illumination models are divided into local and global models. A local illumination model ignores the effect of the surroundings on an object and considers only the direct illumination of the object's surface by the light source; a global illumination model also accounts for the influence of the surroundings on the scene surface. In general, the local illumination model is an idealization whose results differ considerably from the real world, so it is unsuitable for augmented reality; global illumination models are normally used to generate photorealistic graphics.

The light source is the core of an illumination model. By geometric shape, light sources can be classified as point, line, area, and volume sources. Among these, point-source computation is the simplest and forms the basis for computing the other types.

Once the light sources are determined, the illumination model is built. There are currently roughly three classes of methods for building illumination models: mathematical simulation; inverse derivation of the illumination model from the shading of three-dimensional markers; and image-based illumination model generation.

Mathematical simulation is the oldest and most widely used way of generating illumination models. Common methods include the Lambert illumination model, the Phong illumination model, the Whitted illumination model, ray tracing, and radiosity. However, these methods cannot satisfy the requirement of generating the complex light-source models of augmented reality environments in real time. An augmented reality system would have to register every light source actually present in the real environment one by one, which is complicated, labor-intensive, and limited: every change of environment requires re-registration, and because many factors influence the scene lighting, complete registration is in practice impossible.

In the article "Research on Establishing the Illumination Model of an Augmented Reality System", Zhou Ya et al. proposed inversely deriving an illumination model from the shading of markers in the registration image, using the illumination-model calculation methods of computer graphics. This fairly effectively addresses the complexity of lighting factors in augmented reality systems, but the method depends on the markers in the registration image, introduces lighting errors when the viewpoint changes, and cannot satisfy the illumination requirements of objects of arbitrary position and material.

In the article "Research and Implementation of an Image-Based Illumination Model", Wang Jun et al. proposed an image-based generation method that treats any visually perceived object as a light source, records the surrounding scene with a camera, and converts the images in turn into radiance maps, panoramas, and photometric maps to illuminate the scene. However, this method suits only the construction and recovery of static-scene illumination models: real-world objects cannot move dynamically, so it does not meet the real-time human-computer interaction requirements of augmented reality systems.

In general, existing scene illumination model generation methods are computationally complex and labor-intensive, unsuitable for real-time computation in complex lighting environments, and unable to support real-time human-computer interaction in augmented reality systems.

Summary of the Invention

The technical problem solved by the present invention is to overcome the computational complexity of existing illumination-model generation methods and their inability to generate an augmented reality scene illumination model in real time, by providing a method that uses a panoramic camera to generate the augmented reality environment illumination model in real time and supports real-time human-computer interaction in augmented reality systems.

The present invention proposes a method of generating an augmented reality environment illumination model in real time with a panoramic camera, comprising the following steps:

(1) placing a spherical panoramic camera in the augmented reality environment at the observation viewpoint of the virtual world, and collecting panoramic images in real time;

(2) after the panoramic image of step (1) is collected, applying texture maps carrying light attributes to the 3D reconstruction models of real-world objects;

(3) after the panoramic image of step (1) is collected, computing, for virtual-world objects, the light intensity values of the collected panorama, performing illumination-model pattern matching, and generating the virtual-world illumination model by calculation;

(4) computing the illumination model of the augmented reality scene while accounting for the mutual local influence of the reconstruction models and the virtual object models.

The beneficial effects of the present invention are:

(1) The present invention overcomes the drawback of prior techniques that compute the illumination model from scene images captured statically by a camera; it generates the augmented reality environment illumination model in real time, providing lighting support for real-time dynamic interaction in the augmented reality environment.

(2) The method computes the illumination model differently for virtual objects and for real-world objects, and is simpler than existing mathematical illumination-model calculations. Prior techniques often assume parameters for the augmented reality virtual-world illumination model and adjust them iteratively to meet visual requirements; the present invention instead performs inverse calculation and analysis on panoramic images of the real scene and generates the virtual illumination model on that basis, improving the realism of augmented reality virtual-world rendering.

(3) During light-source pattern matching, beyond the point-source and natural-light modes, lighting modes can be defined manually to suit scene environments of varying complexity, giving the method good extensibility.

Brief Description of the Drawings

Fig. 1 shows the spatial relationship between the panoramic camera of the present invention and the augmented reality scene;

Fig. 2 is a flowchart of the method of the present invention for generating the augmented reality illumination model in real time from the panorama;

Fig. 3 is a schematic diagram of the conversion from panorama coordinates to the world coordinate system in the present invention.

Detailed Description of the Embodiments

Step 1: As shown in Fig. 1, the augmented reality scene is a cubic space in which people or other objects interact with the virtual objects of the augmented reality environment. A panoramic camera is placed at the observation viewpoint of the virtual world to capture panoramic video.

Step 2: Fig. 2 gives the flowchart for generating the illumination model by analyzing the panorama. The proposed method comprises six steps: panoramic image acquisition; texture mapping of the 3D reconstruction models; computing the panorama light intensity values; light-source pattern matching; computing the light-source positions and inputting the virtual-object model parameters; and computing the illumination model.

Step 2.1: Panoramic image acquisition. Panoramic video is captured with a spherical panoramic camera placed at the viewpoint of the virtual scene.

Step 2.2: Building on step 2.1, texture maps containing illumination information are applied to the 3D reconstruction models of the real world in the augmented reality scene. Object models in the augmented reality virtual world fall into two categories: virtual object models and 3D reconstruction models of the real world. For a reconstruction model, the illumination characteristics are determined by cropping texture images from the panoramic video stream and mapping them onto the model. First, background segmentation is performed on the real objects in the augmented reality scene to extract the foreground image, and the pixel information of the panorama is then mapped onto the surface of the reconstruction model in the virtual world.
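A minimal sketch of the background-segmentation step described above. The patent does not name a particular segmentation algorithm, so simple per-pixel differencing against a static background frame is assumed here purely for illustration:

```python
import numpy as np

def extract_foreground(frame, background, threshold=30):
    """Separate foreground pixels from a static background by per-pixel
    absolute difference (an illustrative stand-in for the patent's
    unspecified background-segmentation step)."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    mask = diff.sum(axis=-1) > threshold   # True where foreground
    foreground = np.zeros_like(frame)
    foreground[mask] = frame[mask]         # keep only foreground pixels
    return foreground, mask

# Tiny synthetic example: a 2x2 RGB background with one changed pixel.
background = np.zeros((2, 2, 3), dtype=np.uint8)
frame = background.copy()
frame[0, 0] = (200, 180, 160)              # a "real object" pixel
fg, mask = extract_foreground(frame, background)
```

The extracted foreground pixels are what would then be mapped, with their light attributes, onto the reconstruction model's surface.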

The usual virtual-world coordinate system is a three-dimensional Cartesian system, whereas the panoramic images collected by the panoramic camera are based on a spherical coordinate system, so the panorama coordinates must be converted into the virtual-world coordinate system.

As shown in Fig. 3, the panorama coordinate conversion formula follows from the spatial relationship between the panoramic camera and the scene in Fig. 1. Let the panorama coordinates be (u, v) and the side length of the cubic space be d; the conversion to world coordinates (x′, y′, z′) is as follows:

[Formula image C20081010129000061: spherical conversion from panorama coordinates (u, v) to intermediate coordinates (x, y, z)]

x′ = x,  y′ = y + d,  z′ = z + d
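The spherical-to-Cartesian part of the conversion survives only as a formula image, so the sketch below assumes a standard equirectangular mapping from (u, v) to a direction on the viewing sphere; only the final translation x′ = x, y′ = y + d, z′ = z + d is taken directly from the stated formula:

```python
import math

def panorama_to_world(u, v, width, height, d):
    """Map a panorama pixel (u, v) to world coordinates (x', y', z').

    Assumption: the panorama is equirectangular, so (u, v) maps to the
    spherical angles (theta, phi); the patent gives this step only as a
    figure. The translation by the cube side length d follows the
    stated formula.
    """
    theta = 2.0 * math.pi * u / width      # azimuth in [0, 2*pi)
    phi = math.pi * v / height             # polar angle in [0, pi]
    # Direction on the viewing sphere, scaled by the cube size.
    x = d * math.sin(phi) * math.cos(theta)
    y = d * math.sin(phi) * math.sin(theta)
    z = d * math.cos(phi)
    return x, y + d, z + d                 # (x', y', z')

x1, y1, z1 = panorama_to_world(0, 0, 1024, 512, d=2.0)
```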

Step 2.3: Building on step 2.1, the light intensity values of the collected panorama are computed. Let the panorama coordinates be (u, v) and the RGB values of the corresponding pixel be N_R, N_G, N_B; using the RGB-to-HSI conversion, the light intensity of the corresponding point is:

N_I = N_R + N_G + N_B
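The intensity computation can be sketched directly; note the patent's formula is a plain channel sum, not the normalized (R + G + B)/3 intensity of the usual HSI conversion:

```python
import numpy as np

def intensity_map(panorama):
    """Per-pixel light intensity N_I = N_R + N_G + N_B, as in the
    patent's RGB-to-intensity formula (an unnormalized channel sum)."""
    return panorama.astype(np.int32).sum(axis=-1)

# A 1x2 panorama: one saturated white pixel and one dark pixel.
pano = np.array([[[255, 255, 255], [10, 20, 30]]], dtype=np.uint8)
ni = intensity_map(pano)
```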

Step 2.4: Building on step 2.3, light-source pattern matching is performed. The light-source pattern means the number of light sources and their spatial distribution. First the panoramic intensity map is filtered against an intensity threshold (for example N = 220) and noise points are removed, yielding the panorama's light-source pattern. If there is exactly one point source, the environment is judged to be in single-point-source mode; if there are several point sources scattered apart from one another, multi-point-source mode; if there are a large number of evenly distributed point sources, natural-light mode. In addition, for special lighting environments the light-source pattern can be defined manually.
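The thresholding and qualitative classification rules above might be sketched as follows. The cutoffs `many` and `spread` are illustrative assumptions, since the patent states the multi-point and natural-light criteria only qualitatively:

```python
import numpy as np

def classify_light_pattern(intensity, threshold=660, many=20, spread=0.25):
    """Threshold the intensity map and classify the light-source pattern.

    `threshold` assumes the patent's per-channel example N = 220 applied
    to the three-channel sum; `many` and `spread` are illustrative.
    """
    bright = np.argwhere(intensity > threshold)   # candidate source pixels
    n = len(bright)
    if n == 0:
        return "none"
    if n == 1:
        return "single-point"
    # Spatial extent of bright pixels relative to image size.
    extent = np.ptp(bright, axis=0) / np.array(intensity.shape)
    if n >= many and extent.min() > spread:
        return "natural"                          # many, evenly spread
    return "multi-point"

img = np.zeros((8, 8), dtype=np.int32)
img[2, 3] = 800                                   # one bright spot
mode = classify_light_pattern(img)
```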

Step 2.5: Building on step 2.4, the positions of the matched light sources in the virtual world are computed and the virtual-object model parameters are input. For virtual object models in the augmented reality virtual world, the Phong illumination model is used: the light-source coordinates in the virtual world are first computed according to step 2, and the parameters of the virtual object model are input, including the object's diffuse reflection coefficient for ambient light, the diffuse reflection coefficient of the object surface, and the object's specular reflection coefficient. These are substituted into the Phong illumination formula:

I = k_a I_a + Σ_{i=1}^{M} f_i(d) I_{ni} [ k_d (N · L_i) + k_s (N · H_i)^n ]

where M is the total number of point sources in the scene; I is the light intensity at surface point (x, y); I_a is the incident ambient light intensity, i.e. the average intensity over all panorama pixels; I_{ni} and f_i(d) are the incident intensity of the i-th point source and its intensity attenuation factor; k_a is the diffuse reflection coefficient of the model surface for ambient light; k_d is the diffuse reflection coefficient of the model surface; k_s is the specular reflection coefficient of the model surface; n is the specular highlight exponent; L_i is the unit vector in the emission direction of the i-th point source; N is the unit surface normal at (x, y); H_i is the unit normal of the ideal mirror that reflects the incident light toward the observer; and V is the unit vector of the observer's line of sight.
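A transcription of this Phong formula for a single surface point. The halfway vector H_i is computed from L_i and V in the standard way, which is an assumption here since the patent gives H_i only as a figure; all other symbols follow the list above:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_intensity(ka, Ia, lights, kd, ks, n_shiny, N, V):
    """Phong illumination at one surface point, following
    I = ka*Ia + sum_i f_i(d)*I_ni*[kd*(N.L_i) + ks*(N.H_i)^n].

    Each light is a tuple (I_ni, f_d, L) with L the unit vector toward
    the source; negative dot products are clamped to zero as usual.
    """
    I = ka * Ia
    for I_ni, f_d, L in lights:
        H = normalize(L + V)                       # ideal-mirror normal
        diff = kd * max(np.dot(N, L), 0.0)
        spec = ks * max(np.dot(N, H), 0.0) ** n_shiny
        I += f_d * I_ni * (diff + spec)
    return I

N = np.array([0.0, 0.0, 1.0])                      # surface normal
V = np.array([0.0, 0.0, 1.0])                      # view direction
L = np.array([0.0, 0.0, 1.0])                      # light straight overhead
I = phong_intensity(ka=0.1, Ia=300.0, lights=[(500.0, 1.0, L)],
                    kd=0.5, ks=0.3, n_shiny=10, N=N, V=V)
```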

Step 2.6: Building on steps 2.2 and 2.5, the virtual-world illumination model is computed. Considering the interaction between the 3D reconstruction models and the virtual object models, ray tracing is used to compute light reflection when the reconstruction models and virtual object models occlude one another, according to:

R = L − 2N(N · L)

where R is the unit vector of the reflected ray, L is the direction vector of the incident ray, and N is the unit surface normal, with R, L, and N coplanar. Hidden-surface removal and shadow generation are realized by ray tracing, and finally the illumination model of the whole augmented reality virtual world is computed as:
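The reflection formula can be transcribed directly; R, L, and N are coplanar by construction, since R is a linear combination of L and N:

```python
import numpy as np

def reflect(L, N):
    """Reflected-ray direction R = L - 2N(N.L), with L the incident
    direction and N the unit surface normal."""
    return L - 2.0 * N * np.dot(N, L)

L = np.array([1.0, -1.0, 0.0])   # incoming ray hitting a floor
N = np.array([0.0, 1.0, 0.0])    # upward surface normal
R = reflect(L, N)                 # bounces back upward: (1, 1, 0)
```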

I(x, y) = k_a · (1/T) Σ_{j=1}^{T} (N_R + N_G + N_B)_j
        + Σ_{i=1}^{M} f_i(d) (N_R + N_G + N_B)_i (V · [L_i − 2N(N · L_i)]) [ k_d (N · L_i) + k_s (N · (L_i + V)/2)^n ]

where I(x, y) is the computed light intensity at point (x, y); k_a is the diffuse reflection coefficient of the model surface for ambient light; k_d is the diffuse reflection coefficient of the model surface; k_s is the specular reflection coefficient of the model surface; n is the specular highlight exponent; f_i(d) is the intensity attenuation factor of the i-th point source; N_R, N_G, N_B are the RGB values of a pixel; L_i is the unit vector in the emission direction of the i-th point source; N is the unit surface normal at (x, y); V is the unit vector of the observer's line of sight; T is the total number of pixels; and M is the total number of light sources, obtained by the illumination-model pattern matching described above.
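A literal transcription of the combined formula, keeping the patent's symbols: each light carries its pixel RGB sum and attenuation factor, the ambient term averages the channel sums over all T pixels, and the directional term weights the Phong bracket by the alignment V · [L_i − 2N(N · L_i)]. Sign conventions follow the symbol list as written, so the directional term can be negative for some geometries (as in the collinear test case below):

```python
import numpy as np

def scene_intensity(panorama, lights, ka, kd, ks, n_shiny, N, V):
    """Final illumination at a point per the patent's combined formula.

    Each light is (rgb_sum, f_d, L): the source pixel's N_R+N_G+N_B,
    its attenuation factor f_i(d), and its emission direction L_i.
    """
    T = panorama.shape[0] * panorama.shape[1]
    ambient = ka * panorama.astype(np.int64).sum() / T   # mean channel sum
    I = ambient
    for rgb_sum, f_d, L in lights:
        R = L - 2.0 * N * np.dot(N, L)                   # reflected ray
        bracket = kd * np.dot(N, L) + ks * np.dot(N, (L + V) / 2.0) ** n_shiny
        I += f_d * rgb_sum * np.dot(V, R) * bracket
    return I

pano = np.full((2, 2, 3), 100, dtype=np.uint8)           # uniform panorama
N = np.array([0.0, 0.0, 1.0])
V = np.array([0.0, 0.0, 1.0])
L = np.array([0.0, 0.0, 1.0])
# With N, L, V collinear the reflected ray points away from the viewer,
# so V.R = -1 and the directional contribution is negative.
I = scene_intensity(pano, [(600.0, 1.0, L)], ka=0.1, kd=0.5, ks=0.3,
                    n_shiny=2, N=N, V=V)
```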

Finally, it should be noted that the above is only a preferred embodiment of the present invention. Those of ordinary skill in the art may make improvements or equivalent substitutions without departing from the principle of generating the augmented reality environment illumination model in real time with a spherical panoramic camera, and such improvements and equivalent substitutions shall also fall within the protection scope of the present invention.

Claims (5)

1. A method of generating an augmented reality environment illumination model in real time with a spherical panoramic camera, characterized by comprising:

(1) placing a spherical panoramic camera in the augmented reality environment at the observation viewpoint of the virtual world, and collecting panoramic images in real time;

(2) after the panoramic image of step (1) is collected, applying texture maps carrying light attributes to the 3D reconstruction models of real-world objects, as follows: first performing background segmentation on the real objects in the augmented reality scene, extracting the foreground image, and converting the panorama coordinates into the virtual-world coordinate system;

(3) after the panoramic image of step (1) is collected, computing, for virtual-world objects, the light intensity values of the collected panorama, performing illumination-model pattern matching, and generating the virtual-world illumination model by calculation;

(4) computing the illumination model of the augmented reality scene while accounting for the mutual local influence of the reconstruction models and the virtual object models, as follows:

(4.1) using ray tracing to compute light reflection when the reconstruction models and virtual object models occlude one another, according to:

R = L − 2N(N · L)

where R is the unit vector of the reflected ray, L is the direction vector of the incident ray, and N is the unit surface normal, with R, L, and N coplanar;

(4.2) realizing hidden-surface removal and shadow generation by ray tracing, and finally computing the illumination model of the whole augmented reality virtual world as:

I(x, y) = k_a · (1/T) Σ_{j=1}^{T} (N_R + N_G + N_B)_j + Σ_{i=1}^{M} f_i(d) (N_R + N_G + N_B)_i (V · [L_i − 2N(N · L_i)]) [ k_d (N · L_i) + k_s (N · (L_i + V)/2)^n ]

where I(x, y) is the computed light intensity at point (x, y); k_a is the diffuse reflection coefficient of the model surface for ambient light; k_d is the diffuse reflection coefficient of the model surface; k_s is the specular reflection coefficient of the model surface; n is the specular highlight exponent; f_i(d) is the intensity attenuation factor of the i-th point source; N_R, N_G, N_B are the RGB values of a pixel; L_i is the unit vector in the emission direction of the i-th point source; N is the unit surface normal at (x, y); V is the unit vector of the observer's line of sight; T is the total number of pixels; and M is the total number of light sources, obtained by the illumination-model pattern matching method.

2. The method of generating an augmented reality environment illumination model in real time with a spherical panoramic camera according to claim 1, characterized in that the formula of step (2) for converting the panorama coordinates into the virtual-world coordinate system is as follows: let the panorama coordinates be (u, v) and the side length of the cubic space be d; the conversion to world coordinates (x′, y′, z′) is:

[Formula image C2008101012900003C1]

3. The method of generating an augmented reality environment illumination model in real time with a spherical panoramic camera according to claim 1, characterized in that the panorama light intensity in step (3) is computed as follows: let the panorama coordinates be (u, v) and the red R, green G, and blue B values of the corresponding pixel be N_R, N_G, N_B; using the RGB-to-HSI conversion, the light intensity of the corresponding point is:

N_I = N_R + N_G + N_B

4. The method of generating an augmented reality environment illumination model in real time with a spherical panoramic camera according to claim 1, characterized in that the illumination-model pattern matching of step (3) is performed as follows:

(3.1) first filtering the panoramic intensity map against an intensity threshold and removing noise points to obtain the panorama's light-source pattern;

(3.2) dividing light-source patterns into single-point-source, multi-point-source, natural-light, and custom modes: if there is exactly one point source, the environment is judged to be in single-point-source mode; if there are several point sources scattered apart from one another, multi-point-source mode; if there are a large number of evenly distributed point sources, natural-light mode; for special lighting environments, the light-source pattern can be defined manually.

5. The method of generating an augmented reality environment illumination model in real time with a spherical panoramic camera according to claim 1, characterized in that the virtual-world illumination model of step (3) is computed as:

I = k_a I_a + Σ_{i=1}^{M} f_i(d) I_{ni} [ k_d (N · L_i) + k_s (N · H_i)^n ]

where M is the total number of point sources in the scene; I is the light intensity at surface point (x, y); I_a is the incident ambient light intensity, i.e. the average intensity over all panorama pixels; I_{ni} and f_i(d) are the incident intensity of the i-th point source and its intensity attenuation factor; k_a is the diffuse reflection coefficient of the model surface for ambient light; k_d is the diffuse reflection coefficient of the model surface; k_s is the specular reflection coefficient of the model surface; n is the specular highlight exponent; L_i is the unit vector in the emission direction of the i-th point source; N is the unit surface normal at (x, y); H_i is the unit normal of the ideal mirror that reflects the incident light toward the observer [formula image C2008101012900003C5]; and V is the unit vector of the observer's line of sight.
CN200810101290A | Priority/Filing date: 2008-03-03 | A Method of Real-time Generating Augmented Reality Environment Illumination Model Using Spherical Panoramic Camera | Expired - Fee Related | CN100594519C (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN200810101290A | 2008-03-03 | 2008-03-03 | A Method of Real-time Generating Augmented Reality Environment Illumination Model Using Spherical Panoramic Camera


Publications (2)

Publication Number | Publication Date
CN101246600A (en) | 2008-08-20
CN100594519C (en) | 2010-03-17

Family

ID=39947036


Country Status (1)

Country | Link
CN | CN100594519C (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN102436663A * | 2010-08-12 | 2012-05-02 | Pantech Co., Ltd. | User equipment, server, and method for selectively filtering augmented reality
US9449428B2 | 2009-12-21 | 2016-09-20 | Thomson Licensing | Method for generating an environment map
US12437488B2 | 2023-09-25 | 2025-10-07 | International Business Machines Corporation | Augmenting objects for immersive gaming environments

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9566029B2 (en) | 2008-09-30 | 2017-02-14 | Cognisens Inc. | Method and device for assessing, training and improving perceptual-cognitive abilities of individuals
FR2965652A1 (en)* | 2010-09-30 | 2012-04-06 | Thomson Licensing | Method for estimating the quantity of light received at a point of a virtual environment
WO2012078006A2 (en)* | 2010-12-09 | 2012-06-14 | Samsung Electronics Co., Ltd. | Image processor, lighting processor and method therefor
CN102236912A (en)* | 2011-07-08 | 2011-11-09 | Tsinghua University | Three-dimensional reconstruction method and device for a moving target under variable illumination conditions
CN103438868B (en)* | 2012-01-09 | 2015-09-09 | Liu Jin | Object height measurement method based on a spherical panoramic camera
WO2014040281A1 (en)* | 2012-09-14 | 2014-03-20 | Huawei Technologies Co., Ltd. | Augmented reality processing method and device for mobile terminal
US9524585B2 (en) | 2012-11-05 | 2016-12-20 | Microsoft Technology Licensing, LLC | Constructing augmented reality environment with pre-computed lighting
CN103108452B (en)* | 2013-01-10 | 2015-01-21 | Beihang University | Scene illumination reproduction method driven by dynamic light field data
CN103489002B (en)* | 2013-09-27 | 2017-03-29 | Guangzhou Institute of Software Application Technology, Chinese Academy of Sciences | Augmented reality method and system
CN104580920B (en)* | 2013-10-21 | 2018-03-13 | Huawei Technologies Co., Ltd. | Imaging method and user terminal
CN103761763B (en)* | 2013-12-18 | 2017-01-04 | Microsoft Technology Licensing, LLC | Method for constructing an augmented reality environment using pre-computed lighting
CN104268939B (en)* | 2014-09-28 | 2017-02-08 | State Grid Corporation of China | Transformer substation virtual-reality management system based on three-dimensional panoramic view and implementation method thereof
CN104331929B (en)* | 2014-10-29 | 2018-02-02 | Shenzhen Institutes of Advanced Technology | Crime scene restoration method based on video map and augmented reality
CN104392481B (en)* | 2014-11-25 | 2017-12-05 | Wuxi Fantian Information Technology Co., Ltd. | Method and device for controlling specular highlight definition using texture maps
US9779512B2 (en)* | 2015-01-29 | 2017-10-03 | Microsoft Technology Licensing, LLC | Automatic generation of virtual materials from real-world materials
CN105138763A (en)* | 2015-08-19 | 2015-12-09 | Sun Yat-sen University | Method for superimposing real scene and reality information in augmented reality
CN105447906B (en)* | 2015-11-12 | 2018-03-13 | Zhejiang University | Relighting rendering method based on illumination parameters computed from images and models
CN105488840B (en)* | 2015-11-26 | 2019-04-23 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic device
CN105844695B (en)* | 2016-03-18 | 2017-05-24 | Shandong University | Illumination modeling method based on real material measurement data
CN106056550B (en)* | 2016-05-25 | 2019-02-22 | Guangdong University of Technology | Rendering method and device based on high dynamic range images
CN106327438B (en)* | 2016-08-12 | 2019-02-26 | Wuhan Xiubao Software Co., Ltd. | Augmented reality method for eliminating highlights and repeated textures, and crawling-mat application
CN106840389A (en)* | 2016-12-30 | 2017-06-13 | Goertek Technology Co., Ltd. | Light source estimation method and device based on multiple spheres, and intelligent electronic device
CN106658148B (en)* | 2017-01-16 | 2020-04-10 | Shenzhen Skyworth-RGB Electronics Co., Ltd. | VR playing method, VR playing device and VR playing system
CN106940897A (en)* | 2017-03-02 | 2017-07-11 | Suzhou Snail Digital Technology Co., Ltd. | Method for introducing real shadows into AR scenes
CN106981087A (en)* | 2017-04-05 | 2017-07-25 | Hangzhou Lejian Technology Co., Ltd. | Lighting effect rendering method and device
CN109427089B (en)* | 2017-08-25 | 2023-04-28 | Microsoft Technology Licensing, LLC | Mixed reality object rendering based on ambient lighting conditions
CN107749076B (en)* | 2017-11-01 | 2021-04-20 | Pacific Future Technology (Shenzhen) Co., Ltd. | Method and device for generating real illumination in an augmented reality scene
CN108509887A (en)* | 2018-03-26 | 2018-09-07 | Shenzhen SuperD Technology Co., Ltd. | Method, device and electronic equipment for acquiring ambient lighting information
CN109883414B (en)* | 2019-03-20 | 2021-08-27 | Baidu Online Network Technology (Beijing) Co., Ltd. | Vehicle navigation method and device, electronic equipment and storage medium
CN110033423B (en)* | 2019-04-16 | 2020-08-28 | Beijing ByteDance Network Technology Co., Ltd. | Method and apparatus for processing images
CN111866489A (en)* | 2019-04-29 | 2020-10-30 | Zhejiang Kaiqi Technology Co., Ltd. | Method for realizing immersive panoramic teaching
CN110471061A (en)* | 2019-07-16 | 2019-11-19 | Qingdao Qingying Information Technology Co., Ltd. | Simulation method and system for airborne synthetic aperture radar imaging
CN111145333B (en)* | 2019-12-11 | 2022-08-12 | Jiangsu Aijia Household Products Co., Ltd. | Indoor scene illumination layout method
CN111127624A (en)* | 2019-12-27 | 2020-05-08 | Zhuhai Kingsoft Online Game Technology Co., Ltd. | Illumination rendering method and device based on AR scenes
CN111932641B (en)* | 2020-09-27 | 2021-05-14 | Beijing Dajia Internet Information Technology Co., Ltd. | Image processing method and device, electronic equipment and storage medium
CN112562051B (en)* | 2020-11-30 | 2023-06-27 | Tencent Technology (Shenzhen) Co., Ltd. | Virtual object display method, device, equipment and storage medium
CN114979457B (en)* | 2021-02-26 | 2023-04-07 | Huawei Technologies Co., Ltd. | Image processing method and related device
CN113743380B (en)* | 2021-11-03 | 2022-02-15 | Jiangsu Bozidao Intelligent Industry Technology Research Institute Co., Ltd. | Active tracking method based on dynamic monitoring of video images
CN115130171B (en)* | 2022-05-25 | 2025-04-25 | Beijing Hetu United Innovation Technology Co., Ltd. | AR-scene-based environmental analysis system, method, electronic device and storage medium
CN116485984B (en)* | 2023-06-25 | 2024-05-31 | Shenzhen Yuanrong Qixing Technology Co., Ltd. | Global illumination simulation method, device, equipment and medium for panoramic image vehicle model
CN116664752B (en)* | 2023-08-01 | 2023-10-17 | Nanjing Weisaike Network Technology Co., Ltd. | Method, system and storage medium for realizing panoramic display based on patterned illumination
CN119323656B (en)* | 2024-12-16 | 2025-06-20 | Shandong Tianjing Electronic Technology Co., Ltd. | Method for synthesizing realistic virtual images

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of Image-Based Illumination Models. Wang Jun. Computer Engineering and Design, Vol. 26, No. 1, 2005. *
Research on Establishing Illumination Models for Augmented Reality Systems. Zhou Ya. Journal of Image and Graphics, Vol. 9, No. 8, 2004. *


Also Published As

Publication number | Publication date
CN101246600A (en) | 2008-08-20

Similar Documents

Publication | Publication Date | Title
CN100594519C (en) | A Method of Real-time Generating Augmented Reality Environment Illumination Model Using Spherical Panoramic Camera
CN102096941B (en) | Consistent lighting method in a virtual-real fusion environment
CN107341853B (en) | Virtual-real fusion method and system for super-large virtual scenes and dynamic screen shooting
CN110148204B (en) | Method and system for representing virtual objects in a view of a real environment
WO2022121645A1 (en) | Method for generating a sense of reality of virtual objects in a teaching scene
CN103500465B (en) | Fast rendering method for ancient cultural relic scenes based on augmented reality technology
CN104766270B (en) | Fisheye-based virtual-real illumination fusion method
CN108460841A (en) | Indoor scene lighting environment estimation method based on a single image
CN105844695B (en) | Illumination modeling method based on real material measurement data
CN104463198A (en) | Method for illumination estimation of a real lighting environment
CN102426695A (en) | Virtual-real illumination fusion method for single-image scenes
CN110033509B (en) | Method for constructing three-dimensional face normals based on diffuse reflection gradient polarized light
JP2006053694A (en) | Space simulator, space simulation method, space simulation program, recording medium
CN107330964B (en) | Display method and system for complex three-dimensional objects
CN103226830A (en) | Automatic matching correction method for video texture projection in three-dimensional virtual-real fusion environments
CN106919257B (en) | Texture force reproduction method based on image brightness information in force-tactile interaction
WO2021151380A1 (en) | Method for rendering virtual objects based on illumination estimation, method for training a neural network, and related products
CN110060335B (en) | Virtual-real fusion method for mirror and transparent objects in a scene
CN116485984B (en) | Global illumination simulation method, device, equipment and medium for panoramic image vehicle model
CN107016719B (en) | Real-time rendering method for subsurface scattering effects in screen space
CN104517313B (en) | Screen-space ambient occlusion method
CN108364292 (en) | Illumination estimation method based on several multi-view images
Franke | Delta light propagation volumes for mixed reality
WO2024152649A1 (en) | Wave field reconstruction method based on optical sensing
CN103413346B (en) | Realistic fluid real-time reconstruction method and system

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
C17 | Cessation of patent right
CF01 | Termination of patent right due to non-payment of annual fee

Granted publication date: 2010-03-17

Termination date: 2013-03-03

