CN111726594A - Implementation method for efficient optimized rendering and pose anti-distortion fusion - Google Patents

Implementation method for efficient optimized rendering and pose anti-distortion fusion

Info

Publication number
CN111726594A
CN111726594A (application CN201910218901.0A)
Authority
CN
China
Prior art keywords
rendering
pose
fusion
fov
gpu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910218901.0A
Other languages
Chinese (zh)
Other versions
CN111726594B (en)
Inventor
周正华
周益安
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Taojinglihua Information Technology Co ltd
Original Assignee
Shanghai Flying Ape Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Flying Ape Information Technology Co ltd
Priority to CN201910218901.0A, patent CN111726594B (en)
Publication of CN111726594A (en)
Application granted
Publication of CN111726594B (en)
Legal status: Active
Anticipated expiration

Abstract

The invention provides a method for implementing efficient optimized rendering and pose anti-distortion fusion, relating to the embedded field and comprising the following steps. S1: use the CPU to acquire external input data of four kinds: a panoramic or 3D data source, pose information, FOV (field of view), and projection mode. S2: use the VPU for hardware decoding and transmit the decoded output to the GPU. S3: use the GPU for texture rendering, including projection modeling, chroma space down-conversion, FOV initialization, and pose fusion. S4: iterate in a loop according to the requirements of the video or image. The invention makes full use of hardware acceleration, provides a comprehensive real-time video pipeline rendering method with multiple projection modes and support for FOV and pose information, and uses the performance of the VPU (video processing unit), CPU (central processing unit), and GPU (graphics processing unit) to solve the VR (virtual reality) rendering of high-resolution panoramic videos, panoramic images, and 3D video images.

Description

Translated from Chinese
Implementation method for efficient optimized rendering and pose anti-distortion fusion

Technical Field

The present invention relates to the field of embedded systems, and in particular to an implementation method for efficient optimized rendering and pose anti-distortion fusion.

Background

With the rise of VR (virtual reality), making the most common handheld devices support the generation and output of VR panoramas has become a popular research topic.

Most solutions from ISVs (independent software vendors) rely on CPU instruction acceleration, for example accelerating decoding with hand-written assembly. For panoramic rendering, however, traditional instruction-based acceleration is essentially infeasible: the workload involves too many matrix operations, and basic instruction acceleration falls far short. Hardware acceleration does exist, but it is generally limited to video decoding, and for matrix-heavy scenarios such as panorama rendering and anti-distortion a traditional CPU is very inefficient. With the rise of VR, GPU-based rendering has also appeared, but without combining the VR-relevant capabilities of the GPU (graphics processing unit), CPU (general-purpose processing unit), and VPU (video processing unit), it is difficult to optimize overall hardware performance and achieve VR rendering.

Summary of the Invention

In view of the above shortcomings of the prior art, the object of the present invention is to provide an implementation method for efficient optimized rendering and pose anti-distortion fusion that supports multiple projection modes and exploits the performance of the VPU, CPU, and GPU to optimize overall hardware performance and achieve VR rendering of high-resolution panoramic video.

The present invention provides an implementation method for efficient optimized rendering and pose anti-distortion fusion, the method comprising the following steps:

S1: using the CPU to collect external input data of four kinds: a panoramic or 3D data source, pose information, FOV, and projection mode;

S2: using the VPU for hardware decoding, and transmitting the decoded output to the GPU;

S3: using the GPU for texture rendering, the texture rendering including projection modeling, chroma space down-conversion, FOV initialization, and pose fusion;

S4: iterating in a loop according to the requirements of the video or image.

Further, the panoramic or 3D data source is a video or image in an equirectangular (equal-latitude-and-longitude) panoramic format or in a 3D format; the pose information is the output data of a device capable of providing three-dimensional pose information; and the FOV is the display field of view.

Further, the projection modes include a planar projection mode, a spherical projection mode, and a cube projection mode.

Further, anti-distortion is also treated as a special projection mode.

Further, the texture rendering comprises the following steps:

S3.1: performing projection modeling based on the projection mode;

S3.2: initializing the GPU according to the external input data;

S3.3: performing stitching and fusion with a custom vertex shader;

S3.4: performing chroma space conversion with a custom fragment shader;

S3.5: performing pose fusion based on the pose information and the FOV;

S3.6: using the GPU pipeline for real-time rendering, and using a ping-pong buffer mechanism to control the output display.

As described above, the implementation method for efficient optimized rendering and pose anti-distortion fusion of the present invention has the following beneficial effects. The invention makes full use of the VR-relevant capabilities of a handheld device's CPU, VPU, and GPU to render panoramic videos and images. Taking into account the large resolution of panoramic video and images, the different projection modes, the pose information, the FOV, and other requirements, it effectively uses hardware acceleration to satisfy the rendering demand, thereby providing an effective technical guarantee for efficient panoramic rendering on general-purpose embedded systems. It fully exploits the hardware capability of a general-purpose system, greatly lowers the hardware requirements of panoramic rendering, strongly promotes practical panorama applications, and can decode and process 2K, 4K, 6K, and future higher-resolution panoramas.

Brief Description of the Drawings

Fig. 1 is an overall flowchart of the implementation method disclosed in an embodiment of the present invention;

Fig. 2 is a diagram of the relationship among the CPU, GPU, and VPU disclosed in an embodiment of the present invention;

Fig. 3 is a flowchart of the stitching and fusion step disclosed in an embodiment of the present invention;

Fig. 4 is a flowchart of the chroma space down-conversion step disclosed in an embodiment of the present invention;

Fig. 5 is a flowchart of the pose fusion step disclosed in an embodiment of the present invention.

Detailed Description

The embodiments of the present invention are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present invention from the contents disclosed in this specification. The present invention can also be implemented or applied through other different specific embodiments, and the details in this specification can be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the present invention. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments may be combined with each other.

It should be noted that the drawings provided in the following embodiments only illustrate the basic concept of the present invention schematically, so the drawings show only the components related to the present invention rather than the number, shape, and size of components in an actual implementation. In an actual implementation, the type, quantity, and proportion of each component may vary freely, and the component layout may also be more complicated.

As shown in Fig. 1 and Fig. 2, the present invention provides an implementation method for efficient optimized rendering and pose anti-distortion fusion, comprising the following steps:

S1: Use the CPU to collect external input data of four kinds: a panoramic or 3D data source, pose information, FOV (field of view), and projection mode.

The panoramic or 3D data source may be a video or an image in an equirectangular panoramic format, generally with a 2:1 aspect ratio; for stereoscopic content, the left-right layout has a 4:1 aspect ratio and the top-bottom layout 1:1. The pose information is the output data of a device capable of providing three-dimensional pose information, for example theta/phi/gamma (rotation angles about the X, Y, and Z axes); it generally comes from a gyroscope, but is not limited to one. The FOV is generally the display field of view, commonly 90°, 110°, or 130°. The commonly used projection modes are the planar, spherical, and cube projection modes.
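As a concrete illustration of the equirectangular mapping used by such a data source, the sketch below converts a 3-D view direction into UV texture coordinates on a 2:1 panorama. This is not code from the patent; the axis convention (Y up, -Z forward) is an assumption.

```python
import math

def equirect_uv(x, y, z):
    """Map a 3-D view direction to UV coordinates on a 2:1
    equirectangular panorama (u, v in [0, 1])."""
    # Normalize the direction vector.
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    # Longitude in [-pi, pi], latitude in [-pi/2, pi/2].
    lon = math.atan2(x, -z)
    lat = math.asin(y)
    u = lon / (2.0 * math.pi) + 0.5
    v = 0.5 - lat / math.pi
    return u, v
```

Looking straight ahead lands at the center of the texture, and moving the view direction left/right or up/down sweeps across the full longitude and latitude range of the panorama.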

In addition, anti-distortion is also treated as a special projection mode.

S2: Use the VPU for hardware decoding, and transmit the decoded output to the GPU.

In traditional video rendering, video decoding is the top priority. For panoramic rendering it is not the most important step, but it remains an important link: the VPU performs hardware decoding to obtain the texture input, which is passed on to the GPU for high-performance computation.

Besides the usual decoded video data, watermark or logo data is also a texture input.
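Treating a watermark or logo as a second texture input amounts to an alpha blend in the fragment stage; the per-texel sketch below is illustrative, not the patent's shader.

```python
def blend_logo(frame_px, logo_px, alpha):
    """Alpha-blend a logo/watermark texel over a frame texel,
    as a fragment shader would when the logo is bound as a
    second texture input (channel values in [0, 1])."""
    return tuple(l * alpha + f * (1.0 - alpha)
                 for l, f in zip(logo_px, frame_px))
```

With `alpha = 0` the frame passes through unchanged; with `alpha = 1` the logo fully covers the frame at that texel.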

S3: Use the GPU for texture rendering; the texture rendering includes projection modeling, chroma space down-conversion, FOV initialization, and pose fusion.

The texture rendering comprises the following steps:

S3.1: Perform projection modeling based on the projection mode.

S3.2: Initialize the GPU according to the external input data.

S3.3: Perform stitching and fusion with a custom vertex shader. Taking spherical projection as an example, the custom vertex shader computes the spherical XYZ coordinates, the UV (texture) coordinates of the panoramic or 3D data source, the blending weights, and the vertex-order coordinates, and stitches and fuses the panoramic or 3D data source. As shown in Fig. 3, the number of cells in each row and column is obtained from the original image of the panoramic or 3D data source through a LUT (look-up table); by configuring the number of rows and columns, rendering quality and efficiency can be traded off against each other.
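The row/column grid that the vertex shader consumes can be sketched as follows. This is an illustrative CPU-side generator, not the patent's LUT-driven implementation; a real pipeline would upload these arrays as vertex buffers.

```python
import math

def sphere_grid(rows, cols, radius=1.0):
    """Generate vertex positions and equirectangular UVs for a
    latitude/longitude sphere grid. More rows/cols -> higher
    rendering quality at higher vertex cost."""
    verts, uvs = [], []
    for r in range(rows + 1):
        lat = math.pi * (r / rows - 0.5)            # [-pi/2, pi/2]
        for c in range(cols + 1):
            lon = 2.0 * math.pi * (c / cols - 0.5)  # [-pi, pi]
            x = radius * math.cos(lat) * math.sin(lon)
            y = radius * math.sin(lat)
            z = -radius * math.cos(lat) * math.cos(lon)
            verts.append((x, y, z))
            uvs.append((c / cols, 1.0 - r / rows))
    return verts, uvs
```

Raising `rows` and `cols` smooths the sphere at the cost of more vertices, which is exactly the quality/efficiency balance the row/column configuration controls.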

The LUT may be self-calibrated output data, or may be generated by a third-party tool such as PT-GUI; it is a look-up table, built after feature matching, that is used to unwrap specific positions.

S3.4: Perform chroma space down-conversion with a custom fragment shader. As shown in Fig. 4, the transformation matrix from YUV space to RGB space is configured in the custom fragment shader, converting the YUV data of each frame into RGB information suitable for LCD display.
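The patent does not give the matrix coefficients; a common choice is the full-range BT.601 matrix, sketched below for a single sample (a fragment shader would apply the same arithmetic per pixel).

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range YUV (BT.601) sample to RGB,
    mirroring the matrix a fragment shader would apply.
    All channels are in [0, 1]; chroma is centered at 0.5."""
    r = y + 1.402 * (v - 0.5)
    g = y - 0.344136 * (u - 0.5) - 0.714136 * (v - 0.5)
    b = y + 1.772 * (u - 0.5)
    clamp = lambda t: min(1.0, max(0.0, t))
    return clamp(r), clamp(g), clamp(b)
```

BT.709 or limited-range variants would use different coefficients; which matrix applies depends on how the VPU tags the decoded frames.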

S3.5: Perform pose fusion based on the pose information and the FOV. As shown in Fig. 5, first obtain the FOV-based view projection from the FOV input, then perform fusion projection based on the pose matrix, and finally obtain the final projection according to the initialized magnification attribute.
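One conventional way to realize this fusion is to compose a perspective projection derived from the FOV with a rotation built from the pose angles. The sketch below uses an OpenGL-style projection matrix and a single-axis rotation as a stand-in for the full theta/phi/gamma pose matrix; the exact matrix conventions are assumptions, not taken from the patent.

```python
import math

def perspective(fov_deg, aspect, near, far):
    """OpenGL-style perspective matrix from a vertical FOV in degrees."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [[f / aspect, 0, 0, 0],
            [0, f, 0, 0],
            [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
            [0, 0, -1, 0]]

def rot_y(deg):
    """Rotation about the Y axis (one of the theta/phi/gamma angles)."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def matmul(a, b):
    """4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Fuse: projection (from the FOV input) composed with the pose rotation.
mvp = matmul(perspective(90.0, 1.0, 0.1, 100.0), rot_y(30.0))
```

A full implementation would chain three rotations (and the magnification/zoom scale) before the projection, but the composition order shown — projection applied after the pose transform — is the standard one.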

S3.6: Use the GPU pipeline for real-time rendering, and use a ping-pong buffer (double buffering) mechanism to control the output display.
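The ping-pong mechanism can be sketched as two framebuffers whose roles alternate each frame — a minimal stand-in; on a real GPU the buffers would be framebuffer objects or swap-chain images.

```python
class PingPongBuffers:
    """Minimal double-buffering sketch: render into the back buffer
    while the front buffer is being displayed, then swap."""
    def __init__(self):
        self.buffers = [bytearray(4), bytearray(4)]  # tiny stand-in framebuffers
        self.front = 0  # index of the buffer currently on screen

    @property
    def back(self):
        return 1 - self.front

    def render(self, pixels):
        self.buffers[self.back][:] = pixels  # draw off-screen

    def swap(self):
        self.front = self.back               # present the finished frame
```

Because the display only ever reads the front buffer, rendering into the back buffer can never tear the image that is currently on screen.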

S4: Iterate in a loop according to the requirements of the video or image.

Anti-distortion, logos, and watermarks can be regarded as special forms of ordinary rendering following the same logic: through customized vertex and fragment shaders, anti-distortion processing is carried out alongside ordinary rendering, and watermarks and logos are rendered into the output.
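Lens anti-distortion is typically implemented as a radial pre-warp of the rendered image so that the headset optics cancel it; the sketch below uses a common two-coefficient polynomial model (the coefficients are illustrative, not values from the patent).

```python
def predistort(u, v, k1=0.22, k2=0.24):
    """Radial pre-distortion of normalized screen coordinates
    (centered at 0): samples are pushed outward so the lens's
    pincushion distortion cancels the warp. k1 and k2 are
    illustrative lens coefficients, not values from the patent."""
    r2 = u * u + v * v
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return u * scale, v * scale
```

In a shader this runs per fragment (or per vertex on a fine grid), which is why the patent can treat anti-distortion as just another projection mode in the same vertex/fragment pipeline.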

In summary, the present invention makes full use of hardware acceleration to provide a comprehensive real-time video pipeline rendering method with multiple projection modes and support for FOV (field of view), pose information, and so on; using the performance of the VPU, CPU, and GPU, it solves the VR rendering of high-resolution panoramic video. The present invention therefore effectively overcomes various shortcomings of the prior art and has high industrial value.

The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those with ordinary knowledge in the technical field without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (5)

Translated from Chinese
1. An implementation method for efficient optimized rendering and pose anti-distortion fusion, characterized in that the method comprises the following steps:
S1: using the CPU to collect external input data of four kinds: a panoramic or 3D data source, pose information, FOV, and projection mode;
S2: using the VPU for hardware decoding, and transmitting the decoded output to the GPU;
S3: using the GPU for texture rendering, the texture rendering including projection modeling, chroma space down-conversion, FOV initialization, and pose fusion;
S4: iterating in a loop according to the requirements of the video or image.
2. The method of claim 1, characterized in that the panoramic or 3D data source is a video or image in an equirectangular panoramic format or in a 3D format; the pose information is the output data of a device capable of providing three-dimensional pose information; and the FOV is the display field of view.
3. The method of claim 1, characterized in that the projection modes include a planar projection mode, a spherical projection mode, and a cube projection mode.
4. The method of claim 1, characterized in that anti-distortion is also a special projection mode.
5. The method of claim 1, characterized in that the texture rendering comprises the following steps:
S3.1: performing projection modeling based on the projection mode;
S3.2: initializing the GPU according to the external input data;
S3.3: performing stitching and fusion with a custom vertex shader;
S3.4: performing chroma space conversion with a custom fragment shader;
S3.5: performing pose fusion based on the pose information and the FOV;
S3.6: using the GPU pipeline for real-time rendering, and using a ping-pong buffer mechanism to control the output display.
CN201910218901.0A | 2019-03-21 | Implementation method for efficient optimized rendering and pose anti-distortion fusion | Active | CN111726594B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910218901.0A (CN111726594B) | 2019-03-21 | 2019-03-21 | Implementation method for efficient optimized rendering and pose anti-distortion fusion

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910218901.0A (CN111726594B) | 2019-03-21 | 2019-03-21 | Implementation method for efficient optimized rendering and pose anti-distortion fusion

Publications (2)

Publication Number | Publication Date
CN111726594A | 2020-09-29
CN111726594B (en) | 2024-11-29

Family

ID=72562771

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910218901.0A | Implementation method for efficient optimized rendering and pose anti-distortion fusion (Active, CN111726594B) | 2019-03-21 | 2019-03-21

Country Status (1)

Country | Link
CN | CN111726594B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Title
CN112164378A (en) * | 2020-10-28 | 2021-01-01 | VR glasses all-in-one machine anti-distortion method and device
CN112437287A (en) * | 2020-11-23 | 2021-03-02 | Panoramic image scanning and splicing method
CN113205599A (en) * | 2021-04-25 | 2021-08-03 | GPU accelerated video texture updating method in video three-dimensional fusion
CN114866760A (en) * | 2022-03-21 | 2022-08-05 | Virtual reality display method, equipment, system and readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
WO2016095057A1 (en) * | 2014-12-19 | 2016-06-23 | Sulon Technologies Inc. | Peripheral tracking for an augmented reality head mounted device
US20170339391A1 (en) * | 2016-05-19 | 2017-11-23 | Avago Technologies General Ip (Singapore) Pte. Ltd. | 360 degree video system with coordinate compression
CN107844190A (en) * | 2016-09-20 | 2018-03-27 | 腾讯科技(深圳)有限公司 | Image presentation method and device based on Virtual Reality equipment
US20180174619A1 (en) * | 2016-12-19 | 2018-06-21 | Microsoft Technology Licensing, Llc | Interface for application-specified playback of panoramic video
US20180176483A1 (en) * | 2014-12-29 | 2018-06-21 | Metaio GmbH | Method and system for generating at least one image of a real environment
CN108616731A (en) * | 2016-12-30 | 2018-10-02 | 艾迪普(北京)文化科技股份有限公司 | Real-time generation method for 360-degree VR panoramic image and video


Also Published As

Publication number | Publication date
CN111726594B (en) | 2024-11-29

Similar Documents

Publication | Title
US12347016B2 (en) | Image rendering method and apparatus, device, medium, and computer program product
US8692848B2 | Method and system for tile mode renderer with coordinate shader
CN111726594A | Implementation method for efficient optimized rendering and pose anti-distortion fusion
EP3121786B1 | Graphics pipeline method and apparatus
US7671862B1 | Systems and methods for providing an enhanced graphics pipeline
TWI654874B | Method and apparatus for processing a projection frame having at least one non-uniform mapping generated projection surface
CN107924556B | Image generation device and image display control device
CN106558017B | Spherical display image processing method and system
CN107392988A | System, method, and computer program product for rendering at variable sampling rates using projective geometric distortion
CN114445257B | Method for streaming light field compression using lossless or lossy compression, and storage medium
CN102999946B | 3D graphics data processing method, device, and equipment
CN112017101B | Variable rasterization rate
CN114782612A | Image rendering method and device, electronic device, and storage medium
US20080024510A1 | Texture engine, graphics processing unit, and video processing method thereof
KR20210087043A | Concurrent texture sampling
CN107392836A | Stereoscopic multi-projection implemented using a graphics processing pipeline
Chen et al. | Real-time lens based rendering algorithm for super-multiview integral photography without image resampling
CN116977532A | Cube texture generation method, apparatus, device, storage medium, and program product
CN106886974B | Image accelerator apparatus and related methods
CN114491352A | Model loading method and device, electronic equipment, and computer-readable storage medium
TW202322043A | Meshlet shading atlas
CN103617650A | Display method for complex three-dimensional terrain
CN112149383B | GPU-based real-time text layout method, electronic device, and storage medium
KR20100103703A | Multi-format support for surface creation in a graphics processing system
CN111726566B | Implementation method for real-time correction of stitching anti-shake

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
TA01 | Transfer of patent application right
Effective date of registration: 2023-04-12
Address after: 200136 Room 2903, 29th Floor, No. 28 Xinjinqiao Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai
Applicant after: Shanghai taojinglihua Information Technology Co.,Ltd.
Address before: 200126 building 13, 728 Lingyan South Road, Pudong New Area, Shanghai
Applicant before: Shanghai flying ape Information Technology Co.,Ltd.
GR01 | Patent grant
