CN118196268A - Metaverse digital human rendering method, device, equipment and storage medium


Info

Publication number
CN118196268A
Authority
CN
China
Prior art keywords
digital human
data
model
point cloud
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410450527.8A
Other languages
Chinese (zh)
Other versions
CN118196268B (en)
Inventor
吴湛
车守刚
刘永逵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Space Computing Technology Group Co ltd
Original Assignee
Guangdong Space Computing Technology Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Space Computing Technology Group Co ltd
Priority to CN202410450527.8A
Publication of CN118196268A
Application granted
Publication of CN118196268B
Legal status: Active
Anticipated expiration


Abstract

The application relates to the technical field of digital human rendering, and discloses a method, device, equipment and storage medium for rendering a metaverse digital human. The method comprises the following steps: performing three-dimensional data scanning of a target person with a preset high-resolution three-dimensional scanning device to obtain point cloud data; constructing a digital human of the target person in the metaverse according to the point cloud data to obtain an initial digital human model; configuring texture attributes of the initial digital human to obtain a digital human model to be processed; and collecting action data of the target person and performing model rendering optimization on the digital human to be processed according to the action data to obtain a target digital human model.

Description

Translated from Chinese
Metaverse digital human rendering method, device, equipment and storage medium

Technical Field

The present application relates to the field of digital human rendering, and in particular to a method, device, equipment and storage medium for rendering a metaverse digital human.

Background Art

In the field of digital human rendering, existing approaches rely mainly on traditional three-dimensional modeling, texture mapping, and simple lighting techniques. These techniques can create basic three-dimensional character models, add textures and materials manually, and realize basic action animation. Traditional methods also apply simple ambient-lighting simulation to enhance the realism of a scene. Together, these techniques provide basic support for fields such as digital entertainment, virtual reality, and augmented reality, allowing digital characters to be used in a variety of virtual environments.

However, existing techniques fall short when rendering highly realistic, complex digital humans. First, traditional three-dimensional modeling and texture mapping often fail to achieve convincing results for fine human detail (skin texture, facial expressions, and the like), especially under complex lighting and in dynamic environments. Second, the naturalness and smoothness of action animation are difficult to optimize by hand, which limits the expressiveness and interactivity of digital characters. Finally, simple lighting techniques cannot accurately simulate the complex interaction of light in the real world, such as reflection, refraction and scattering, so rendered scenes lack depth and layering.

Summary of the Invention

The present application provides a method, apparatus, device and storage medium for rendering a metaverse digital human, which serve to improve the realism of metaverse digital human rendering.

In a first aspect, the present application provides a method for rendering a metaverse digital human, comprising: performing three-dimensional data scanning of a target person with a preset high-resolution three-dimensional scanning device to obtain point cloud data; constructing a digital human of the target person in the metaverse according to the point cloud data to obtain an initial digital human model; configuring texture attributes of the initial digital human to obtain a digital human model to be processed; and collecting action data of the target person and performing model rendering optimization on the digital human to be processed according to the action data to obtain a target digital human model.

In combination with the first aspect, in a first implementation of the first aspect of the present application, performing three-dimensional data scanning of the target person with the preset high-resolution three-dimensional scanning device to obtain point cloud data comprises: performing three-dimensional data scanning of the target person with the high-resolution three-dimensional scanning device to obtain initial point cloud data; down-sampling the initial point cloud data to obtain sampled point cloud data; constructing a graph structure from the sampled point cloud data and extracting structural data of the graph structure; inputting the structural data into a preset graph neural network for noise-point identification to obtain noise-point data; denoising the sampled point cloud data based on the noise-point data to obtain denoised point cloud data; and performing data alignment on the denoised point cloud data to obtain the point cloud data.

In combination with the first aspect, in a second implementation of the first aspect of the present application, constructing the digital human of the target person in the metaverse according to the point cloud data to obtain the initial digital human model comprises: splitting the point cloud data into point cloud distribution areas to obtain a plurality of point cloud distribution areas; performing point cloud distribution density analysis on each point cloud distribution area to obtain the point cloud distribution density of each area; inputting the point cloud distribution density of each area into a preset adaptive grid reconstruction algorithm for grid attribute matching to obtain grid attribute data; based on the grid attribute data, constructing a digital human model of the target person in the metaverse with a preset Poisson surface reconstruction algorithm to obtain a first digital human model; filling holes and defects in the first digital human model to obtain a second digital human model; performing detail enhancement on the second digital human model to obtain a third digital human model; and performing multi-view fusion optimization on the third digital human model to obtain the initial digital human model.

In combination with the first aspect, in a third implementation of the first aspect of the present application, configuring texture attributes of the initial digital human to obtain the digital human model to be processed comprises: acquiring texture images of the initial digital human to obtain model texture image data; extracting high-frequency detail from the model texture image data to obtain high-frequency detail data; extracting low-frequency detail from the model texture image data to obtain low-frequency detail data; performing geometric feature analysis on the initial digital human to obtain a geometric feature set of the initial digital human; performing model curvature analysis on the initial digital human based on the high-frequency and low-frequency detail data to obtain model curvature data; inputting the geometric feature set and the model curvature data into a preset texture mapping algorithm for texture attribute mapping to obtain target texture attributes; and configuring texture attributes of the initial digital human based on the target texture attributes to obtain the digital human model to be processed.

In combination with the first aspect, in a fourth implementation of the first aspect of the present application, collecting the action data of the target person and performing model rendering optimization on the digital human to be processed according to the action data to obtain the target digital human model comprises: collecting the action data of the target person and performing time-series data matching on the action data to obtain a time-series data set; extracting key actions from the action data based on the time-series data set to obtain key action data; scanning the key action data for action details to obtain an action detail data set; performing detail coherence analysis on the action detail data set to obtain a detail coherence evaluation value; optimizing the action detail data set based on the detail coherence evaluation value to obtain an optimized detail data set; and performing model rendering optimization on the digital human to be processed based on the optimized detail data set to obtain the target digital human model.

In combination with the first aspect, in a fifth implementation of the first aspect of the present application, performing model rendering optimization on the digital human to be processed based on the optimized detail data set to obtain the target digital human model comprises: constructing a lighting model for the digital human to be processed to obtain a target lighting model; analyzing the target lighting model with a ray tracing algorithm to obtain lighting-missing areas; performing lighting enhancement on the lighting-missing areas to obtain an enhanced digital human model; performing shadow edge detection on the enhanced digital human model to obtain shadow contours; matching shadow lighting parameters to the shadow contours according to the target lighting model to obtain a shadow lighting parameter set; configuring the enhanced digital human model with the shadow lighting parameter set to obtain a configured digital human model; and performing model rendering optimization on the configured digital human model based on the optimized detail data set to obtain the target digital human model.

In combination with the first aspect, in a sixth implementation of the first aspect of the present application, performing model rendering optimization on the configured digital human model based on the optimized detail data set to obtain the target digital human model comprises: performing action type analysis on the optimized detail data set to obtain an action type set; grouping the optimized detail data set based on the action type set to obtain multiple groups of action detail data; performing action parameter analysis on each group of action detail data to obtain an action parameter set for each group; and performing model rendering optimization on the configured digital human model based on the action parameter set of each group to obtain the target digital human model.

In a second aspect, the present application provides a rendering apparatus for a metaverse digital human, comprising: a scanning module for performing three-dimensional data scanning of a target person with a preset high-resolution three-dimensional scanning device to obtain point cloud data; a construction module for constructing a digital human of the target person in the metaverse according to the point cloud data to obtain an initial digital human model; a configuration module for configuring texture attributes of the initial digital human to obtain a digital human model to be processed; and an optimization module for collecting action data of the target person and performing model rendering optimization on the digital human to be processed according to the action data to obtain a target digital human model.

A third aspect of the present application provides a rendering device for a metaverse digital human, comprising a memory and at least one processor, the memory storing instructions; the at least one processor calls the instructions in the memory so that the rendering device executes the above rendering method for a metaverse digital human.

A fourth aspect of the present application provides a computer-readable storage medium storing instructions which, when run on a computer, cause the computer to execute the above rendering method for a metaverse digital human.

In the technical solution provided by the present application, scanning the target person with a preset high-resolution three-dimensional scanning device captures the person's subtle geometric features, ensuring the fidelity and accuracy of the digital human model and providing a solid foundation for a high-quality metaverse digital human. Down-sampling the initial point cloud data, building a graph structure, extracting structural data, identifying and removing noise points, and aligning the data effectively optimize the quality of the point cloud; this improves processing efficiency while preserving the clarity and accuracy of the final model's geometric detail. An adaptive mesh reconstruction algorithm and a Poisson surface reconstruction algorithm, combined with detail enhancement and multi-view fusion optimization, automatically adjust the mesh to the point cloud distribution, fill holes and defects, and enhance model detail, greatly improving the model's visual realism and appearance. Texture image acquisition, detail extraction, geometric feature analysis and texture mapping accurately configure the model's texture attributes; in particular, combining high- and low-frequency detail data with model curvature analysis ensures that textures fit naturally and the visual result stays consistent. The target person's motion data is collected and analyzed; time-series matching, key-action extraction, action-detail scanning and coherence analysis optimize the naturalness and smoothness of motion detail, so the digital human's movements in the metaverse are both realistic and natural. Constructing a target lighting model and optimizing lighting with ray tracing, combined with shadow contour detection and lighting parameter matching, further enhance the model's sense of volume and presence. Rendering optimization ensures the digital human model looks highly realistic and attractive under a wide range of lighting conditions.

Brief Description of the Drawings

To illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of an embodiment of the metaverse digital human rendering method in an embodiment of the present application;

FIG. 2 is a schematic diagram of an embodiment of the metaverse digital human rendering apparatus in an embodiment of the present application.

Detailed Description of Embodiments

The embodiments of the present application provide a method, apparatus, device and storage medium for rendering a metaverse digital human. The terms "first", "second", "third", "fourth", etc. (if any) in the specification, claims and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used may be interchanged where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product or device.

For ease of understanding, the specific flow of an embodiment of the present application is described below. Referring to FIG. 1, one embodiment of the metaverse digital human rendering method in an embodiment of the present application comprises:

Step S101: perform three-dimensional data scanning of a target person with a preset high-resolution three-dimensional scanning device to obtain point cloud data.

Specifically, the high-resolution three-dimensional scanning device scans the target person to obtain accurate three-dimensional information about the person's appearance and form, generating initial point cloud data. Because raw scans are often very large and may contain millions of points, the initial point cloud is down-sampled to improve the efficiency of subsequent processing and reduce computation, shrinking the data while retaining enough information to represent the target person's three-dimensional form accurately. A graph structure is then built from the sampled point cloud and structural data is extracted from it; analyzing the spatial relationships between points helps later algorithms understand and process the data. The structural data is fed into a preset graph neural network, which identifies noise points in the sampled point cloud. Noise points are error points that do not belong to the target person's body and may be introduced by the scanning environment or other factors. Based on the noise-point data, the sampled point cloud is denoised, removing the unnecessary noise points and yielding a cleaner point cloud that more accurately represents the target person's three-dimensional form. Finally, data alignment is performed: the positions of the points are adjusted so that they correctly match the person's three-dimensional morphological features, giving the final point cloud data.
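The down-sample / build-neighborhood-graph / flag-noise / remove sequence above can be sketched in a few lines of numpy. The patent does not specify its graph neural network, so this sketch substitutes a simple neighbor-count rule on the point neighborhood graph as an illustrative stand-in; the voxel size, radius and threshold are assumed values, not the patent's parameters.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Down-sample by keeping one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)  # normalize shape across numpy versions
    counts = np.bincount(inverse).astype(float)
    out = np.empty((counts.size, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

def flag_noise_points(points, radius, min_neighbors):
    """Flag points with fewer than `min_neighbors` neighbors inside `radius`
    as noise -- a simple stand-in for the patent's graph-network classifier."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    neighbor_counts = (d2 < radius ** 2).sum(axis=1) - 1  # exclude the point itself
    return neighbor_counts < min_neighbors

# A dense cluster around the origin plus one spurious far-away point.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0.0, 0.05, size=(200, 3)),
                   [[5.0, 5.0, 5.0]]])

sampled = voxel_downsample(cloud, voxel_size=0.05)
noise = flag_noise_points(sampled, radius=0.2, min_neighbors=3)
denoised = sampled[~noise]
```

A production pipeline would replace `flag_noise_points` with the trained graph neural network and follow the denoising with a registration step (e.g. ICP) for the data-alignment stage.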

Step S102: construct a digital human of the target person in the metaverse according to the point cloud data to obtain an initial digital human model.

It should be noted that the point cloud data is first split into regions: the whole data set is analyzed and divided into multiple point cloud distribution areas, each containing part of the point cloud. The point density of each area is then analyzed by computing how densely points are packed within it; this density information guides the reconstruction algorithm to apply different strategies in different areas for a better result. The densities are fed into a preset adaptive grid reconstruction algorithm for grid attribute matching; the algorithm reconstructs different areas at different grid densities according to their point densities, generating grid attribute data. Based on the grid attribute data, a preset Poisson surface reconstruction algorithm builds a digital human model of the target person in the metaverse; the algorithm uses the point cloud and grid attribute data to generate a continuous, smooth surface, yielding a first digital human model. Holes and defects caused by incomplete data or limitations of the reconstruction algorithm may still exist in the model; a dedicated algorithm fills them, giving a more complete second digital human model. To further improve quality, the second model undergoes detail enhancement, adding surface detail or adjusting surface textures to strengthen realism and yield a third digital human model. Finally, the third model is optimized by multi-view fusion: its appearance from different viewpoints is analyzed, and the model data is adjusted so it looks more natural and realistic in the metaverse, yielding the initial digital human model.
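Poisson surface reconstruction itself requires a mesh library, but the density-driven "grid attribute matching" step described above can be illustrated standalone: estimate a per-region point density, then map it to a reconstruction resolution (here an octree depth, the resolution knob in typical Poisson implementations). The region labels, density bounds and depth range below are assumed illustrative values, not the patent's actual parameters.

```python
import numpy as np

def region_density(points, region_ids):
    """Points per unit bounding-box volume, computed per region."""
    densities = {}
    for rid in np.unique(region_ids):
        pts = points[region_ids == rid]
        extent = np.maximum(pts.max(axis=0) - pts.min(axis=0), 1e-6)
        densities[int(rid)] = len(pts) / np.prod(extent)
    return densities

def adaptive_octree_depth(density, lo=6, hi=10, d_min=1e2, d_max=1e6):
    """Map region density to an octree depth (finer grids for denser regions)
    via a clipped log-linear rule -- an illustrative grid-attribute matching."""
    t = (np.log10(max(density, d_min)) - np.log10(d_min)) \
        / (np.log10(d_max) - np.log10(d_min))
    return int(round(lo + min(max(t, 0.0), 1.0) * (hi - lo)))

# One densely scanned region (e.g. the face) and one sparse region.
rng = np.random.default_rng(1)
points = np.vstack([rng.uniform(0.0, 0.1, size=(1000, 3)),
                    rng.uniform(1.0, 2.0, size=(50, 3))])
region_ids = np.array([0] * 1000 + [1] * 50)

densities = region_density(points, region_ids)
depth_dense = adaptive_octree_depth(densities[0])
depth_sparse = adaptive_octree_depth(densities[1])
```

The log-linear mapping is one reasonable choice; any monotone density-to-resolution rule would serve the same role of spending mesh budget where the scan has detail.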

Step S103: configure texture attributes of the initial digital human to obtain the digital human model to be processed.

Specifically, texture images are acquired for the initial digital human: images related to the target person's skin tone, clothing and so on are captured or selected to obtain model texture image data. These data must contain not only color information but also enough detail to simulate real-world physical properties and visual effects. The model texture image data is analyzed to extract high-frequency and low-frequency detail separately. High-frequency extraction captures fine detail in the texture, such as skin creases and the weave of fabric, which strengthens the model's realism; low-frequency extraction focuses on larger-scale variation and smooth transitions, such as gradations in skin tone and broad color changes in clothing, which form the model's basic visual impression. Geometric feature analysis of the model's shape, volume and surface properties yields the initial digital human's geometric feature set; these features determine how the texture is fitted and adjusted to the model's specific shape and curvature. Based on the high- and low-frequency detail data, model curvature analysis produces curvature data; this analysis captures the bending and complexity of the surface so that, during mapping, the texture fits the surface precisely, especially on curved or concave parts. The geometric feature set and curvature data are fed together into a preset texture mapping algorithm, which maps the texture images onto the model's surface with the correct position, orientation and scale. The mapping yields the target texture attributes: precise texture placement, seamless blending between textures, and realistic presentation of texture detail. Based on these attributes, the initial digital human's texture attributes are configured and the texture detail is adjusted and optimized, ensuring that the resulting digital human model to be processed is visually rich and lifelike.
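In its simplest form, the high/low-frequency split described above is a blur-and-residual decomposition: the blurred texture carries the low-frequency band (broad tone changes) and the residual carries the high-frequency band (fine creases and weave). A minimal grayscale numpy sketch follows, with an assumed box-blur kernel standing in for whatever filter the patent intends:

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur (low-pass) with edge padding; `k` must be odd."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='valid'),
                               1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='valid'),
                               0, rows)

def split_frequency_bands(texture, k=5):
    """Low band = blurred texture, high band = residual.
    By construction, low + high reconstructs the texture exactly."""
    low = box_blur(texture, k)
    return low, texture - low

# A smooth diagonal gradient standing in for a grayscale texture patch.
texture = np.add.outer(np.linspace(0.0, 1.0, 16), np.linspace(0.0, 1.0, 16))
low, high = split_frequency_bands(texture, k=5)
```

Because the bands sum back to the original, each band can be processed (sharpened, smoothed, remapped) independently and recombined without losing texture content.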

Step S104: collect the action data of the target person, and perform model rendering optimization on the digital human to be processed according to the action data to obtain the target digital human model.

需要说明的是,采集目标人物的动作数据。使用高精度的动作捕捉系统记录目标人物的每一个动作和表情。对动作数据进行时序数据匹配,将动作与其发生的时间点关联起来,生成时序数据集。基于时序数据集,对动作数据进行分析,通过识别和提取出动作序列中的关键动作,生成关键动作数据。对关键动作数据进行动作细节扫描,获取动作细节数据集,数据集包含了每个关键动作的细微变化和特征。对动作细节数据集进行细节连贯性分析,评估各个动作细节之间的连贯性,确保动作过渡自然、流畅,无不自然的抖动或突变,得到细节连贯性评价值,该评价值是对动作细节连贯性的量化评估。基于细节连贯性评价值,对动作细节数据集进行细节优化,调整和改善动作数据中那些连贯性不足的部分,使整个动作序列更加流畅自然,优化后得到优化细节数据集。基于优化细节数据集,对待处理数字人进行模型渲染优化,将优化后的动作细节应用到数字人模型上,通过高级渲染技术重现目标人物的动作和表情,最终得到目标数字人模型。It should be noted that the motion data of the target person is collected. A high-precision motion capture system is used to record every motion and expression of the target person. The motion data is matched with time series data, and the motion is associated with the time point when it occurs to generate a time series data set. Based on the time series data set, the motion data is analyzed, and the key motion data is generated by identifying and extracting the key motions in the motion sequence. The key motion data is scanned for motion details to obtain a motion detail data set, which contains the subtle changes and characteristics of each key motion. The motion detail data set is analyzed for detail coherence, and the coherence between the various motion details is evaluated to ensure that the motion transition is natural and smooth without unnatural jitter or mutation, and the detail coherence evaluation value is obtained, which is a quantitative evaluation of the coherence of the motion details. Based on the detail coherence evaluation value, the motion detail data set is optimized for details, and those parts of the motion data with insufficient coherence are adjusted and improved to make the entire motion sequence smoother and more natural. After optimization, an optimized detail data set is obtained. Based on the optimized detail data set, the model rendering optimization is performed on the digital human to be processed, and the optimized motion details are applied to the digital human model. 
The motion and expression of the target person are reproduced through advanced rendering technology, and finally the target digital human model is obtained.
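The time-series matching and key-action extraction described above can be illustrated with a minimal Python sketch. It assumes motion frames are dicts with a timestamp `t` and a joint position `pos`, and treats local peaks of inter-frame speed as key actions; both the data layout and the peak criterion are illustrative assumptions, not the patented method itself.

```python
import math

def match_time_series(frames):
    """Time-series matching (simplified): order motion frames by their timestamp."""
    return sorted(frames, key=lambda f: f["t"])

def extract_key_actions(frames, speed_threshold=0.5):
    """Treat frames at local peaks of inter-frame speed as key actions."""
    speeds = []
    for prev, cur in zip(frames, frames[1:]):
        dt = cur["t"] - prev["t"]
        dist = math.dist(prev["pos"], cur["pos"])
        speeds.append(dist / dt if dt > 0 else 0.0)
    keys = []
    for i in range(1, len(speeds) - 1):
        # A local speed peak above the threshold marks a key-action frame.
        if speeds[i] > speed_threshold and speeds[i] >= speeds[i - 1] and speeds[i] >= speeds[i + 1]:
            keys.append(frames[i + 1])
    return keys
```

In practice the speed threshold and peak test would be replaced by a trained action-recognition model; the sketch only shows how a time-ordered sequence feeds key-action extraction.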

其中,对待处理数字人进行光照模型构建,模拟现实世界中光照对物体的影响,确保数字人模型在不同的光照条件下都能展现出逼真的视觉效果。通过光照模型,生成目标光照模型,该模型考虑了光源的位置、强度、颜色等因素。采用光线追踪算法对目标光照模型进行分析,特别是针对光照缺失区域的分析。光线追踪算法通过模拟光线与物体表面的交互过程,计算出光线的传播、反射和折射,从而有效地识别出在当前光照模型下,哪些区域未能得到充分照明。对光照缺失区域进行专门的光照增强处理,通过调整光照条件或添加虚拟光源等方式,改善这些区域的光照效果,使得整个数字人模型的光照分布更为均匀和自然,得到增强数字人模型。进行阴影边缘检测,理解和模拟阴影对数字人模型的影响,通过分析增强数字人模型的表面和光照模型的关系,从而精确地确定阴影的轮廓。根据目标光照模型对阴影轮廓进行阴影光照参数匹配,通过分析阴影产生的具体条件,如光源位置、光源强度等,确定适当的阴影光照参数集。该参数集用于调整和优化模型中阴影的表现,使其更加真实和自然。基于阴影光照参数集对增强数字人模型进行参数配置,对模型的光照、阴影等视觉效果进行调整,确保模型在任何光照条件下都能展现出最佳的视觉效果,得到配置数字人模型。在此基础上,利用优化细节数据集对配置数字人模型进行最终的模型渲染优化,包括对模型细节的进一步精化,以及对光照和阴影效果的最终调整,确保目标数字人模型在视觉上的完美呈现。Among them, a lighting model is constructed for the digital human to be processed to simulate the effect of light on objects in the real world, ensuring that the digital human model can show realistic visual effects under different lighting conditions. Through the lighting model, a target lighting model is generated, which takes into account factors such as the position, intensity, and color of the light source. The target lighting model is analyzed using a ray tracing algorithm, especially for the analysis of the lighting-missing areas. The ray tracing algorithm calculates the propagation, reflection, and refraction of light by simulating the interaction between light and the surface of the object, thereby effectively identifying which areas are not fully illuminated under the current lighting model. Special lighting enhancement processing is performed on the lighting-missing areas, and the lighting effects of these areas are improved by adjusting the lighting conditions or adding virtual light sources, so that the lighting distribution of the entire digital human model is more uniform and natural, and an enhanced digital human model is obtained. 
Shadow edge detection is performed to understand and simulate the effect of shadows on the digital human model, and the relationship between the surface of the enhanced digital human model and the lighting model is analyzed to accurately determine the outline of the shadow. Shadow lighting parameters are matched for the shadow outline according to the target lighting model, and the appropriate shadow lighting parameter set is determined by analyzing the specific conditions for the shadow generation, such as the position of the light source, the intensity of the light source, etc. This parameter set is used to adjust and optimize the performance of shadows in the model to make it more realistic and natural. Based on the shadow lighting parameter set, the enhanced digital human model is configured with parameters, and the model's lighting, shadow and other visual effects are adjusted to ensure that the model can show the best visual effects under any lighting conditions, thus obtaining a configured digital human model. On this basis, the optimized detail data set is used to perform the final model rendering optimization on the configured digital human model, including further refinement of the model details and final adjustment of the lighting and shadow effects, to ensure the perfect visual presentation of the target digital human model.
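As a rough illustration of how a lighting model can flag under-lit regions before enhancement, the sketch below computes a Lambert diffuse term per surface point and collects points whose total intensity falls below a threshold. The point/light dictionaries and the 0.2 threshold are hypothetical; a production renderer would trace rays against actual geometry rather than use this normal-only test.

```python
def lambert_intensity(normal, light_dir, strength=1.0):
    """Diffuse term of a simple lighting model: max(0, n . l) * strength."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot) * strength

def find_underlit(points, lights, threshold=0.2):
    """Flag surface points whose summed diffuse intensity is below the
    threshold; these stand in for the 'lighting-missing areas'."""
    missing = []
    for p in points:
        total = sum(lambert_intensity(p["normal"], lt["dir"], lt["strength"]) for lt in lights)
        if total < threshold:
            missing.append(p["id"])
    return missing
```

Lighting enhancement would then amount to appending a virtual light to `lights` until `find_underlit` returns an empty list for the regions of interest.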

其中,对优化细节数据集进行动作类型分析,从已优化的细节数据集中识别出不同的动作类型,比如行走、跳跃或是其他特定的动作表现,得到一个全面覆盖各种动作类型的集合。这为后续的数据处理和模型渲染提供了基础分类标准,确保针对不同类型动作的优化措施能够精准施加。基于动作类型集合,将优化细节数据集进行数据分组,将数据根据其所属的动作类型进行分类,得到多组针对特定动作的细节数据。分组有助于系统化地处理数据,还能针对不同的动作类型采取最合适的渲染和优化策略,提高整个优化过程的效率和效果。对每组动作细节数据进行动作参数分析,理解每种动作的特征和要求,通过分析得到每组动作细节数据的动作参数集。这些动作参数包括动作的速度、幅度、持续时间等关键因素,它们对于重现准确和自然的动作表现至关重要。通过动作参数分析,确保对不同动作的渲染能够尽可能地贴近真实表现,增强了数字人模型的真实感和动态表现力。基于每组动作细节数据的动作参数,对配置数字人模型进行模型渲染优化。将分析得到的动作参数应用于配置数字人模型的具体渲染过程中,确保每一种动作都能在模型上以最逼真的形式呈现,得到目标数字人模型。Among them, the action type analysis is performed on the optimized detail data set, and different action types, such as walking, jumping or other specific action performances, are identified from the optimized detail data set to obtain a set that comprehensively covers various action types. This provides a basic classification standard for subsequent data processing and model rendering, ensuring that optimization measures for different types of actions can be accurately applied. Based on the action type set, the optimized detail data set is grouped, and the data is classified according to the action type to which it belongs, to obtain multiple groups of detail data for specific actions. Grouping helps to process data systematically, and can also adopt the most appropriate rendering and optimization strategies for different action types, thereby improving the efficiency and effectiveness of the entire optimization process. Action parameter analysis is performed on each group of action detail data to understand the characteristics and requirements of each action, and the action parameter set of each group of action detail data is obtained through analysis. These action parameters include key factors such as the speed, amplitude, and duration of the action, which are essential for reproducing accurate and natural action performance. 
Through action parameter analysis, it is ensured that the rendering of different actions can be as close to the real performance as possible, enhancing the realism and dynamic expression of the digital human model. Based on the action parameters of each group of action detail data, the model rendering optimization of the configured digital human model is performed. The action parameters obtained by analysis are applied to the specific rendering process of configuring the digital human model to ensure that each action can be presented in the most realistic form on the model to obtain the target digital human model.
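The grouping-then-parameter-analysis step might look like the following sketch, where each optimized detail record carries a `type` label, a timestamp `t` and a scalar `value`; the duration/amplitude/speed summary is one plausible parameter set, not the exhaustive one described above.

```python
def group_by_action_type(detail_records):
    """Group optimized detail records by their action type label."""
    groups = {}
    for rec in detail_records:
        groups.setdefault(rec["type"], []).append(rec)
    return groups

def action_parameters(group):
    """Derive a simple parameter set (duration, amplitude, speed) for one group."""
    t0 = min(r["t"] for r in group)
    t1 = max(r["t"] for r in group)
    amp = max(r["value"] for r in group) - min(r["value"] for r in group)
    duration = t1 - t0
    speed = amp / duration if duration > 0 else 0.0
    return {"duration": duration, "amplitude": amp, "speed": speed}
```

A renderer could then pick interpolation and motion-blur settings per group based on these parameters, e.g. shorter blending windows for high-speed groups such as jumps.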

可以理解的是，本申请的执行主体可以为元宇宙数字人的渲染装置，还可以是终端或者服务器，具体此处不做限定。本申请实施例以服务器为执行主体为例进行说明。It should be understood that the execution subject of the present application may be a rendering apparatus for a metaverse digital human, or may be a terminal or a server, which is not specifically limited here. The embodiments of the present application are described with the server as the execution subject by way of example.

本申请实施例中,通过使用预置的高分辨率三维扫描装置进行目标人物的三维数据扫描,能够捕捉到人物细微的几何特征,确保了数字人模型的高度真实性和精确性。这为构建高质量的元宇宙数字人提供了坚实的基础。通过对初始点云数据进行下采样、构建图结构、提取结构数据、进行噪声点识别和去噪处理,以及数据对齐,有效地优化了点云数据的质量。这不仅提升了数据处理的效率,还保证了最终数字人模型的几何细节的清晰度和准确性。采用自适应网格重建算法和泊松表面重建算法,结合细节增强处理和多视角融合优化,能够根据点云数据的分布自动调整网格,填充空洞缺陷,并增强模型细节。这一系列优化措施大大提高了数字人模型的视觉真实性和美观度。通过纹理图像的采集、细节提取、几何特征分析和纹理映射算法,能够精确地配置数字人模型的纹理属性。特别是结合高频与低频细节数据,以及模型曲率分析,保证了纹理的自然贴合和视觉效果的一致性。采集并分析目标人物的动作数据,通过时序匹配、关键动作提取、动作细节扫描和连贯性分析,优化了动作细节的自然度和流畅性。这确保了数字人在元宇宙中的动作表现既真实又自然。构建目标光照模型并通过光线追踪技术优化光照效果,结合阴影轮廓检测和光照参数匹配,进一步增强了数字人模型的立体感和在场感。通过模型渲染优化,确保了数字人模型在各种光照条件下均展现出高度真实和吸引人的视觉效果。In the embodiment of the present application, by using a preset high-resolution three-dimensional scanning device to scan the three-dimensional data of the target person, the subtle geometric features of the person can be captured, ensuring the high authenticity and accuracy of the digital human model. This provides a solid foundation for building a high-quality metaverse digital human. By downsampling the initial point cloud data, building a graph structure, extracting structural data, identifying and denoising noise points, and aligning data, the quality of the point cloud data is effectively optimized. This not only improves the efficiency of data processing, but also ensures the clarity and accuracy of the geometric details of the final digital human model. Adopting an adaptive mesh reconstruction algorithm and a Poisson surface reconstruction algorithm, combined with detail enhancement processing and multi-view fusion optimization, the mesh can be automatically adjusted according to the distribution of point cloud data, void defects can be filled, and model details can be enhanced. This series of optimization measures greatly improves the visual authenticity and aesthetics of the digital human model. Through the acquisition of texture images, detail extraction, geometric feature analysis and texture mapping algorithms, the texture properties of the digital human model can be accurately configured. 
In particular, the combination of high-frequency and low-frequency detail data, as well as model curvature analysis, ensures the natural fit of the texture and the consistency of the visual effect. The target person's motion data is collected and analyzed. Through timing matching, key motion extraction, motion detail scanning and coherence analysis, the naturalness and smoothness of the motion details are optimized. This ensures that the digital human's motion performance in the metaverse is both realistic and natural. The target lighting model is constructed and the lighting effect is optimized through ray tracing technology. Combined with shadow contour detection and lighting parameter matching, the three-dimensional sense and presence of the digital human model are further enhanced. Through model rendering optimization, it is ensured that the digital human model shows highly realistic and attractive visual effects under various lighting conditions.

在一具体实施例中,执行步骤S101的过程可以具体包括如下步骤:In a specific embodiment, the process of executing step S101 may specifically include the following steps:

(1)通过高分辨率三维扫描装置对目标人物进行三维数据扫描,得到初始点云数据;(1) Scanning the target person in three dimensions using a high-resolution three-dimensional scanning device to obtain initial point cloud data;

(2)对初始点云数据进行下采样,得到采样点云数据;(2) down-sampling the initial point cloud data to obtain sampled point cloud data;

(3)根据采样点云数据构建图结构,并提取图结构的结构数据;(3) construct a graph structure based on the sampled point cloud data and extract the structural data of the graph structure;

(4)将结构数据输入预置的图神经网络进行噪声点识别,得到噪声点数据;(4) Inputting the structural data into a preset graph neural network to identify noise points and obtain noise point data;

(5)基于噪声点数据对采样点云数据进行去噪,得到去噪点云数据;(5) De-noising the sampled point cloud data based on the noise point data to obtain de-noised point cloud data;

(6)对去噪点云数据进行数据对齐处理,得到点云数据。(6) Perform data alignment on the denoised point cloud data to obtain point cloud data.

具体的,通过高分辨率三维扫描装置对目标人物进行三维数据扫描。将真实世界中的人物转换成数字形式的初始点云数据。高分辨率三维扫描装置通过发射激光或者结构光并捕获其反射回来的光线,测量目标人物表面的距离,生成代表人物表面形状的点云数据。这些初始点云数据包含了大量的空间坐标点,每一个点都是目标人物表面的一个采样点。由于直接生成的点云数据通常非常庞大,为了提高后续处理步骤的效率,对初始点云数据进行下采样。下采样的过程涉及到减少点云中的点的数量,同时尽量保留原始形状的重要特征。通过算法,如体素网格过滤或随机采样等,在不显著损失模型细节的前提下,减少数据处理的负担。根据下采样后的点云数据构建图结构,并提取图结构的结构数据。理解和处理点云数据之间的空间关系,通过将点云数据转换成图结构,利用图中节点和边来表示点云中的点以及它们之间的关系。这种转换使得后续能够利用图算法分析和处理点云数据,例如通过图的遍历来识别连通区域或者利用图的属性来进行特征提取。将图结构的结构数据输入到预置的图神经网络中,进行噪声点的识别。图神经网络是一种专门处理图结构数据的深度学习模型,可以学习到图中节点的高维特征表示,并基于这些学习到的特征来执行各种任务,如节点分类、图分类等。本实施例中,图神经网络被用来识别噪声点,即那些不属于目标人物主体的点,可能是由于扫描环境干扰或其他因素导致的误差点。基于噪声点数据对采样点云数据进行去噪处理,清除那些不必要的噪声点,得到目标人物三维形态的点云数据。去噪过程可以通过多种算法实现,如统计滤波、半径滤波等,这些算法根据点云数据的分布特征来判断哪些点可能是噪声,并将其移除。对去噪后的点云数据进行数据对齐处理,确保点云数据能够准确对应到目标人物的真实三维空间中。数据对齐,也称为点云配准,涉及到使用算法调整点云的方向和位置,以使得多个点云数据集能够在同一坐标系统中正确地融合在一起。Specifically, a high-resolution three-dimensional scanning device is used to perform a three-dimensional data scan on the target person. The person in the real world is converted into initial point cloud data in digital form. The high-resolution three-dimensional scanning device emits laser or structured light and captures the light reflected back, measures the distance of the surface of the target person, and generates point cloud data representing the surface shape of the person. These initial point cloud data contain a large number of spatial coordinate points, each of which is a sampling point on the surface of the target person. Since the directly generated point cloud data is usually very large, in order to improve the efficiency of subsequent processing steps, the initial point cloud data is downsampled. The downsampling process involves reducing the number of points in the point cloud while retaining the important features of the original shape as much as possible. Through algorithms such as voxel grid filtering or random sampling, the burden of data processing is reduced without significantly losing model details. 
A graph structure is constructed based on the downsampled point cloud data, and the structural data of the graph structure is extracted. The spatial relationship between the point cloud data is understood and processed by converting the point cloud data into a graph structure, and using nodes and edges in the graph to represent the points in the point cloud and the relationship between them. This conversion enables the subsequent use of graph algorithms to analyze and process the point cloud data, such as identifying connected areas by traversing the graph or using the attributes of the graph to extract features. The structural data of the graph structure is input into the preset graph neural network to identify noise points. The graph neural network is a deep learning model that specializes in processing graph structure data. It can learn the high-dimensional feature representation of the nodes in the graph and perform various tasks based on these learned features, such as node classification, graph classification, etc. In this embodiment, the graph neural network is used to identify noise points, that is, those points that do not belong to the main body of the target person, which may be error points caused by interference from the scanning environment or other factors. The sampled point cloud data is denoised based on the noise point data, and those unnecessary noise points are removed to obtain the point cloud data of the three-dimensional form of the target person. The denoising process can be implemented by a variety of algorithms, such as statistical filtering, radius filtering, etc. These algorithms determine which points may be noise based on the distribution characteristics of the point cloud data and remove them. The denoised point cloud data is aligned to ensure that the point cloud data can accurately correspond to the real three-dimensional space of the target person. 
Data alignment, also known as point cloud registration, involves using an algorithm to adjust the direction and position of the point cloud so that multiple point cloud data sets can be correctly fused together in the same coordinate system.
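The downsampling and graph-construction steps can be sketched in pure Python as voxel-grid filtering followed by a brute-force k-nearest-neighbour graph. Real pipelines would use a point-cloud library with spatial indexing; the voxel size, `k`, and O(n²) neighbour search here are illustrative only.

```python
import math
from collections import defaultdict

def voxel_downsample(points, voxel=1.0):
    """Voxel-grid filtering: keep one centroid point per occupied voxel cell."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(math.floor(c / voxel)) for c in p)
        cells[key].append(p)
    out = []
    for pts in cells.values():
        n = len(pts)
        out.append(tuple(sum(c[i] for c in pts) / n for i in range(3)))
    return out

def knn_graph(points, k=2):
    """Undirected edges from each point to its k nearest neighbours."""
    edges = set()
    for i, p in enumerate(points):
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))
    return edges
```

The resulting node/edge structure is the kind of graph data that would be fed to a graph neural network for noise-point classification; removing the flagged nodes is then the denoising step.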

在一具体实施例中,执行步骤S102的过程可以具体包括如下步骤:In a specific embodiment, the process of executing step S102 may specifically include the following steps:

(1)对点云数据进行点云分布区域拆分,得到多个点云分布区域;(1) splitting the point cloud data into point cloud distribution areas to obtain multiple point cloud distribution areas;

(2)分别对每个点云分布区域进行点云分布密度分析,得到每个点云分布区域的点云分布密度;(2) performing point cloud distribution density analysis on each point cloud distribution area respectively to obtain the point cloud distribution density of each point cloud distribution area;

(3)将每个点云分布区域的点云分布密度输入预置的自适应网格重建算法进行网格属性匹配,得到网格属性数据;(3) inputting the point cloud distribution density of each point cloud distribution area into a preset adaptive mesh reconstruction algorithm for mesh attribute matching to obtain mesh attribute data;

(4)基于网格属性数据,通过预置的泊松表面重建算法在元宇宙中对目标人物进行数字人模型构建,得到第一数字人模型;(4) Based on the grid attribute data, a digital human model of the target person is constructed in the metaverse by using a preset Poisson surface reconstruction algorithm to obtain a first digital human model;

(5)对第一数字人模型进行空洞缺陷填充,得到第二数字人模型;(5) filling the void defects of the first digital human model to obtain a second digital human model;

(6)对第二数字人模型进行细节增强处理,得到第三数字人模型;(6) performing detail enhancement processing on the second digital human model to obtain a third digital human model;

(7)对第三数字人模型进行多视角融合优化,得到初始数字人模型。(7) Perform multi-perspective fusion optimization on the third digital human model to obtain an initial digital human model.

具体的,对点云数据进行点云分布区域拆分,将点云数据集分割成多个小的区域。不同区域的点云可能代表了人物模型的不同部分,如头部、手臂或躯干等,每个部分在几何形态和细节上都有所不同。例如,通过设定一定的空间阈值或利用聚类算法,可以根据点云中各点的空间邻近性将其划分为若干个区域。对每个点云分布区域进行点云分布密度分析,评估每个区域内点的分布紧密程度,即每个区域的点云密度。点云密度直接影响到后续重建过程中的精细程度和重建难易程度,因为密度高的区域通常包含更多的细节信息。例如,人物模型的面部区域往往需要更高的点云密度以精确捕捉面部特征。将每个点云分布区域的点云分布密度输入预置的自适应网格重建算法中进行网格属性匹配。自适应网格重建算法的作用是基于点云密度自动调整网格的大小和密度,以保证在细节丰富的区域使用更细小的网格,而在细节较少的区域使用较大的网格,从而有效地平衡重建过程中的细节捕捉与计算资源消耗。这一过程生成的网格属性数据包含了每个区域网格的尺寸、形状和密度等信息。基于网格属性数据,使用预置的泊松表面重建算法对目标人物进行数字人模型构建。泊松表面重建算法是一种高效的算法,能够根据点云数据和网格属性生成连续、光滑的表面,通过数学上的泊松方程求解表面的最优形态,使得重建的数字人模型既精确又平滑。从而得到第一数字人模型,它是基于原始点云数据和经过优化的网格属性重建的初步模型。第一数字人模型可能存在空洞和缺陷,这是由于原始点云数据的不完整或扫描盲区引起的。因此,接下来对这个模型进行空洞缺陷填充,使用算法自动检测并填补这些空洞,确保模型的完整性。经过填充处理后,得到第二数字人模型。为了进一步提升模型的质量,对第二数字人模型进行细节增强处理。对模型表面的微小纹理和特征进行处理,如增加皮肤纹理、服饰细节等,使模型看起来更加逼真和详细,得到第三数字人模型。对第三数字人模型进行多视角融合优化。通过多视角融合优化,调整和优化模型的全局一致性,确保模型无论从哪个角度查看都是完整和自然的,最终得到初始数字人模型。Specifically, the point cloud data is split into multiple small areas by point cloud distribution area splitting. The point clouds in different areas may represent different parts of the character model, such as the head, arms or torso, and each part is different in geometric form and details. For example, by setting a certain spatial threshold or using a clustering algorithm, the point cloud can be divided into several areas according to the spatial proximity of each point in the point cloud. The point cloud distribution density is analyzed for each point cloud distribution area to evaluate the distribution density of the points in each area, that is, the point cloud density of each area. The point cloud density directly affects the degree of refinement and reconstruction difficulty in the subsequent reconstruction process, because high-density areas usually contain more detailed information. For example, the facial area of the character model often requires a higher point cloud density to accurately capture facial features. 
The point cloud distribution density of each point cloud distribution area is input into the preset adaptive mesh reconstruction algorithm for mesh attribute matching. The role of the adaptive mesh reconstruction algorithm is to automatically adjust the size and density of the mesh based on the point cloud density to ensure that a finer mesh is used in areas with rich details, and a larger mesh is used in areas with fewer details, thereby effectively balancing the capture of details and the consumption of computing resources in the reconstruction process. The mesh attribute data generated by this process contains information such as the size, shape and density of each area mesh. Based on the mesh attribute data, the preset Poisson surface reconstruction algorithm is used to construct a digital human model of the target person. The Poisson surface reconstruction algorithm is an efficient algorithm that can generate a continuous and smooth surface based on point cloud data and mesh attributes. The optimal form of the surface is solved by the mathematical Poisson equation, so that the reconstructed digital human model is both accurate and smooth. Thus, the first digital human model is obtained, which is a preliminary model reconstructed based on the original point cloud data and the optimized mesh attributes. The first digital human model may have holes and defects, which are caused by the incompleteness of the original point cloud data or by scanning blind areas. Therefore, hole-and-defect filling is then performed on the model: an algorithm automatically detects and fills these holes to ensure the integrity of the model. After filling, the second digital human model is obtained. In order to further improve the quality of the model, the second digital human model is subjected to detail enhancement. 
The tiny textures and features on the surface of the model are processed, such as adding skin texture, clothing details, etc., to make the model look more realistic and detailed, and the third digital human model is obtained. The third digital human model is optimized by multi-view fusion. Through multi-perspective fusion optimization, the global consistency of the model is adjusted and optimized to ensure that the model is complete and natural no matter from which angle it is viewed, and finally the initial digital human model is obtained.
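One way to read the "adaptive mesh" idea above is that cell size shrinks where point density is high. A toy version follows, with density measured as points per unit volume and a cube-root scaling law; the formula is an assumption for illustration, since the patent does not specify one.

```python
def region_density(points, bounds_volume):
    """Points per unit volume inside one point cloud distribution region."""
    return len(points) / bounds_volume

def adaptive_cell_size(density, base=1.0, ref_density=1.0):
    """Denser regions get finer cells: in 3D, halving the spacing multiplies
    the density by 8, hence the cube-root scaling relative to a reference."""
    return base / (density / ref_density) ** (1.0 / 3.0)
```

A reconstruction front end could compute `adaptive_cell_size` per region (e.g. fine cells on the face, coarse cells on the torso) before handing the meshing problem to a Poisson surface reconstructor.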

在一具体实施例中,执行步骤S103的过程可以具体包括如下步骤:In a specific embodiment, the process of executing step S103 may specifically include the following steps:

(1)对初始数字人进行纹理图像采集,得到模型纹理图像数据;(1) Collecting texture images of the initial digital human to obtain model texture image data;

(2)对模型纹理图像数据进行高频细节提取,得到高频细节数据;(2) extracting high-frequency details from the model texture image data to obtain high-frequency detail data;

(3)对模型纹理图像数据进行低频细节提取,得到低频细节数据;(3) extracting low-frequency details from the model texture image data to obtain low-frequency detail data;

(4)对初始数字人进行几何特征分析,得到初始数字人的几何特征集;(4) Analyze the geometric features of the initial digital human to obtain a geometric feature set of the initial digital human;

(5)基于高频细节数据以及低频细节数据,对初始数字人进行模型曲率分析,得到模型曲率数据;(5) Based on the high-frequency detail data and the low-frequency detail data, the model curvature analysis is performed on the initial digital human to obtain the model curvature data;

(6)将几何特征集以及模型曲率数据输入预置的纹理映射算法进行纹理属性映射,得到目标纹理属性;(6) inputting the geometric feature set and the model curvature data into a preset texture mapping algorithm for texture attribute mapping to obtain target texture attributes;

(7)基于目标纹理属性,对初始数字人进行纹理属性配置,得到待处理数字人模型。(7) Based on the target texture attributes, the texture attributes of the initial digital human are configured to obtain the digital human model to be processed.

具体的,对初始数字人进行纹理图像采集,收集用于覆盖数字人模型表面的实际图像数据,这些图像数据可以从现实世界中直接采集,比如通过高分辨率相机拍摄目标人物的皮肤、服装等,或者从图像库中选择适合的纹理。对模型纹理图像数据进行高频和低频细节提取。高频细节提取关注于图像中的微小变化,如皮肤纹理的细微裂纹或服装的细节纹理,这些细节在视觉上有助于增加数字人物的真实感。同时,低频细节提取则着重于图像中的大范围颜色渐变和较大的纹理模式,这些信息有助于构建数字人物的基础外观和整体感觉。对初始数字人进行几何特征分析,获取初始数字人的几何特征集。通过分析数字人模型的形状、曲线和其他几何属性,为后续的纹理映射提供重要的参考信息。几何特征集能够帮助识别模型的关键区域和特定特征,如脸部轮廓、肌肉线条等。基于高频细节数据和低频细节数据,对初始数字人进行模型曲率分析。获得模型表面的几何变化,包括凹凸、弯曲等,得到模型曲率数据。这些数据后续有助于纹理映射过程中确保纹理图像能够准确贴合模型表面的不同曲率,因为纹理在曲面上的展现需要根据曲率进行适当调整,以防止拉伸或压缩导致的失真。将几何特征集和模型曲率数据输入到预置的纹理映射算法中。算法根据模型的几何和曲率特征自动调整纹理图像,使之与模型表面精准匹配,生成目标纹理属性。目标纹理属性包括纹理在模型上的具体放置、方向、伸缩等信息。基于目标纹理属性,对初始数字人进行纹理属性配置,得到待处理数字人模型。将纹理图像准确地映射到数字人模型的各个部分,包括细节的调整和优化,以确保纹理的自然过渡和无缝拼接。Specifically, texture image acquisition is performed on the initial digital human to collect actual image data used to cover the surface of the digital human model. These image data can be directly collected from the real world, such as photographing the target person's skin, clothing, etc. with a high-resolution camera, or selecting suitable textures from an image library. High-frequency and low-frequency detail extraction is performed on the model texture image data. High-frequency detail extraction focuses on small changes in the image, such as fine cracks in the skin texture or detailed texture of clothing. These details help to increase the realism of the digital person visually. At the same time, low-frequency detail extraction focuses on large-scale color gradients and larger texture patterns in the image. This information helps to build the basic appearance and overall feeling of the digital person. The geometric feature analysis of the initial digital human is performed to obtain the geometric feature set of the initial digital human. By analyzing the shape, curves and other geometric properties of the digital human model, important reference information is provided for subsequent texture mapping. 
The geometric feature set can help identify the key areas and specific features of the model, such as facial contours, muscle lines, etc. Based on the high-frequency detail data and low-frequency detail data, the model curvature analysis is performed on the initial digital human. The geometric changes on the model surface, such as concavity, convexity and bending, are measured to obtain the model curvature data. These data will subsequently help ensure that the texture image can accurately fit the different curvatures of the model surface during the texture mapping process, because the display of the texture on a curved surface needs to be appropriately adjusted according to the curvature to prevent distortion caused by stretching or compression. The geometric feature set and model curvature data are input into the preset texture mapping algorithm. The algorithm automatically adjusts the texture image according to the geometric and curvature features of the model to accurately match the model surface and generate the target texture properties. The target texture properties include the specific placement, direction, and scaling of the texture on the model. Based on the target texture properties, the texture properties of the initial digital human are configured to obtain the digital human model to be processed. The texture image is accurately mapped to each part of the digital human model, including the adjustment and optimization of details to ensure the natural transition and seamless splicing of the texture.
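The high/low-frequency split of the texture image can be approximated with a box blur: the blurred image is the low-frequency layer and the residual is the high-frequency layer, so the two layers sum back to the original. This is a generic frequency-separation sketch on a grayscale grid, not the specific extraction used in the embodiment.

```python
def box_blur(img, radius=1):
    """Mean filter; the blurred image stands in for the low-frequency layer."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - radius), min(h, y + radius + 1))
                    for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def split_frequencies(img, radius=1):
    """Low = blur(img); high = img - low, so low + high reconstructs img."""
    low = box_blur(img, radius)
    high = [[img[y][x] - low[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
    return low, high
```

The low layer then carries broad color gradients while the high layer carries fine texture such as skin pores, matching the division of roles described above.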

在一具体实施例中,执行步骤S104的过程可以具体包括如下步骤:In a specific embodiment, the process of executing step S104 may specifically include the following steps:

(1)采集目标人物的动作数据,并对动作数据进行时序数据匹配,得到时序数据集;(1) Collect the target person’s motion data and perform time series data matching on the motion data to obtain a time series data set;

(2)基于时序数据集对动作数据进行关键动作提取,得到关键动作数据;(2) Extract key actions from action data based on the time series data set to obtain key action data;

(3)对关键动作数据进行动作细节扫描,得到动作细节数据集;(3) Scanning the key action data for action details to obtain an action detail data set;

(4)对动作细节数据集进行细节连贯性分析,得到细节连贯性评价值;(4) Perform detail coherence analysis on the action detail data set to obtain a detail coherence evaluation value;

(5)基于细节连贯性评价值对动作细节数据集进行细节优化,得到优化细节数据集;(5) Optimizing the details of the action detail dataset based on the detail coherence evaluation value to obtain an optimized detail dataset;

(6)基于优化细节数据集对待处理数字人进行模型渲染优化,得到目标数字人模型。(6) Based on the optimized detail data set, the model rendering of the digital human to be processed is optimized to obtain the target digital human model.

具体的,采集目标人物的动作数据。通过高精度的动作捕捉系统装配在人体各关键部位的传感器记录人物的每一个动作和姿态,包括走路、跳跃、面部表情等。这些传感器能够捕获人体动作的微小变化,并将其转化为数字信号,形成初始的动作数据集。进行时序数据匹配。将捕获到的动作数据按照时间顺序进行排序和同步,确保动作的连贯性和逻辑性。时序数据匹配的过程包括分析每个动作的时间戳,将它们按照发生的先后顺序排列,以形成一个连续流畅的动作序列。基于时序数据集进行关键动作提取。关键动作是指在整个动作序列中起到承上启下作用的动作,它们是理解和还原目标人物动作模式的关键。例如,在一系列走路动作中,每一步的着地和起步都可能被视为关键动作。通过特定算法,如动作识别和模式分析等,从复杂的动作序列中提取出这些关键动作数据。提取关键动作数据后,对这些数据进行动作细节扫描,获取动作细节数据集。分析每个关键动作中的细微变化和特征,比如手部的姿态变化、面部表情的细节等。对动作细节数据集进行细节连贯性分析,确保动作细节在整个动作序列中的自然过渡和一致性,避免动作间出现不自然的跳跃或突变。通过比对动作细节数据集中的连续动作细节,计算出细节连贯性评价值,这个评价值反映了动作序列的流畅度和自然度。基于细节连贯性评价值,对动作细节数据集进行细节优化。包括调整动作间的过渡,平滑不自然的动作断点,以及增强动作的自然流畅性,确保每一个动作都以最自然的方式呈现。优化后得到优化动作细节数据集。基于优化后的动作细节数据集,对待处理的数字人进行模型渲染优化,得到最终的目标数字人模型。将优化后的动作数据应用到数字人模型上,通过高级渲染技术和算法,确保模型不仅在静态状态下的外观精确无误,而且在动态表现上也能够真实反映目标人物的动作和表情。Specifically, the motion data of the target person is collected. The sensors installed on the key parts of the human body through the high-precision motion capture system record every movement and posture of the person, including walking, jumping, facial expressions, etc. These sensors can capture the slight changes in human body movements and convert them into digital signals to form an initial motion data set. Time series data matching is performed. The captured motion data are sorted and synchronized in chronological order to ensure the coherence and logic of the movements. The process of time series data matching includes analyzing the timestamp of each action and arranging them in the order of occurrence to form a continuous and smooth sequence of actions. Key actions are extracted based on the time series data set. Key actions refer to actions that play a connecting role in the entire action sequence. They are the key to understanding and restoring the target person's action pattern. For example, in a series of walking actions, the landing and starting of each step may be regarded as key actions. 
These key action data are extracted from complex action sequences through specific algorithms, such as action recognition and pattern analysis. After extracting the key action data, the action details are scanned to obtain the action detail data set. The subtle changes and features in each key action are analyzed, such as changes in hand posture and details of facial expressions. The action detail data set is analyzed for detail coherence to ensure the natural transition and consistency of the action details in the entire action sequence, and to avoid unnatural jumps or abrupt changes between actions. By comparing the continuous action details in the action detail data set, the detail coherence evaluation value is calculated, which reflects the fluency and naturalness of the action sequence. Based on the detail coherence evaluation value, the action detail data set is subjected to detail optimization. This includes adjusting the transitions between actions, smoothing unnatural action breakpoints, and enhancing the natural fluency of actions, ensuring that each action is presented in the most natural way. After optimization, an optimized action detail data set is obtained. Based on the optimized action detail data set, the model rendering optimization is performed on the digital human to be processed to obtain the final target digital human model. The optimized action data is applied to the digital human model, and through advanced rendering technology and algorithms, it is ensured that the model is not only accurate in appearance in a static state, but also truly reflects the actions and expressions of the target person in dynamic performance.
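The coherence evaluation and detail optimization loop can be caricatured on a single scalar motion channel: score jitter as the mean absolute second difference, then smooth until the score improves. The second-difference metric and three-point averaging are stand-in choices; the embodiment does not fix a particular formula.

```python
def coherence_score(values):
    """Lower is smoother: mean absolute second difference of the sequence,
    a proxy for jitter / abrupt changes between consecutive details."""
    second_diffs = [abs(values[i + 1] - 2 * values[i] + values[i - 1])
                    for i in range(1, len(values) - 1)]
    return sum(second_diffs) / len(second_diffs) if second_diffs else 0.0

def smooth(values, passes=1):
    """3-point moving average, keeping the endpoints fixed."""
    out = list(values)
    for _ in range(passes):
        out = ([out[0]]
               + [(out[i - 1] + out[i] + out[i + 1]) / 3 for i in range(1, len(out) - 1)]
               + [out[-1]])
    return out
```

A detail-optimization pass would keep smoothing only those segments whose score exceeds a tolerance, so well-behaved parts of the sequence are left untouched.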

在一具体实施例中,执行对待处理数字人进行模型渲染优化步骤的过程可以具体包括如下步骤:In a specific embodiment, the process of performing the model rendering optimization step on the digital human to be processed may specifically include the following steps:

(1)对待处理数字人进行光照模型构建,得到目标光照模型;(1) constructing a lighting model for the digital human to be processed and obtaining a target lighting model;

(2)通过光线追踪算法对目标光照模型进行光照缺失区域分析,得到光照缺失区域;(2) Analyze the illumination missing area of the target illumination model through a ray tracing algorithm to obtain the illumination missing area;

(3)对光照缺失区域进行光照增强,得到增强数字人模型;(3) Enhance the illumination of the area lacking illumination to obtain an enhanced digital human model;

(4)对增强数字人模型进行阴影边缘检测,得到阴影轮廓;(4) Perform shadow edge detection on the enhanced digital human model to obtain the shadow outline;

(5)根据目标光照模型,对阴影轮廓进行阴影光照参数匹配,得到阴影光照参数集;(5) According to the target illumination model, shadow illumination parameters are matched on the shadow contour to obtain a shadow illumination parameter set;

(6)基于阴影光照参数集对增强数字人模型进行参数配置,得到配置数字人模型;(6) configuring parameters of the enhanced digital human model based on the shadow and illumination parameter set to obtain a configured digital human model;

(7)基于优化细节数据集对配置数字人模型进行模型渲染优化,得到目标数字人模型。(7) Based on the optimized detail data set, the configured digital human model is optimized for model rendering to obtain the target digital human model.

Specifically, a lighting model is first constructed for the digital human to be processed. It simulates how real-world light sources illuminate an object, including each source's direction, intensity, and color, as well as how light reflects and refracts off different material surfaces. For example, a scene lighting model may need to account for several sources, such as sunlight and indoor lamps, and for how each interacts with the digital character. The target lighting model is then analyzed with a ray-tracing algorithm, in particular to identify under-lit regions. Ray tracing simulates light propagation and its interaction with objects: it computes precisely how rays leave a light source, undergo reflection, refraction, and other processes, and finally reach the observer's eyes. In the process, the algorithm flags regions that are not adequately illuminated, i.e., the light-deficit areas. Those areas are then brightened, either by adjusting the lighting model or by adding virtual light sources directly, so that the digital character is lit evenly and reasonably both overall and locally. Next, shadow edge detection identifies the shadow boundaries, i.e., the shadow contours, in the enhanced digital human model. A shadow is the dark region formed where light is blocked by an object; its shape, size, and edge sharpness reinforce the scene's sense of depth and volume, and edge detection yields more accurate contour information. Shadow lighting parameters are then matched to the contours according to the target lighting model: the relationship between each contour and the lighting model is analyzed, and properties such as shadow darkness and softness are adjusted so that the shadows stay consistent with the overall illumination, producing a shadow lighting parameter set. Based on that set, the enhanced digital human model is configured, including adjusting surface glossiness, transparency, and other material properties to suit the lighting model and shadow effects. Finally, model-rendering optimization is applied to the configured model using the optimized detail data set, yielding the target digital human model: all earlier optimization results are merged onto the model, and rendering techniques such as advanced shading, texture mapping, detail enhancement, and lighting adjustment are applied so that the model looks realistic not only in static views but also remains coherent and lifelike in dynamic presentation.
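The under-lit-region identification and shadow-edge detection described above can be sketched in a few lines, assuming the rendered frame is available as a 2D luminance array; the `underlit_mask` threshold, the function names, and the toy frame below are illustrative stand-ins, not values taken from the method itself.

```python
import numpy as np

def underlit_mask(luminance, threshold=0.15):
    """Flag pixels whose luminance falls below the threshold
    (a simple stand-in for the ray-traced light-deficit analysis)."""
    return luminance < threshold

def shadow_edges(shadow_mask):
    """Detect shadow-contour pixels as sign changes in a binary
    shadow mask (finite-difference edge detection)."""
    m = shadow_mask.astype(np.int8)
    dy = np.abs(np.diff(m, axis=0, prepend=m[:1]))
    dx = np.abs(np.diff(m, axis=1, prepend=m[:, :1]))
    return (dx | dy).astype(bool)

# Toy 4x4 luminance frame: left half dark, right half lit
lum = np.array([[0.05, 0.05, 0.8, 0.9]] * 4)
dark = underlit_mask(lum)
edges = shadow_edges(dark)
print(dark[0].tolist())   # [True, True, False, False]
print(edges[0].tolist())  # [False, False, True, False]
```

In a full pipeline the mask would come from ray-traced visibility rather than a fixed threshold, but the contour extraction step is the same idea.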

In a specific embodiment, the process of performing the step of model-rendering optimization on the configured digital human model based on the optimized detail data set may specifically include the following steps:

(1) Perform action-type analysis on the optimized detail data set to obtain an action type set;

(2) Group the optimized detail data set based on the action type set to obtain multiple groups of action detail data;

(3) Perform action-parameter analysis on each group of action detail data to obtain an action parameter set for each group;

(4) Perform model-rendering optimization on the configured digital human model based on the action parameters of each group of action detail data to obtain the target digital human model.

Specifically, action-type analysis is performed on the optimized detail data set to identify and classify the target person's actions, such as walking, jumping, or waving. Based on the resulting action type set, the data set is grouped: all action data are sorted by their identified type into multiple groups of action detail data, each focused on one specific kind of action. This classification supports the subsequent action-parameter analysis and makes it easy to apply the rendering and optimization strategy best suited to each action type; two entirely different kinds of action may differ substantially in their details and in the rendering techniques they require. Action-parameter analysis is then performed on each group to obtain the concrete parameters of each action, such as its speed, amplitude, duration, and the relationships between body parts. The action parameter set is a quantitative representation of the action detail data and supplies the key information for the rendering optimization that follows. When analyzing a jump, for example, besides the height and duration of the jump, the knee-bend angle and the landing posture must also be analyzed; together these parameters determine how natural and realistic the action looks. Finally, based on the action parameters of each group, model-rendering optimization is applied to the configured digital human model to obtain the target digital human model. The parameter sets are applied during rendering, and advanced rendering techniques and algorithms adjust the model's dynamic performance so that every action is not only visually accurate but also faithfully reflects the target person's movement characteristics in dynamic presentation. This includes adjusting the model's skeletal animation, muscle deformation, and facial expressions to match the specific requirements of each action.
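The grouping and parameter-analysis steps above can be illustrated with a small sketch; the sample tuples, field names, and the duration/amplitude parameters are hypothetical stand-ins for the action detail data and parameter sets described in the method.

```python
from collections import defaultdict

# Hypothetical motion samples: (action_type, time_seconds, joint_height_m)
samples = [
    ("walk", 0.0, 0.00), ("walk", 0.5, 0.02), ("walk", 1.0, 0.01),
    ("jump", 1.0, 0.00), ("jump", 1.3, 0.45), ("jump", 1.6, 0.00),
]

# Step (2): group the detail data by identified action type
groups = defaultdict(list)
for action, t, h in samples:
    groups[action].append((t, h))

# Step (3): derive a parameter set per group (duration, amplitude)
def action_params(track):
    ts = [t for t, _ in track]
    hs = [h for _, h in track]
    return {"duration": ts[-1] - ts[0], "amplitude": max(hs) - min(hs)}

params = {a: action_params(tr) for a, tr in groups.items()}
print(round(params["jump"]["amplitude"], 2))  # 0.45
```

A real parameter set would also carry speed, inter-joint relationships, and so on; the point is that each group yields its own quantitative description that the renderer can consume.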

Having described the method for rendering a metaverse digital human in the embodiments of the present application, a rendering device for a metaverse digital human in the embodiments of the present application is described below. Referring to FIG. 2, one embodiment of the rendering device includes:

The scanning module 201 is configured to perform three-dimensional data scanning of the target person with a preset high-resolution three-dimensional scanning device to obtain point cloud data;

The construction module 202 is configured to construct a digital human of the target person in the metaverse according to the point cloud data to obtain an initial digital human model;

The configuration module 203 is configured to configure texture attributes of the initial digital human to obtain a digital human model to be processed;

The optimization module 204 is configured to collect motion data of the target person and perform model-rendering optimization on the digital human to be processed according to the motion data to obtain a target digital human model.

Through the cooperation of the above components, the following effects are achieved. Scanning the target person with the preset high-resolution 3D scanning device captures the person's fine geometric features, ensuring a highly authentic and accurate digital human model and providing a solid foundation for building high-quality metaverse digital humans. Downsampling the initial point cloud, building a graph structure, extracting structural data, identifying and removing noise points, and aligning the data effectively improve point cloud quality; this raises processing efficiency while preserving the clarity and accuracy of the final model's geometric detail. An adaptive mesh reconstruction algorithm combined with Poisson surface reconstruction, detail enhancement, and multi-view fusion optimization adjusts the mesh automatically to the point cloud distribution, fills hole defects, and enriches model detail, greatly improving the model's visual realism and appeal. Texture image acquisition, detail extraction, geometric feature analysis, and a texture mapping algorithm configure the model's texture attributes precisely; in particular, combining high- and low-frequency detail data with model curvature analysis guarantees that textures fit naturally and look visually consistent. Collecting and analyzing the target person's motion data, through time-series matching, key-action extraction, action-detail scanning, and coherence analysis, optimizes the naturalness and smoothness of action details, so that the digital human's movements in the metaverse appear both realistic and natural. Finally, constructing the target lighting model, refining illumination via ray tracing, and combining shadow-contour detection with lighting-parameter matching further strengthen the model's sense of volume and presence; model-rendering optimization then ensures that the digital human looks convincingly real and attractive under a variety of lighting conditions.
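As one concrete illustration of the point cloud downsampling mentioned above, a voxel-grid reduction can be sketched as follows. This is a common downsampling scheme chosen for illustration; the patent does not specify which scheme is used, and the voxel size and sample points below are arbitrary.

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Reduce a point cloud by averaging all points that fall into
    the same voxel cell, keeping one representative point per cell."""
    keys = np.floor(points / voxel).astype(np.int64)
    # Map each point to the index of its unique voxel
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    n_cells = inv.max() + 1
    sums = np.zeros((n_cells, 3))
    counts = np.zeros(n_cells)
    np.add.at(sums, inv, points)   # accumulate per-voxel coordinate sums
    np.add.at(counts, inv, 1)      # accumulate per-voxel point counts
    return sums / counts[:, None]  # per-voxel centroids

# Two nearby points collapse into one cell; the distant point keeps its own
pts = np.array([[0.0, 0.0, 0.0], [0.01, 0.0, 0.0], [1.0, 1.0, 1.0]])
down = voxel_downsample(pts)
print(down.shape)  # (2, 3)
```

Replacing each cell's points by their centroid trades raw density for speed while keeping the surface geometry that the later mesh reconstruction steps rely on.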

The present application also provides a rendering device for a metaverse digital human. The device includes a memory and a processor; the memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the method for rendering a metaverse digital human described in the above embodiments.

The present application also provides a computer-readable storage medium, which may be a non-volatile or a volatile computer-readable storage medium. The storage medium stores instructions which, when run on a computer, cause the computer to perform the steps of the method for rendering a metaverse digital human.

Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the system and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.

If the integrated units are implemented as software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above embodiments are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in those embodiments may still be modified, or some of their technical features replaced by equivalents, without such modifications or replacements causing the essence of the corresponding solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

Translated from Chinese
1. A method for rendering a metaverse digital human, comprising:
performing three-dimensional data scanning of a target person with a preset high-resolution three-dimensional scanning device to obtain point cloud data;
constructing a digital human of the target person in the metaverse according to the point cloud data to obtain an initial digital human model;
configuring texture attributes of the initial digital human to obtain a digital human model to be processed;
collecting motion data of the target person, and performing model-rendering optimization on the digital human to be processed according to the motion data to obtain a target digital human model.

2. The method according to claim 1, wherein performing the three-dimensional data scanning of the target person with the preset high-resolution three-dimensional scanning device to obtain the point cloud data comprises:
performing three-dimensional data scanning of the target person with the high-resolution three-dimensional scanning device to obtain initial point cloud data;
downsampling the initial point cloud data to obtain sampled point cloud data;
constructing a graph structure from the sampled point cloud data, and extracting structural data of the graph structure;
inputting the structural data into a preset graph neural network for noise-point recognition to obtain noise-point data;
denoising the sampled point cloud data based on the noise-point data to obtain denoised point cloud data;
performing data alignment on the denoised point cloud data to obtain the point cloud data.

3. The method according to claim 1, wherein constructing the digital human of the target person in the metaverse according to the point cloud data to obtain the initial digital human model comprises:
splitting the point cloud data into point cloud distribution areas to obtain a plurality of point cloud distribution areas;
performing point cloud distribution density analysis on each of the point cloud distribution areas to obtain the point cloud distribution density of each area;
inputting the point cloud distribution density of each area into a preset adaptive mesh reconstruction algorithm for mesh attribute matching to obtain mesh attribute data;
based on the mesh attribute data, constructing a digital human model of the target person in the metaverse by a preset Poisson surface reconstruction algorithm to obtain a first digital human model;
filling hole defects in the first digital human model to obtain a second digital human model;
performing detail enhancement on the second digital human model to obtain a third digital human model;
performing multi-view fusion optimization on the third digital human model to obtain the initial digital human model.

4. The method according to claim 1, wherein configuring the texture attributes of the initial digital human to obtain the digital human model to be processed comprises:
collecting texture images of the initial digital human to obtain model texture image data;
extracting high-frequency details from the model texture image data to obtain high-frequency detail data;
extracting low-frequency details from the model texture image data to obtain low-frequency detail data;
performing geometric feature analysis on the initial digital human to obtain a geometric feature set of the initial digital human;
based on the high-frequency detail data and the low-frequency detail data, performing model curvature analysis on the initial digital human to obtain model curvature data;
inputting the geometric feature set and the model curvature data into a preset texture mapping algorithm for texture attribute mapping to obtain target texture attributes;
configuring the texture attributes of the initial digital human based on the target texture attributes to obtain the digital human model to be processed.

5. The method according to claim 1, wherein collecting the motion data of the target person and performing model-rendering optimization on the digital human to be processed according to the motion data to obtain the target digital human model comprises:
collecting the motion data of the target person, and performing time-series data matching on the motion data to obtain a time-series data set;
extracting key actions from the motion data based on the time-series data set to obtain key action data;
scanning the key action data for action details to obtain an action detail data set;
performing detail-coherence analysis on the action detail data set to obtain a detail-coherence evaluation value;
optimizing the details of the action detail data set based on the detail-coherence evaluation value to obtain an optimized detail data set;
performing model-rendering optimization on the digital human to be processed based on the optimized detail data set to obtain the target digital human model.

6. The method according to claim 5, wherein performing model-rendering optimization on the digital human to be processed based on the optimized detail data set to obtain the target digital human model comprises:
constructing a lighting model for the digital human to be processed to obtain a target lighting model;
analyzing the target lighting model for under-lit areas with a ray-tracing algorithm to obtain light-deficit areas;
performing illumination enhancement on the light-deficit areas to obtain an enhanced digital human model;
performing shadow edge detection on the enhanced digital human model to obtain shadow contours;
matching shadow lighting parameters to the shadow contours according to the target lighting model to obtain a shadow lighting parameter set;
configuring the parameters of the enhanced digital human model based on the shadow lighting parameter set to obtain a configured digital human model;
performing model-rendering optimization on the configured digital human model based on the optimized detail data set to obtain the target digital human model.

7. The method according to claim 6, wherein performing model-rendering optimization on the configured digital human model based on the optimized detail data set to obtain the target digital human model comprises:
performing action-type analysis on the optimized detail data set to obtain an action type set;
grouping the optimized detail data set based on the action type set to obtain multiple groups of action detail data;
performing action-parameter analysis on each group of action detail data to obtain an action parameter set for each group;
performing model-rendering optimization on the configured digital human model based on the action parameters of each group of action detail data to obtain the target digital human model.

8. A rendering device for a metaverse digital human, comprising:
a scanning module, configured to perform three-dimensional data scanning of a target person with a preset high-resolution three-dimensional scanning device to obtain point cloud data;
a construction module, configured to construct a digital human of the target person in the metaverse according to the point cloud data to obtain an initial digital human model;
a configuration module, configured to configure texture attributes of the initial digital human to obtain a digital human model to be processed;
an optimization module, configured to collect motion data of the target person and perform model-rendering optimization on the digital human to be processed according to the motion data to obtain a target digital human model.

9. A rendering apparatus for a metaverse digital human, comprising a memory and at least one processor, the memory storing instructions;
the at least one processor invokes the instructions in the memory to cause the rendering apparatus to perform the method for rendering a metaverse digital human according to any one of claims 1-7.

10. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the method for rendering a metaverse digital human according to any one of claims 1-7.
CN202410450527.8A | Priority date: 2024-04-15 | Filing date: 2024-04-15 | Metaverse digital human rendering method, device, equipment and storage medium | Status: Active | Granted as CN118196268B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202410450527.8A | 2024-04-15 | 2024-04-15 | Metaverse digital human rendering method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202410450527.8A (granted as CN118196268B) | 2024-04-15 | 2024-04-15 | Metaverse digital human rendering method, device, equipment and storage medium

Publications (2)

Publication Number | Publication Date
CN118196268A (en) | 2024-06-14
CN118196268B (en) | 2025-01-24

Family

ID=91392877

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202410450527.8A (Active) | Metaverse digital human rendering method, device, equipment and storage medium | 2024-04-15 | 2024-04-15

Country Status (1)

Country | Link
CN | CN118196268B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114998890A* | 2022-05-27 | 2022-09-02 | 长春大学 | A 3D Point Cloud Object Detection Algorithm Based on Graph Neural Network
CN116385619A* | 2023-05-26 | 2023-07-04 | 腾讯科技(深圳)有限公司 | Object model rendering method, device, computer equipment and storage medium
CN117391122A* | 2023-10-25 | 2024-01-12 | 山西智慧科技有限公司 | 3D digital human-assisted chat method established in the metaverse
CN117539349A* | 2023-11-09 | 2024-02-09 | 九耀天枢(北京)科技有限公司 | Metaverse experience interaction system and method based on blockchain technology
CN117745915A* | 2024-02-07 | 2024-03-22 | 西交利物浦大学 | A model rendering method, device, equipment and storage medium
CN117808945A* | 2024-03-01 | 2024-04-02 | 北京烽火万家科技有限公司 | Digital person generation system based on large-scale pre-training language model


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN118918254A* | 2024-07-17 | 2024-11-08 | 广西电网有限责任公司百色供电局 | GIS equipment operation fault analysis method and system based on three-dimensional modeling
CN118918254B* | 2024-07-17 | 2025-06-03 | 广西电网有限责任公司百色供电局 | A GIS equipment operation fault analysis method and system based on three-dimensional modeling
CN118898673A* | 2024-07-19 | 2024-11-05 | 如你所视(北京)科技有限公司 | Point cloud-based model rendering method, device and computer-readable storage medium
CN119295483A* | 2024-12-11 | 2025-01-10 | 杭州欣禾圣世科技有限公司 | A three-dimensional clothing segmentation method and device

Also Published As

Publication number | Publication date
CN118196268B (en) | 2025-01-24

Similar Documents

Publication | Title
US10846828B2 | De-noising images using machine learning
CN118196268B | Metaverse digital human rendering method, device, equipment and storage medium
WO2009100020A2 | Facial performance synthesis using deformation driven polynomial displacement maps
CN114730480B | Machine learning based on volume capture and mesh tracking
CN118071953B | Three-dimensional geographic information model rendering method and system
CN112102480A | Image data processing method, apparatus, device and medium
CN119129019B | Design scheme confirmation method and system based on 3D model
CN118587345B | Real-time character modeling method based on light reflection and global illumination
CN112002019B | Method for simulating character shadow based on MR mixed reality
CN119131252A | Three-dimensional digital auxiliary system based on ceramic painting creation
CN116883550B | Three-dimensional virtual live-action animation display method
CN101510317A | Method and apparatus for generating three-dimensional cartoon human face
CN117788670A | Digital artistic creation auxiliary system
CN114913281A | A method and apparatus for model processing
Sun et al. | Single-view procedural braided hair modeling through braid unit identification
Pauls | Creating 2.5D visualizations of 2D artworks using Deep Learning techniques
CN117292067B | Virtual 3D model method and system based on scanning real object acquisition
Zeng et al. | 3D plants reconstruction based on point cloud
CN119006709B | Sparse view face reconstruction method and device based on three-dimensional Gaussian
CN119363956B | A method and device for generating spatial visual interactive medium based on spatial calculation
Xiao et al. | Optimization of 3D Animation Design Based on Support Vector Machine Algorithm
CN118537485A | Mapping method of head model
Messelt et al. | Enhancing Deep Learning-Based 3D Face Reconstructions with Consumer-Grade Depth Data
Kaur et al. | Bringing Faces to Life: A Survey on Realistic Facial Expressions in 3D Virtual Characters
An et al. | Automatic 2.5D cartoon modelling

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
