








技术领域Technical Field
本公开涉及三维动画制作技术领域,尤其涉及一种动画生成方法、装置、电子设备、存储介质及程序产品。The present disclosure relates to the technical field of three-dimensional animation production, and in particular, to an animation generation method, device, electronic device, storage medium and program product.
背景技术Background
随着三维动画制作技术的发展,动画角色的动作内容越来越复杂,动画角色通常需要与道具进行互动。进行动画制作时,动画场景中通常会有角色模型及道具模型,如角色模型可以是人物模型,道具模型可以包括杯子、武器等模型,人物模型可以手持杯子、武器等模型进行互动。With the development of 3D animation production technology, the actions of animated characters are becoming increasingly complex, and animated characters usually need to interact with props. During animation production, an animation scene usually contains character models and prop models. For example, a character model may be a human figure model, prop models may include models of cups, weapons and the like, and the human figure model may hold a cup or weapon model to interact with it.
在进行上述包含动画角色与道具互动的动画制作时,通常会涉及骨骼动画制作。在骨骼动画中,模型具有互相连接的“骨骼”组成的骨架结构,可以通过改变骨骼的朝向和位置来为模型生成动画。传统方案中,上述骨骼动画的制作通常是手动设置骨骼动画的动作参数,制作出大量关键帧动画,对制作者的专业技能要求较高,制作成本较高,且效率较低。Producing the above animations involving interaction between animated characters and props usually involves skeletal animation. In skeletal animation, a model has a skeletal structure composed of interconnected "bones", and the model can be animated by changing the orientation and position of the bones. In traditional solutions, such skeletal animation is usually produced by manually setting its action parameters to create a large number of keyframe animations, which demands a high level of professional skill from the producer, incurs high production costs, and is inefficient.
发明内容SUMMARY OF THE INVENTION
本公开提供一种动画生成方法、装置、电子设备、存储介质及程序产品,以至少解决传统方案中动画制作要求较高,且效率较低的问题。本公开的技术方案如下:The present disclosure provides an animation generation method, device, electronic device, storage medium and program product, so as to at least solve the problems of high animation production requirements and low efficiency in the traditional solution. The technical solutions of the present disclosure are as follows:
根据本公开实施例的第一方面,提供一种动画生成方法,包括:According to a first aspect of the embodiments of the present disclosure, an animation generation method is provided, including:
基于采集的用户的动作数据,获得第一动作参数;obtaining a first motion parameter based on the collected motion data of the user;
根据所述第一动作参数,对预先搭建的场景模型中的角色模型进行渲染,获得第一动画;rendering a character model in a pre-built scene model according to the first action parameter, to obtain a first animation;
根据针对所述第一动画中的至少一帧图像中的角色模型的调整操作,确定第二动作参数;determining a second action parameter according to an adjustment operation for the character model in at least one frame of image in the first animation;
确定针对所述场景模型中的道具模型的交互参数;determining interaction parameters for prop models in the scene model;
根据所述第二动作参数,所述第一动作参数以及所述交互参数,对所述角色模型及道具模型进行渲染,获得所述场景模型对应的第二动画。According to the second action parameter, the first action parameter and the interaction parameter, the character model and the prop model are rendered to obtain a second animation corresponding to the scene model.
可选的,所述交互参数包括交互部位、交互道具及交互时刻;Optionally, the interaction parameters include an interaction part, an interaction prop and an interaction moment;
所述根据所述第二动作参数,所述第一动作参数以及所述交互参数,对所述角色模型及道具模型进行渲染,获得所述场景模型对应的第二动画包括:The rendering the character model and the prop model according to the second action parameter, the first action parameter and the interaction parameters to obtain the second animation corresponding to the scene model includes:
根据所述第一动画中的至少一帧图像对应的第二动作参数及所述第一动画中不包括所述至少一帧图像的其余帧图像对应的第一动作参数,对所述角色模型进行渲染,并按照所述交互时刻,将所述交互道具的道具模型设置为所述角色模型的交互部位的节点,所述交互道具的道具模型跟随所述角色模型移动,获得所述场景模型对应的第二动画。rendering the character model according to the second action parameter corresponding to the at least one frame of image in the first animation and the first action parameter corresponding to the remaining frames of the first animation other than the at least one frame of image, and setting, at the interaction moment, the prop model of the interaction prop as a node of the interaction part of the character model so that the prop model of the interaction prop moves with the character model, to obtain the second animation corresponding to the scene model.
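The node-attachment step above can be sketched with a minimal, engine-agnostic scene graph. The `Node` class and all names below are hypothetical illustrations rather than the API of any specific renderer; the point is that once the prop model is parented under the node of the character model's interaction part, it inherits that part's transform and follows the character model's movement.

```python
# Minimal scene-graph sketch (hypothetical, engine-agnostic). Rotation is
# omitted for brevity; only translations accumulate up the hierarchy.

class Node:
    def __init__(self, name, local_pos=(0.0, 0.0, 0.0)):
        self.name = name
        self.local_pos = list(local_pos)  # position relative to the parent node
        self.parent = None
        self.children = []

    def attach(self, child):
        # Set this node as the child's parent, e.g. making a prop model a
        # node of the character model's interaction part.
        child.parent = self
        self.children.append(child)

    def world_pos(self):
        # Accumulate translations from the root down to this node.
        if self.parent is None:
            return tuple(self.local_pos)
        px, py, pz = self.parent.world_pos()
        return (px + self.local_pos[0],
                py + self.local_pos[1],
                pz + self.local_pos[2])

# At the interaction moment, parent the cup model under the right-hand node.
root = Node("character_root", (1.0, 0.0, 0.0))
hand = Node("right_hand", (0.0, 1.2, 0.4))
root.attach(hand)
cup = Node("cup", (0.0, 0.05, 0.0))  # small offset inside the palm
hand.attach(cup)

# Moving the character now moves the cup with it.
root.local_pos[0] += 0.5
print(tuple(round(c, 6) for c in cup.world_pos()))  # (1.5, 1.25, 0.4)
```

In a real engine this corresponds to re-parenting the prop's transform under the bone of the interaction part at the interaction moment, so the prop needs no per-frame keyframes of its own.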
可选的,所述交互参数还包括过渡时长;Optionally, the interaction parameters further include a transition duration;
所述根据所述第一动画中的至少一帧图像对应的第二动作参数及所述第一动画中不包括所述至少一帧图像的其余帧图像对应的第一动作参数,对所述角色模型进行渲染,并按照所述交互时刻,将所述交互道具的道具模型设置为所述角色模型的交互部位的节点,获得所述场景模型对应的第二动画包括:The rendering the character model according to the second action parameter corresponding to the at least one frame of image in the first animation and the first action parameter corresponding to the remaining frames of the first animation other than the at least one frame of image, setting, at the interaction moment, the prop model of the interaction prop as a node of the interaction part of the character model, and obtaining the second animation corresponding to the scene model includes:
基于所述第一动画中的至少一帧图像对应的第二动作参数,确定所述第一动画中,所述交互时刻之前和之后过渡时长内的每一帧图像对应的第二动作参数;determining, based on the second action parameter corresponding to the at least one frame of image in the first animation, the second action parameter corresponding to each frame of image within the transition duration before and after the interaction moment in the first animation;
根据所述第一动画中的至少一帧图像及交互时刻之前和之后过渡时长内的每一帧图像对应的第二动作参数,以及所述第一动画中不包括所述至少一帧图像及交互时刻之前和之后过渡时长内每一帧图像的其余帧图像对应的第一动作参数,对所述角色模型进行渲染,并按照所述交互时刻,将所述交互道具的道具模型设置为所述角色模型的交互部位的节点,获得所述场景模型对应的第二动画。rendering the character model according to the second action parameters corresponding to the at least one frame of image in the first animation and to each frame of image within the transition duration before and after the interaction moment, and according to the first action parameters corresponding to the remaining frames of the first animation other than the at least one frame of image and the frames within the transition duration before and after the interaction moment, and setting, at the interaction moment, the prop model of the interaction prop as a node of the interaction part of the character model, to obtain the second animation corresponding to the scene model.
可选的,所述交互参数包括交互时刻及过渡时长;Optionally, the interaction parameters include an interaction moment and a transition duration;
所述根据所述第二动作参数,所述第一动作参数以及所述交互参数,对所述角色模型及道具模型进行渲染,获得所述场景模型对应的第二动画包括:The rendering the character model and the prop model according to the second action parameter, the first action parameter and the interaction parameters to obtain the second animation corresponding to the scene model includes:
基于所述第一动画中的至少一帧图像对应的第二动作参数,确定所述第一动画中,所述交互时刻之前和之后过渡时长内的每一帧图像对应的第二动作参数;determining, based on the second action parameter corresponding to the at least one frame of image in the first animation, the second action parameter corresponding to each frame of image within the transition duration before and after the interaction moment in the first animation;
根据所述第一动画中的至少一帧图像及交互时刻之前和之后过渡时长内的每一帧图像对应的第二动作参数,及所述第一动画中不包括所述至少一帧图像及交互时刻之前和之后过渡时长内每一帧图像的其余帧图像对应的第一动作参数,以及所述交互参数,对所述角色模型及道具模型进行渲染,获得所述场景模型对应的第二动画。rendering the character model and the prop model according to the second action parameters corresponding to the at least one frame of image in the first animation and to each frame of image within the transition duration before and after the interaction moment, the first action parameters corresponding to the remaining frames of the first animation other than the at least one frame of image and the frames within the transition duration before and after the interaction moment, and the interaction parameters, to obtain the second animation corresponding to the scene model.
可选的,所述基于所述第一动画中的至少一帧图像对应的第二动作参数,确定所述第一动画中,所述交互时刻之前和之后过渡时长内的每一帧图像对应的第二动作参数包括:Optionally, the determining, based on the second action parameter corresponding to the at least one frame of image in the first animation, the second action parameter corresponding to each frame of image within the transition duration before and after the interaction moment in the first animation includes:
确定所述第一动画中,与所述交互时刻对应的图像对应的所述第一动作参数与所述第二动作参数的参数差;determining the parameter difference between the first action parameter and the second action parameter corresponding to the image corresponding to the interaction moment in the first animation;
按照所述过渡时长内每一帧图像分别对应的时刻与所述交互时刻的时间差,确定每一帧图像分别对应的参数差调整比例,并基于所述参数差调整比例及所述第一动作参数与所述第二动作参数的参数差,确定所述第一动画中,所述交互时刻之前和之后过渡时长内分别与每一帧图像对应的参数差,所述时间差大的图像对应的参数差调整比例小于时间差小的图像对应的参数差调整比例;determining, according to the time difference between the moment corresponding to each frame of image within the transition duration and the interaction moment, a parameter-difference adjustment ratio corresponding to each frame of image, and determining, based on the adjustment ratios and the parameter difference between the first action parameter and the second action parameter, the parameter difference corresponding to each frame of image within the transition duration before and after the interaction moment in the first animation, where the adjustment ratio corresponding to an image with a larger time difference is smaller than that corresponding to an image with a smaller time difference;
基于所述第一动画中,所述交互时刻之前和之后过渡时长内分别与每一帧图像对应的第一动作参数及参数差,确定分别与每一帧图像对应的第二动作参数。determining, based on the first action parameter and the parameter difference corresponding to each frame of image within the transition duration before and after the interaction moment in the first animation, the second action parameter corresponding to each frame of image.
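One possible reading of the ramping described above is a linear blend; the linear ratio is only an assumption for illustration, since the text merely requires that a frame with a larger time difference from the interaction moment receives a smaller adjustment ratio. In this sketch the full parameter difference is applied at the interaction frame and tapers to zero at the edges of the transition window.

```python
def transition_params(first, t_interact, second_at_interact, transition, frame_times):
    """Blend action parameters inside the transition window.

    first: dict mapping frame time -> first action parameter (a single joint
    Euler angle here, for brevity). Returns a dict mapping frame time -> the
    second action parameter for frames within the transition window.
    """
    full_diff = second_at_interact - first[t_interact]  # parameter difference
    out = {}
    for t in frame_times:
        dt = abs(t - t_interact)
        if dt <= transition:
            ratio = 1.0 - dt / transition  # larger time gap -> smaller ratio
            out[t] = first[t] + ratio * full_diff
    return out

# Example: 30 fps clip, interaction at t = 1.0 s, 0.5 s transition, and an
# elbow angle adjusted from 10 to 50 degrees at the interaction frame.
frames = [round(i / 30, 3) for i in range(0, 46)]
first = {t: 10.0 for t in frames}  # unadjusted capture: constant 10 degrees
blend = transition_params(first, 1.0, 50.0, 0.5, frames)
print(blend[1.0], blend[0.5])  # 50.0 10.0 (full difference at the
                               # interaction frame, none at the window edge)
```

Frames outside the transition window keep their first action parameters unchanged, which is exactly the split between adjusted and remaining frames described in the claims above.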
可选的,所述方法还包括:Optionally, the method further includes:
展示所述第一动画;所述第一动画用于指示所述用户进行动作调整。The first animation is displayed; the first animation is used to instruct the user to adjust the action.
根据本公开实施例的第二方面,提供一种动画生成装置,包括:According to a second aspect of the embodiments of the present disclosure, an animation generating apparatus is provided, including:
第一渲染模块,被配置为根据所述第一动作参数,对预先搭建的场景模型中的角色模型进行渲染,获得第一动画;a first rendering module, configured to render the character model in the pre-built scene model according to the first action parameter to obtain a first animation;
第一确定模块,被配置为根据针对所述第一动画中的至少一帧图像中的角色模型的调整操作,确定第二动作参数;a first determination module, configured to determine a second action parameter according to an adjustment operation for the character model in at least one frame of image in the first animation;
第二确定模块,被配置为确定针对所述场景模型中的道具模型的交互参数;a second determining module, configured to determine interaction parameters for the prop model in the scene model;
第二渲染模块,被配置为根据所述第二动作参数,所述第一动作参数以及所述交互参数,对所述角色模型及道具模型进行渲染,获得所述场景模型对应的第二动画。The second rendering module is configured to render the character model and the prop model according to the second action parameter, the first action parameter and the interaction parameter to obtain a second animation corresponding to the scene model.
可选的,所述交互参数包括交互部位、交互道具及交互时刻;所述第二渲染模块,具体被配置为根据所述第一动画中的至少一帧图像对应的第二动作参数及所述第一动画中不包括所述至少一帧图像的其余帧图像对应的第一动作参数,对所述角色模型进行渲染,并按照所述交互时刻,将所述交互道具的道具模型设置为所述角色模型的交互部位的节点,获得所述场景模型对应的第二动画。Optionally, the interaction parameters include an interaction part, an interaction prop and an interaction moment; the second rendering module is specifically configured to render the character model according to the second action parameter corresponding to the at least one frame of image in the first animation and the first action parameter corresponding to the remaining frames of the first animation other than the at least one frame of image, and to set, at the interaction moment, the prop model of the interaction prop as a node of the interaction part of the character model, to obtain the second animation corresponding to the scene model.
可选的,所述交互参数还包括过渡时长;所述第二渲染模块,具体被配置为基于所述第一动画中的至少一帧图像对应的第二动作参数,确定所述第一动画中,所述交互时刻之前和之后过渡时长内的每一帧图像对应的第二动作参数;根据所述第一动画中的至少一帧图像及交互时刻之前和之后过渡时长内的每一帧图像对应的第二动作参数,以及所述第一动画中不包括所述至少一帧图像及交互时刻之前和之后过渡时长内每一帧图像的其余帧图像对应的第一动作参数,对所述角色模型进行渲染,并按照所述交互时刻,将所述交互道具的道具模型设置为所述角色模型的交互部位的节点,获得所述场景模型对应的第二动画。Optionally, the interaction parameters further include a transition duration; the second rendering module is specifically configured to: determine, based on the second action parameter corresponding to the at least one frame of image in the first animation, the second action parameter corresponding to each frame of image within the transition duration before and after the interaction moment in the first animation; render the character model according to the second action parameters corresponding to the at least one frame of image in the first animation and to each frame of image within the transition duration before and after the interaction moment, and the first action parameters corresponding to the remaining frames other than those frames; and set, at the interaction moment, the prop model of the interaction prop as a node of the interaction part of the character model, to obtain the second animation corresponding to the scene model.
可选的,所述交互参数包括交互时刻及过渡时长;所述第二渲染模块,具体被配置为基于所述第一动画中的至少一帧图像对应的第二动作参数,确定所述第一动画中,所述交互时刻之前和之后过渡时长内的每一帧图像对应的第二动作参数;根据所述第一动画中的至少一帧图像及交互时刻之前和之后过渡时长内的每一帧图像对应的第二动作参数,及所述第一动画中不包括所述至少一帧图像及交互时刻之前和之后过渡时长内每一帧图像的其余帧图像对应的第一动作参数,以及所述交互参数,对所述角色模型及道具模型进行渲染,获得所述场景模型对应的第二动画。Optionally, the interaction parameters include an interaction moment and a transition duration; the second rendering module is specifically configured to: determine, based on the second action parameter corresponding to the at least one frame of image in the first animation, the second action parameter corresponding to each frame of image within the transition duration before and after the interaction moment in the first animation; and render the character model and the prop model according to the second action parameters corresponding to the at least one frame of image in the first animation and to each frame of image within the transition duration before and after the interaction moment, the first action parameters corresponding to the remaining frames other than those frames, and the interaction parameters, to obtain the second animation corresponding to the scene model.
可选的,所述第二渲染模块,具体被配置为确定所述第一动画中,与所述交互时刻对应的图像对应的所述第一动作参数与所述第二动作参数的参数差;按照所述过渡时长内每一帧图像分别对应的时刻与所述交互时刻的时间差,确定每一帧图像分别对应的参数差调整比例,并基于所述参数差调整比例及所述第一动作参数与所述第二动作参数的参数差,确定所述第一动画中,所述交互时刻之前和之后过渡时长内分别与每一帧图像对应的参数差,所述时间差大的图像对应的参数差调整比例小于时间差小的图像对应的参数差调整比例;基于所述第一动画中,所述交互时刻之前和之后过渡时长内分别与每一帧图像对应的第一动作参数及参数差,确定分别与每一帧图像对应的第二动作参数;根据所述第一动画中的至少一帧图像及交互时刻之前和之后过渡时长内的每一帧图像对应的第二动作参数,及所述第一动画中不包括所述至少一帧图像及交互时刻之前和之后过渡时长内每一帧图像的其余帧图像对应的第一动作参数,以及所述交互参数,对所述角色模型及道具模型进行渲染,获得所述场景模型对应的第二动画。Optionally, the second rendering module is specifically configured to: determine the parameter difference between the first action parameter and the second action parameter corresponding to the image corresponding to the interaction moment in the first animation; determine, according to the time difference between the moment corresponding to each frame of image within the transition duration and the interaction moment, a parameter-difference adjustment ratio corresponding to each frame of image, and determine, based on the adjustment ratios and the parameter difference between the first action parameter and the second action parameter, the parameter difference corresponding to each frame of image within the transition duration before and after the interaction moment in the first animation, where the adjustment ratio corresponding to an image with a larger time difference is smaller than that corresponding to an image with a smaller time difference; determine, based on the first action parameter and the parameter difference corresponding to each frame of image within the transition duration before and after the interaction moment in the first animation, the second action parameter corresponding to each frame of image; and render the character model and the prop model according to the second action parameters corresponding to the at least one frame of image in the first animation and to each frame of image within the transition duration before and after the interaction moment, the first action parameters corresponding to the remaining frames other than those frames, and the interaction parameters, to obtain the second animation corresponding to the scene model.
可选的,所述装置还包括:Optionally, the device further includes:
展示模块,被配置为展示所述第一动画;所述第一动画用于指示所述用户进行动作调整。The presentation module is configured to present the first animation; the first animation is used to instruct the user to adjust the action.
根据本公开实施例的第三方面,提供一种电子设备,包括:According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, comprising:
处理器;processor;
用于存储所述处理器可执行指令的存储器;a memory for storing the processor-executable instructions;
其中,所述处理器被配置为执行所述指令,以实现如第一方面所述的动画生成方法。Wherein, the processor is configured to execute the instructions to implement the animation generation method according to the first aspect.
根据本公开实施例的第四方面,提供一种计算机可读存储介质,当所述计算机可读存储介质中的指令由电子设备的处理器执行时,使得所述电子设备能够执行如第一方面所述的动画生成方法。According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided. When instructions in the computer-readable storage medium are executed by a processor of an electronic device, the electronic device is enabled to execute the animation generation method according to the first aspect.
根据本公开实施例的第五方面,提供一种计算机程序产品,包括计算机指令,所述计算机指令被处理器执行时实现如第一方面所述的动画生成方法。According to a fifth aspect of the embodiments of the present disclosure, there is provided a computer program product including computer instructions, which when executed by a processor implement the animation generation method according to the first aspect.
本公开的实施例提供的技术方案至少带来以下有益效果:The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
由于第一动作参数是基于采集的用户的动作数据获得的,无需手动设置,利用该第一动作参数进行渲染获得第一动画,解决了传统方案中需要手动制作大量关键帧动画的问题,降低了制作成本,且提高了制作效率。并且,根据用户针对第一动画中的至少一帧图像中的角色模型的调整操作,确定第二动作参数,并结合上述第一动作参数与第二动作参数及交互参数进行渲染获得第二动画,可以确保角色模型与道具模型的交互效果。Since the first action parameter is obtained based on the collected action data of the user, it does not need to be set manually, and rendering with the first action parameter yields the first animation. This solves the problem in traditional solutions that a large number of keyframe animations must be produced manually, reducing production costs and improving production efficiency. In addition, the second action parameter is determined according to the user's adjustment operation on the character model in at least one frame of image of the first animation, and the second animation is obtained by rendering with the first action parameter, the second action parameter and the interaction parameters, which ensures the interaction effect between the character model and the prop model.
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本公开。It is to be understood that the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the present disclosure.
附图说明Brief Description of the Drawings
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理,并不构成对本公开的不当限定。The accompanying drawings, which are incorporated into and constitute a part of this specification, illustrate embodiments consistent with the present disclosure, and together with the description, serve to explain the principles of the present disclosure and do not unduly limit the present disclosure.
图1是根据一示例性实施例示出的一种动画生成方法的流程图。Fig. 1 is a flow chart of an animation generation method according to an exemplary embodiment.
图2-1是根据一示例性实施例示出的一种场景模型的示意图。Fig. 2-1 is a schematic diagram of a scene model according to an exemplary embodiment.
图2-2是根据一示例性实施例示出的一种动画中多帧图像的示意图。FIG. 2-2 is a schematic diagram illustrating multiple frames of images in an animation according to an exemplary embodiment.
图2-3是根据一示例性实施例示出的一帧图像的示意图。2-3 are schematic diagrams illustrating a frame of images according to an exemplary embodiment.
图2-4是根据另一示例性实施例示出的一种动画中多帧图像的示意图。2-4 are schematic diagrams of multiple frames of images in an animation according to another exemplary embodiment.
图3是根据一示例性实施例示出的一种动作参数调整方法的流程图。Fig. 3 is a flow chart of a method for adjusting an action parameter according to an exemplary embodiment.
图4是根据另一示例性实施例示出的一种动画生成方法的流程图。Fig. 4 is a flowchart of an animation generation method according to another exemplary embodiment.
图5是根据一示例性实施例示出的一种动画生成装置的框图。Fig. 5 is a block diagram of an animation generating apparatus according to an exemplary embodiment.
图6是根据一示例性实施例示出的一种电子设备的框图。Fig. 6 is a block diagram of an electronic device according to an exemplary embodiment.
具体实施方式Detailed Description
为了使本领域普通人员更好地理解本公开的技术方案,下面将结合附图,对本公开实施例中的技术方案进行清楚、完整地描述。In order to make those skilled in the art better understand the technical solutions of the present disclosure, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
需要说明的是,本公开的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本公开的实施例能够以除了在这里图示或描述的那些以外的顺序实施。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的装置和方法的例子。It should be noted that the terms "first", "second" and the like in the description and claims of the present disclosure and the above drawings are used to distinguish similar objects, and are not necessarily used to describe a specific sequence or sequence. It is to be understood that the data so used may be interchanged under appropriate circumstances such that the embodiments of the disclosure described herein can be practiced in sequences other than those illustrated or described herein. The implementations described in the illustrative examples below are not intended to represent all implementations consistent with this disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as recited in the appended claims.
本公开的方案适用于三维角色动画制作场景,尤其适用于三维角色与道具交互的动画制作场景。在进行动画制作时,动画场景中通常会有角色模型及道具模型,且角色模型可以与道具模型进行互动,如角色模型可以是人物模型,道具模型可以包括杯子、武器等模型,人物模型可以手持杯子、武器等模型进行互动。The solution of the present disclosure is applicable to three-dimensional character animation production scenarios, and is especially applicable to animation production scenarios in which a three-dimensional character interacts with props. During animation production, an animation scene usually contains character models and prop models, and a character model can interact with the prop models. For example, a character model may be a human figure model, prop models may include models of cups, weapons and the like, and the human figure model may hold a cup or weapon model to interact with it.
在进行上述包含动画角色与动画道具互动的动画制作时,通常会涉及骨骼动画制作。在骨骼动画中,模型具有互相连接的“骨骼”组成的骨架结构,可以通过改变骨骼的朝向和位置来为模型生成动画。传统方案中,上述骨骼动画的制作通常是手动设置骨骼动画的制作参数,制作出大量关键帧动画,对制作者的专业技能要求较高,制作成本较高,且效率较低。Producing the above animations involving interaction between animated characters and animated props usually involves skeletal animation. In skeletal animation, a model has a skeletal structure composed of interconnected "bones", and the model can be animated by changing the orientation and position of the bones. In traditional solutions, such skeletal animation is usually produced by manually setting its production parameters to create a large number of keyframe animations, which demands a high level of professional skill from the producer, incurs high production costs, and is inefficient.
因此,为了解决上述技术问题,发明人提出了本公开的技术方案,提供了一种动画生成方法,包括基于采集的用户的动作数据,获得第一动作参数;根据所述第一动作参数,对预先搭建的场景模型中的角色模型进行渲染,获得第一动画;根据用户针对所述第一动画中的至少一帧图像中的角色模型的调整操作,确定第二动作参数;确定针对所述场景模型中的道具模型的交互参数;根据所述第二动作参数,所述第一动作参数以及所述交互参数,对所述角色模型及道具模型进行渲染,获得所述场景模型对应的第二动画。Therefore, in order to solve the above technical problem, the inventors propose the technical solution of the present disclosure and provide an animation generation method, including: obtaining a first action parameter based on collected action data of a user; rendering a character model in a pre-built scene model according to the first action parameter, to obtain a first animation; determining a second action parameter according to the user's adjustment operation on the character model in at least one frame of image of the first animation; determining interaction parameters for a prop model in the scene model; and rendering the character model and the prop model according to the second action parameter, the first action parameter and the interaction parameters, to obtain a second animation corresponding to the scene model.
本公开的方案中,预先搭建的场景模型中包括角色模型及道具模型,可以基于采集的用户动作数据获得第一动作参数,并利用该第一动作参数对上述角色模型进行渲染,以获得包含角色模型动作的第一动画,以及根据用户针对第一动画中的至少一帧图像中的角色模型的调整操作,确定第二动作参数,之后根据第二动作参数,第一动作参数及针对道具模型的交互参数对上述角色模型及道具模型进行渲染,可以获得场景模型对应的包含角色模型动作及角色模型与道具模型互动的第二动画。上述第一动作参数是基于采集的用户的动作数据获得的,无需手动设置,利用该第一动作参数进行渲染获得第一动画,解决了传统方案中需要手动制作大量关键帧动画的问题,降低了制作成本,且提高了制作效率。并且,根据用户针对第一动画中的至少一帧图像中的角色模型的调整操作,确定第二动作参数,并结合上述第一动作参数与第二动作参数及交互参数进行渲染获得第二动画,可以确保角色模型与道具模型的交互效果。In the solution of the present disclosure, the pre-built scene model includes a character model and a prop model. A first action parameter can be obtained based on collected user action data, and the character model can be rendered with the first action parameter to obtain a first animation containing the character model's actions. A second action parameter is determined according to the user's adjustment operation on the character model in at least one frame of image of the first animation, and the character model and the prop model are then rendered according to the second action parameter, the first action parameter and the interaction parameters for the prop model, to obtain a second animation corresponding to the scene model that contains both the character model's actions and the interaction between the character model and the prop model. Since the first action parameter is obtained based on the collected action data of the user, it does not need to be set manually, and rendering with it yields the first animation, which solves the problem in traditional solutions that a large number of keyframe animations must be produced manually, reduces production costs and improves production efficiency. In addition, determining the second action parameter according to the user's adjustment operation on the character model in at least one frame of image of the first animation, and rendering with the first action parameter, the second action parameter and the interaction parameters to obtain the second animation, ensures the interaction effect between the character model and the prop model.
下面将结合本公开实施例中的附图,对本公开实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本公开一部分实施例,而不是全部的实施例。基于本公开中的实施例,本领域技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本公开保护的范围。The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only a part of the embodiments of the present disclosure, but not all of the embodiments. Based on the embodiments in the present disclosure, all other embodiments obtained by those skilled in the art without creative efforts shall fall within the protection scope of the present disclosure.
图1是根据一示例性实施例示出的一种动画生成方法的流程图,如图1所示,可以包括以下步骤。Fig. 1 is a flowchart of an animation generation method according to an exemplary embodiment. As shown in Fig. 1 , the following steps may be included.
在步骤S11中,基于采集的用户的动作数据,获得第一动作参数。In step S11, a first motion parameter is obtained based on the collected motion data of the user.
在动画制作场景中,按照动画制作的具体内容,用户可以执行对应的预设动作。以动画制作的内容是三维角色坐在椅子上拿面前桌子上放置的杯子为例,用户可以执行坐下并向前伸出手臂的动作。In an animation production scenario, the user can perform a corresponding preset action according to the specific content to be animated. Taking as an example animation content in which a three-dimensional character sits on a chair and picks up a cup placed on the table in front of it, the user can perform the action of sitting down and extending an arm forward.
可选的,可以利用视频采集装置对上述用户执行的动作进行拍摄,获得用户的动作数据。其中,该视频采集装置可以包括摄像机、录像机等具有采集功能的设备,可以根据实际应用场景进行设置,本公开对此不作具体限制。Optionally, the above-mentioned actions performed by the user may be photographed by using a video capture device to obtain action data of the user. Wherein, the video collection device may include a camera, a video recorder, or other equipment with a collection function, which may be set according to an actual application scenario, which is not specifically limited in the present disclosure.
为了便于处理,视频采集装置采集获得的用户的动作数据可以实现为视频流的形式。In order to facilitate processing, the user's action data collected and obtained by the video collection device may be implemented in the form of a video stream.
根据上述采集获得的用户的动作数据,可以按照预设算法对其进行处理,可以获得对应的动作参数。其中,该动作参数可以用于对三维角色模型进行渲染,获得包含角色模型动作的动画。该动作参数可以包括角色模型各关节,如手肘关节、手指关节等的旋转欧拉角值等,不作赘述,该预设算法可以包括人体肢体动作与三维角色关节旋转欧拉角的转换算法等,可以根据实际应用场景进行设置,不作具体限制。The user's action data obtained by the above collection can be processed according to a preset algorithm to obtain corresponding action parameters. The action parameters can be used to render the three-dimensional character model to obtain an animation containing the character model's actions, and may include rotation Euler angle values of the joints of the character model, such as elbow joints and finger joints, which will not be detailed here. The preset algorithm may include an algorithm for converting human limb movements into rotation Euler angles of the three-dimensional character's joints, and may be set according to the actual application scenario without specific limitation.
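The preset algorithm itself is not fixed by the text above. As a purely hypothetical illustration of one step such an algorithm might perform, an elbow bend angle can be derived from three captured 3D keypoints (shoulder, elbow, wrist); the keypoint names and function below are assumptions for illustration only.

```python
import math

def elbow_angle(shoulder, elbow, wrist):
    """Angle in degrees between the upper arm and forearm at the elbow."""
    u = tuple(s - e for s, e in zip(shoulder, elbow))  # elbow -> shoulder
    v = tuple(w - e for w, e in zip(wrist, elbow))     # elbow -> wrist
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# A straight arm gives 180 degrees; a right-angle bend gives 90.
print(round(elbow_angle((0, 0, 0), (1, 0, 0), (2, 0, 0))))  # 180
print(round(elbow_angle((0, 0, 0), (1, 0, 0), (1, 1, 0))))  # 90
```

A full conversion would repeat such computations for every joint and decompose them into rotation Euler angles in the character rig's coordinate frames.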
其中,基于采集的用户的动作数据,获得第一动作参数还可以有其它实现方式,将在后续实施例中进行说明,此处不进行赘述。Wherein, there may also be other implementation manners for obtaining the first motion parameter based on the collected motion data of the user, which will be described in subsequent embodiments, and will not be repeated here.
在步骤S12中,根据第一动作参数,对预先搭建的场景模型中的角色模型进行渲染,获得第一动画。In step S12, the character model in the pre-built scene model is rendered according to the first action parameter to obtain a first animation.
本实施例中,可以预先搭建场景模型,并将角色模型及道具模型放置在该场景模型中,以进行渲染。其中,场景模型可以包括房屋模型、桌椅模型等,角色模型可以包括人物模型、动物模型等,道具模型可以包括杯子、武器等模型等,不作具体限制。In this embodiment, a scene model can be built in advance, and the character model and the prop model can be placed in the scene model for rendering. The scene model may include a house model, a table and chair model, etc., the character model may include a character model, an animal model, etc., and the prop model may include models such as cups, weapons, etc., without specific limitations.
可选的,可以利用渲染引擎执行上述渲染操作。其中,渲染引擎可以根据实际应用场景进行设置,如可以包括虚幻引擎(UNREAL ENGINE,简称Unreal),Unity引擎等,不作具体限制。具体的,可以在渲染引擎中预先搭建场景模型,并将角色模型及道具模型放置在场景模型中。Optionally, a rendering engine may be used to perform the above rendering operation. The rendering engine may be set according to actual application scenarios, for example, it may include Unreal Engine (UNREAL ENGINE, Unreal for short), Unity engine, etc., without specific limitation. Specifically, the scene model may be pre-built in the rendering engine, and the character model and the prop model may be placed in the scene model.
为了提高渲染效果,在场景模型中放置上述角色模型及道具模型时,可以按照动画制作的具体内容,将该角色模型及道具模型放置在预设位置。以动画制作的内容是角色坐在椅子上拿起桌子上放置的杯子为例,如图2-1所示,是根据一示例性实施例示出的一种场景模型中放置角色模型及道具模型的示意图。其中,场景模型包括桌子模型M和椅子模型N,角色模型A放置于桌子模型M和椅子模型N之间,杯子模型B放置于桌子模型M上。In order to improve the rendering effect, when placing the above character model and prop models in the scene model, they can be placed at preset positions according to the specific content of the animation. Taking animation content in which a character sits on a chair and picks up a cup placed on a table as an example, Fig. 2-1 is a schematic diagram of placing a character model and prop models in a scene model according to an exemplary embodiment. The scene model includes a table model M and a chair model N; character model A is placed between table model M and chair model N, and cup model B is placed on table model M.
可选的,将上述角色模型及道具模型放置在场景模型中的预设位置后,还可以调整角色模型或道具模型的朝向。如图2-1所示,角色模型A放置于桌子模型M和椅子模型N之间,且角色模型A朝向桌子模型M。具体的,上述角色模型及道具模型的调整可以由用户手动操作实现,也可以基于朝向调整指令进行自动调整,其中,朝向调整指令可以包括调整角度,如将角色模型A逆时针旋转15度等,可以根据实际应用场景设置,不作具体限制。Optionally, after placing the above-mentioned character model and prop model in a preset position in the scene model, the orientation of the character model or the prop model can also be adjusted. As shown in Figure 2-1, the role model A is placed between the table model M and the chair model N, and the role model A faces the table model M. Specifically, the above-mentioned adjustment of the character model and the prop model may be realized by manual operation by the user, or may be automatically adjusted based on the orientation adjustment instruction, wherein the orientation adjustment instruction may include adjusting the angle, such as rotating the character model A by 15 degrees counterclockwise, etc. It can be set according to the actual application scenario without specific restrictions.
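The automatic orientation adjustment described above (e.g. rotating character model A counterclockwise by 15 degrees) can be sketched as a simple yaw update on a placed model's transform. This is a minimal illustration only; the `PlacedModel` class and the degree-based yaw convention are assumptions for the sketch, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class PlacedModel:
    """Hypothetical stand-in for a model placed in the scene."""
    name: str
    position: tuple          # (x, y, z) in scene coordinates
    yaw_deg: float = 0.0     # orientation around the vertical axis

def apply_orientation_instruction(model: PlacedModel, delta_deg: float) -> None:
    """Rotate the model by delta_deg (positive = counterclockwise),
    keeping the stored angle normalized to [0, 360)."""
    model.yaw_deg = (model.yaw_deg + delta_deg) % 360.0

# The example from the text: rotate character model A counterclockwise by 15 degrees.
model_a = PlacedModel("character_A", (1.0, 0.0, 2.0))
apply_orientation_instruction(model_a, 15.0)
```

In a real engine this would be a call such as setting an actor's rotation; the point is only that an orientation-adjustment instruction reduces to a parameterized transform update rather than a manual edit.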
上述场景模型,角色模型及道具模型预先设置完成后,可以根据动作参数,对角色模型进行渲染,获得动画。其中,该动画中可以包括角色模型动作的多帧图像。以动画制作的内容是角色坐在椅子上拿桌子上放置的杯子为例,如图2-2所示,是根据一示例性实施例示出的一种动画中多帧图像的示意图。其中,角色模型A坐在椅子模型上,a帧图像中,角色模型A朝向桌子模型上放置的杯子模型B所在的方向伸出右手,b帧图像中,角色模型A的右手朝向杯子模型B移动,与杯子模型B的距离缩短,c帧图像中,角色模型A的右手移动至杯子模型B的位置,与杯子模型B接触,并握住杯子模型B。After the above scene model, character model and prop model are set up in advance, the character model can be rendered according to the action parameters to obtain an animation. The animation may include multiple frames of images of the character model's actions. Taking as an example animation content in which a character sits on a chair and picks up a cup placed on a table, Figure 2-2 is a schematic diagram of multiple frames of images in an animation according to an exemplary embodiment. The character model A sits on the chair model. In frame a, character model A stretches out its right hand toward the cup model B placed on the table model; in frame b, character model A's right hand moves toward cup model B, shortening the distance to it; in frame c, character model A's right hand reaches the position of cup model B, contacts it, and holds it.
在步骤S13中,根据用户针对第一动画中的至少一帧图像中的角色模型的调整操作,确定第二动作参数。In step S13, the second action parameter is determined according to the user's adjustment operation on the character model in at least one frame of image in the first animation.
实际应用中,上述渲染获得的动画所包括的多帧图像中,可能会存在角色模型与道具模型接触时,两个模型之间存在间隙,或者重叠等情况,贴合程度较差。如图2-2中,c帧图像中,角色模型A的右手与杯子模型B接触,并握住杯子模型B,但角色模型A的右手手指并未与杯子模型B的杯壁贴合,二者之间具有间隙,动画中角色模型与道具模型的交互效果较差。In practical applications, among the multiple frames of images included in the animation obtained by the above rendering, when the character model and the prop model come into contact, there may be a gap or an overlap between the two models, i.e., a poor degree of fit. As shown in frame c of Figure 2-2, the right hand of character model A contacts and holds cup model B, but the fingers of the right hand do not fit against the wall of cup model B; there is a gap between them, and the interaction effect between the character model and the prop model in the animation is poor.
因此,可以对上述图像中的角色模型动作进行调整。具体的,可以从动画包括的多帧图像中,确定出角色模型与道具模型接触但贴合程度较差的至少一帧图像。Therefore, adjustments can be made to the action of the character model in the above image. Specifically, from the multiple frames of images included in the animation, at least one frame of image in which the character model is in contact with the prop model but has a poor degree of fit can be determined.
可选的,动画包括的多帧图像中,角色模型与道具模型接触但贴合程度较差的图像可以只有一帧,如图2-2所示的三帧图像中,a帧图像与b帧图像中,角色模型均未与道具模型接触,c帧图像中,角色模型与道具模型接触,且贴合程度较差。可选的,动画包括的多帧图像中,角色模型与道具模型接触但贴合程度较差的图像也可以有多帧,如以动画制作的内容是角色用右手拿杯子后又换用左手拿杯子为例,可以包括角色模型的右手及左手分别与杯子模型接触但贴合程度较差的两帧图像,不作具体限制。Optionally, among the multiple frames of images included in the animation, there may be only one frame in which the character model contacts the prop model but fits poorly. Of the three frames shown in Figure 2-2, the character model is not in contact with the prop model in frames a and b, while in frame c the character model contacts the prop model with a poor degree of fit. Optionally, there may also be multiple such frames; for example, if the animation content is that the character holds the cup with the right hand and then switches to the left hand, the animation may include two frames in which the right hand and the left hand of the character model respectively contact the cup model with a poor degree of fit, without specific limitation.
确定出上述至少一帧图像后,可以对该至少一帧图像中的角色模型动作进行调整,确定出新的动作参数,使得利用新的动作参数对角色模型进行渲染时,角色模型可以与道具模型接触且提高贴合程度。为了便于描述,可以将前述提到的动作参数称为第一动作参数,将对至少一帧图像中的角色模型动作调整后获得的动作参数称为第二动作参数。After the at least one frame of image is determined, the action of the character model in that frame can be adjusted to determine new action parameters, so that when the character model is rendered using the new action parameters, the character model contacts the prop model with an improved degree of fit. For convenience of description, the aforementioned action parameters may be referred to as first action parameters, and the action parameters obtained after adjusting the action of the character model in the at least one frame of image may be referred to as second action parameters.
具体的,根据用户针对上述角色模型的调整操作,可以获取用户提供的针对上述角色模型的动作调整参数,如角色模型中相应关节的欧拉角,确定上述第二动作参数。其中,该动作调整参数可以根据实际应用场景设置。以图2-2所示的c帧图像为例,用户提供的针对角色模型A的动作调整参数可以是右手手指的动作调整参数,如右手手指各关节的欧拉角,基于该动作调整参数及原第一动作参数中角色模型A中其它动作参数,如手腕动作参数、手臂动作参数等,可以确定第二动作参数。利用该第二动作参数进行渲染,可以实现角色模型与道具模型接触时具有较高的贴合程度。图2-3示出了利用第二动作参数对角色模型进行渲染获得的一帧图像的示意图,该第二动作参数为针对图2-2所示的c帧图像中角色模型A的调整操作确定。如图2-3所示,角色模型A的右手与杯子模型B接触,并握住杯子模型B,且角色模型A的右手手指与杯子模型B的杯壁贴合,二者之间不存在间隙或重叠,交互效果较好。Specifically, according to the user's adjustment operation on the above character model, the action adjustment parameters provided by the user for the character model, such as Euler angles of the corresponding joints of the character model, can be acquired to determine the above second action parameters. The action adjustment parameters may be set according to the actual application scenario. Taking frame c shown in Figure 2-2 as an example, the action adjustment parameters provided by the user for character model A may be those of the right-hand fingers, such as the Euler angles of the right-hand finger joints; based on these action adjustment parameters and the other action parameters of character model A in the original first action parameters, such as wrist and arm action parameters, the second action parameters can be determined. Rendering with the second action parameters achieves a higher degree of fit when the character model contacts the prop model. Figure 2-3 is a schematic diagram of a frame obtained by rendering the character model using the second action parameters, where the second action parameters are determined according to the adjustment operation on character model A in frame c shown in Figure 2-2. As shown in Figure 2-3, the right hand of character model A contacts and holds cup model B, and the fingers of the right hand fit against the wall of cup model B; there is no gap or overlap between them, and the interaction effect is good.
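The step above combines the user-supplied joint adjustments (e.g. Euler angles of the right-hand finger joints) with the untouched joints of the original first action parameters. A minimal sketch of that combination, representing an action parameter set as a joint-name-to-Euler-angle mapping (the joint names and angle values are illustrative assumptions):

```python
def build_second_action_params(first_params: dict, user_adjustments: dict) -> dict:
    """Combine the user's per-joint adjustments with the remaining joints
    of the original first action parameters to form the second action
    parameters: adjusted joints are overridden, all others kept as-is."""
    second = dict(first_params)        # wrist, arm, etc. stay unchanged
    second.update(user_adjustments)    # override only the adjusted joints
    return second

# Frame c of the first animation: the finger joint is re-posed by the user
# so the hand closes around the cup; other joints keep the captured pose.
first = {"right_wrist": (0, 30, 0), "right_index_1": (0, 0, 0), "right_arm": (10, 0, 0)}
adjust = {"right_index_1": (0, 0, 55)}   # hypothetical Euler angles, degrees
second = build_second_action_params(first, adjust)
```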
在步骤S14中,确定针对场景模型中的道具模型的交互参数。In step S14, the interaction parameters for the prop model in the scene model are determined.
上述第一动作参数及第二动作参数是针对场景中的角色模型,在渲染获得场景对应的包含角色模型及道具模型的交互动画时,还需要确定针对场景模型中的道具模型的交互参数,如道具标识等。以动画制作的内容是角色拿桌子上放置的杯子为例,场景模型中桌子上可以放置有多个杯子,且每个杯子都具有各自的标识,根据交互参数中包括的道具标识,可以确定与角色交互的目标杯子。The above first action parameters and second action parameters are directed to the character model in the scene. When rendering the interactive animation corresponding to the scene, which includes the character model and the prop model, it is also necessary to determine the interaction parameters for the prop model in the scene model, such as a prop identifier. Taking as an example animation content in which a character picks up a cup placed on a table, multiple cups may be placed on the table in the scene model, each with its own identifier; according to the prop identifier included in the interaction parameters, the target cup with which the character interacts can be determined.
其中,交互参数还可以有其它的实现方式,将在后续实施例中进行说明,此处不进行赘述。The interaction parameter may also have other implementation manners, which will be described in the subsequent embodiments, and will not be repeated here.
在步骤S15中,根据第二动作参数,第一动作参数以及交互参数,对角色模型及道具模型进行渲染,获得场景模型对应的第二动画。In step S15, the character model and the prop model are rendered according to the second action parameter, the first action parameter and the interaction parameter, and a second animation corresponding to the scene model is obtained.
上述交互参数确定后,可以根据第二动作参数,第一动作参数及交互参数,对场景模型中的角色模型及道具模型进行渲染,获得与场景模型对应的包含角色模型及道具模型交互的动画。为了便于描述,可以将前述利用第一动作参数对角色模型渲染获得的动画称为第一动画,将此处渲染获得的包含角色模型及道具模型交互的动画称为第二动画。After the above-mentioned interaction parameters are determined, the character model and the prop model in the scene model can be rendered according to the second action parameter, the first action parameter and the interaction parameter, and an animation corresponding to the scene model including the interaction of the character model and the prop model can be obtained. For ease of description, the animation obtained by rendering the character model using the first action parameter may be referred to as the first animation, and the animation obtained by rendering here including the interaction between the character model and the prop model is referred to as the second animation.
以动画制作的内容是角色坐在椅子上用右手拿起桌子上放置的杯子为例,如图2-4所示,是根据另一示例性实施例的一种动画中多帧图像的示意图。其中,角色模型A坐在椅子模型上,杯子模型B放置在桌子模型上,第一帧图像中,角色模型A朝向杯子模型B所在的方向伸出右手,第二帧图像中,角色模型A的右手朝向杯子模型B移动,与杯子模型B的距离缩短,第三帧图像中,角色模型A的右手移动至杯子模型B的位置,与杯子模型B接触,并握住杯子模型B,第四帧图像中,角色模型A的右手将杯子模型B拿起,杯子模型B离开桌子模型,第五帧图像中,角色模型A的右手收回,与杯子模型B一起远离桌子模型。且上述多帧图像中,角色模型A的右手与杯子模型B接触时,二者之间不存在间隙或重叠等情况,贴合程度较好。Taking as an example animation content in which a character sits on a chair and picks up a cup placed on a table with the right hand, Figure 2-4 is a schematic diagram of multiple frames of images in an animation according to another exemplary embodiment. The character model A sits on the chair model and the cup model B is placed on the table model. In the first frame, character model A extends its right hand toward cup model B; in the second frame, the right hand moves toward cup model B, shortening the distance to it; in the third frame, the right hand reaches the position of cup model B, contacts it, and holds it; in the fourth frame, character model A's right hand picks up cup model B, and cup model B leaves the table model; in the fifth frame, the right hand retracts and moves away from the table model together with cup model B. Moreover, in the above frames, when the right hand of character model A contacts cup model B, there is no gap or overlap between them, and the degree of fit is good.
本实施例中,预先搭建的场景模型中包括角色模型及道具模型,可以基于采集的用户动作数据获得第一动作参数,并利用该第一动作参数对上述角色模型进行渲染,以获得包含角色模型动作的第一动画,以及根据用户针对第一动画中的至少一帧图像中的角色模型的调整操作,确定第二动作参数,之后根据第二动作参数,第一动作参数及针对道具模型的交互参数对上述角色模型及道具模型进行渲染,可以获得场景模型对应的包含角色模型动作及角色模型与道具模型互动的第二动画。上述第一动作参数是基于采集的用户的动作数据获得的,无需手动设置,利用该第一动作参数进行渲染获得第一动画,解决了传统方案中需要手动制作大量关键帧动画的问题,降低了制作成本,且提高了制作效率。并且,根据用户针对第一动画中的至少一帧图像中的角色模型的调整操作,确定第二动作参数,并结合上述第一动作参数与第二动作参数及交互参数进行渲染获得第二动画,可以确保角色模型与道具模型的交互效果。In this embodiment, the pre-built scene model includes a character model and a prop model. First action parameters can be obtained based on the collected user action data, and the character model can be rendered using the first action parameters to obtain a first animation containing the character model's actions; second action parameters are determined according to the user's adjustment operation on the character model in at least one frame of the first animation; the character model and the prop model are then rendered according to the second action parameters, the first action parameters and the interaction parameters for the prop model, obtaining a second animation corresponding to the scene model that contains the character model's actions and the interaction between the character model and the prop model. The first action parameters are obtained based on the collected action data of the user and do not need to be set manually; rendering with them to obtain the first animation solves the problem in the traditional solution that a large number of key-frame animations must be produced manually, reducing production cost and improving production efficiency. In addition, determining the second action parameters according to the user's adjustment operation on the character model in at least one frame of the first animation, and rendering with the first action parameters, the second action parameters and the interaction parameters to obtain the second animation, ensures the interaction effect between the character model and the prop model.
实际应用中,上述基于采集的用户的动作数据,获得第一动作参数可以由神经网络实现。具体的,可以将利用视频采集装置采集获得的用户的动作数据发送至神经网络的输入层,经神经网络处理后,输出上述第一动作参数。其中,该神经网络可以根据预先采集的包括用户动作数据的训练样本,以及对应的第一动作参数训练获得,或者可以从其它开源程序中获得,不作具体限制。In practical applications, the above-mentioned acquisition of the first motion parameter based on the collected motion data of the user may be implemented by a neural network. Specifically, the user's motion data collected by the video acquisition device can be sent to the input layer of the neural network, and after being processed by the neural network, the above-mentioned first motion parameters are output. Wherein, the neural network may be obtained by training according to pre-collected training samples including user action data and corresponding first action parameters, or may be obtained from other open source programs, with no specific limitation.
可选的,可以将上述神经网络进行封装,作为渲染引擎中的插件,从而可以实现在利用视频采集装置采集用户的动作数据的同时,运行渲染引擎,并调用封装的神经网络插件,对采集的用户的动作数据进行处理,获得第一动作参数。渲染引擎基于该第一动作参数,可以对角色模型进行渲染。通过将神经网络封装为渲染引擎中的插件,可以实现同步进行上述采集用户动作数据与渲染角色模型操作,提高动画生成效率。Optionally, the above neural network may be encapsulated as a plug-in of the rendering engine, so that while the video capture device collects the user's action data, the rendering engine runs and calls the encapsulated neural network plug-in to process the collected action data and obtain the first action parameters. Based on the first action parameters, the rendering engine can render the character model. By encapsulating the neural network as a plug-in of the rendering engine, the collection of user action data and the rendering of the character model can be performed synchronously, improving the efficiency of animation generation.
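The capture-to-parameters pipeline above can be sketched as a per-frame inference loop. This is a hedged illustration only: the network here is a stand-in callable, not the actual model or plug-in interface of the disclosure:

```python
def frames_to_action_params(frames, pose_network):
    """Feed captured video frames to a pose-estimation network and collect
    one set of first action parameters (per-joint rotations) per frame."""
    return [pose_network(frame) for frame in frames]

# Hypothetical stand-in network: maps a frame to a joint -> Euler-angle dict.
def dummy_pose_network(frame):
    return {"right_elbow": (0.0, 0.0, float(frame["id"]))}

frames = [{"id": 0}, {"id": 1}, {"id": 2}]
params = frames_to_action_params(frames, dummy_pose_network)
```

In the plug-in arrangement described above, this loop would run inside the engine's frame update so that capture and rendering proceed in lockstep.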
进一步地,上述渲染方法还可以包括展示渲染获得的第一动画,该第一动画可以用于指示用户进行动作调整。实际应用中,用户进行预设动作时,只涉及用户自身的动作,并未与道具进行交互,且考虑到角色模型与用户自身肢体比例不同等因素,在进行上述预设动作时,无法对动作程度进行准确判断。以预设动作是向前伸出手臂为例,无法对手臂的伸出长度进行准确判断,将会影响渲染效果。Further, the above rendering method may also include displaying the first animation obtained by rendering, which can be used to instruct the user to adjust the action. In practical applications, when the user performs a preset action, only the user's own movement is involved and no props are interacted with; moreover, considering factors such as the difference in body proportions between the character model and the user, the degree of the action cannot be accurately judged while performing the preset action. Taking the preset action of extending an arm forward as an example, the extension length of the arm cannot be accurately judged, which will affect the rendering effect.
通过同步进行采集用户动作数据与渲染角色模型操作,并将渲染获得的第一动画进行展示,可以实现根据角色模型的渲染情况对用户的动作进行实时反馈,使用户明确动作的准确程度,指示用户进行动作调整,提高动画生成的准确性。以预设动作是向前伸出手臂为例,用户执行向前伸出手臂的动作时,展示的第一动画中角色模型也向前伸出手臂,且根据角色模型伸出手臂的程度,调整自身手臂的伸出程度,如角色模型伸出的手臂与墙壁接触,无需继续执行该动作,用户可以停止向前伸出自身手臂。通过实时反馈,可以提高渲染效果。By synchronizing the collection of user action data with the rendering of the character model, and displaying the first animation obtained by rendering, real-time feedback on the user's actions can be provided according to how the character model is rendered, making the accuracy of the action clear to the user, instructing the user to adjust the action, and improving the accuracy of animation generation. Taking the preset action of extending an arm forward as an example, when the user extends an arm forward, the character model in the displayed first animation also extends its arm, and the user adjusts the extension of his or her own arm according to how far the character model's arm extends; for instance, if the character model's extended arm touches a wall, the action need not continue and the user can stop extending the arm. Real-time feedback thus improves the rendering effect.
实际应用中,交互参数可以包括交互部位、交互道具及交互时刻。其中,交互部位可以指角色模型中与道具模型接触的部位,如可以包括左手,右手等。为了便于理解,以交互部位是角色模型的右手为例,可以将道具模型设置为角色模型的右手手腕中的一个子节点,跟随角色模型的右手手腕进行动作,如平移,旋转等。交互道具可以包括杯子、武器等,交互时刻可以指道具模型与角色模型接触,并设置为角色模型的交互部位的节点的时刻,如第一动画中与交互时刻对应的图像可以指第一动画包括的多帧图像中角色模型与道具模型接触的至少一帧图像。In practical applications, the interaction parameters may include an interaction part, an interactive prop and an interaction moment. The interaction part may refer to the part of the character model that contacts the prop model, such as the left hand or the right hand. For ease of understanding, taking the interaction part being the right hand of the character model as an example, the prop model can be set as a child node of the right wrist of the character model and follow the right wrist's motion, such as translation and rotation. The interactive prop may include a cup, a weapon, etc. The interaction moment may refer to the moment when the prop model contacts the character model and is set as a node of the interaction part of the character model; for example, the image corresponding to the interaction moment in the first animation may refer to at least one frame, among the multiple frames of the first animation, in which the character model contacts the prop model.
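The parent-child attachment described above can be illustrated with a minimal scene-graph sketch: once the prop becomes a child node of the wrist, its world position follows the wrist automatically. The `SceneNode` class is an assumption for illustration (translation only; rotation propagation works analogously):

```python
class SceneNode:
    """Minimal scene-graph node: a child's world position is its parent's
    world position plus a local offset, so children follow their parent."""
    def __init__(self, name, local_pos=(0.0, 0.0, 0.0), parent=None):
        self.name = name
        self.local_pos = local_pos
        self.parent = parent

    def world_pos(self):
        if self.parent is None:
            return self.local_pos
        px, py, pz = self.parent.world_pos()
        lx, ly, lz = self.local_pos
        return (px + lx, py + ly, pz + lz)

# At the interaction moment, parent the cup under the right wrist so it
# follows every later wrist movement.
wrist = SceneNode("right_wrist", local_pos=(0.5, 1.0, 0.25))
cup = SceneNode("cup_B", local_pos=(0.0, 0.25, 0.0), parent=wrist)
before = cup.world_pos()            # cup held at the wrist's position

wrist.local_pos = (0.5, 1.5, 0.0)   # the wrist moves after pickup
after = cup.world_pos()             # the cup follows automatically
```

Engines expose the same idea through re-parenting or socket-attachment calls; nothing about the prop itself needs to be animated once attached.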
下面结合上述交互参数,对渲染获得场景模型对应的第二动画的过程进行说明。The following describes the process of rendering and obtaining the second animation corresponding to the scene model in combination with the above interaction parameters.
具体的,可以根据确定的第一动作参数及第二动作参数,对场景模型中的角色模型进行渲染,并按照上述交互时刻,将交互道具的道具模型设置为角色模型的交互部位的节点,以获得场景模型对应的包含角色模型与道具模型交互的第二动画。结合图2-4所示的示意图,利用确定的第一动作参数及第二动作参数对角色模型A进行渲染,若交互部位是角色模型A的右手,交互道具是杯子模型B,交互时刻对应的图像为多帧图像中的第三帧图像,则在第三帧图像中,将杯子模型B作为角色模型A的右手手腕中一个子节点,在之后的各帧图像中,跟随角色模型A的右手一起动作,如在第四帧图像中,杯子模型B跟随角色模型A的右手一起离开桌子模型,第五帧图像中,跟随角色模型A的右手继续远离桌子模型。Specifically, the character model in the scene model can be rendered according to the determined first action parameters and second action parameters, and, at the above interaction moment, the prop model of the interactive prop is set as a node of the interaction part of the character model, so as to obtain the second animation corresponding to the scene model and including the interaction between the character model and the prop model. With reference to the schematic diagram shown in Figure 2-4, character model A is rendered using the determined first action parameters and second action parameters. If the interaction part is the right hand of character model A, the interactive prop is cup model B, and the image corresponding to the interaction moment is the third frame of the multiple frames, then in the third frame cup model B is set as a child node of the right wrist of character model A and follows the right hand of character model A in each subsequent frame; for example, in the fourth frame, cup model B leaves the table model together with the right hand of character model A, and in the fifth frame it continues to move away from the table model following the right hand of character model A.
实际应用中,利用上述第一动作参数及第二动作参数对角色模型进行渲染,提高了角色模型与道具模型接触时,二者之间的贴合程度。为了增强渲染获得动画的多帧图像中,各帧图像之间的连贯性,提高渲染效果,还可以对交互时刻之前和之后的至少一帧图像所对应的角色模型的动作参数进行调整。In practical applications, the above-mentioned first action parameter and second action parameter are used to render the character model, which improves the degree of fit between the character model and the prop model when they are in contact. In order to enhance the coherence between the frames of images obtained by rendering the animation and improve the rendering effect, it is also possible to adjust the action parameters of the character model corresponding to at least one frame of images before and after the interaction moment.
因此,在某些实施例中,交互参数还可以包括过渡时长,可以对交互时刻之前和之后过渡时长内的图像所对应的角色模型的动作参数进行调整。其中,交互时刻之前和之后过渡时长内的图像可以指交互时刻之前某一预设时间段以及之后某一预设时间段内的图像。如以动画包括10帧图像为例,交互时刻所对应的图像可以是第6帧图像,交互时刻之前过渡时长内的图像可以包括第4帧图像,第5帧图像,交互时刻之后过渡时长内的图像可以包括第7帧图像与第8帧图像。过渡时长可以根据实际应用场景进行设置,不作具体限制。Therefore, in some embodiments, the interaction parameters may further include a transition duration, and the action parameters of the character model corresponding to the images within the transition duration before and after the interaction moment can be adjusted. The images within the transition duration before and after the interaction moment may refer to images within a certain preset time period before the interaction moment and a certain preset time period after it. Taking an animation of 10 frames as an example, if the image corresponding to the interaction moment is the 6th frame, the images within the transition duration before the interaction moment may include the 4th and 5th frames, and the images within the transition duration after it may include the 7th and 8th frames. The transition duration may be set according to the actual application scenario, without specific limitation.
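The 10-frame example above can be sketched as a small helper that lists the frames whose action parameters are adjusted. The window bounds follow the formulas used later in this section (frames t0-T0+1 through t0+T0-1 around interaction frame t0); the function itself is an illustrative assumption:

```python
def transition_window(t0: int, T0: int, num_frames: int):
    """Frames whose action parameters are adjusted around interaction
    frame t0: frames t0-T0+1 .. t0+T0-1, excluding t0 itself and clipped
    to the animation's valid 1-based frame range."""
    lo = max(1, t0 - T0 + 1)
    hi = min(num_frames, t0 + T0 - 1)
    return [t for t in range(lo, hi + 1) if t != t0]

# The worked example above: a 10-frame animation, interaction at frame 6,
# transition duration of 3 frames on each side -> frames 4, 5, 7, 8.
window = transition_window(t0=6, T0=3, num_frames=10)
```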
下面结合图3所示的示意图,对上述动作参数调整过程进行说明。如图3所示,是根据一示例性实施例示出的一种动作参数调整方法的流程图,可以包括以下步骤。The above action-parameter adjustment process is described below with reference to the schematic diagram shown in FIG. 3. FIG. 3 is a flowchart of an action parameter adjustment method according to an exemplary embodiment, which may include the following steps.
在步骤S31中,确定第一动画中,与交互时刻对应的图像对应的第一动作参数与第二动作参数的参数差。In step S31, the parameter difference between the first action parameter and the second action parameter corresponding to the image at the interaction moment in the first animation is determined.
上述第一动画包括的多帧图像中,每一帧图像都有与该帧图像中角色模型的动作对应的第一动作参数,可以用M[t]表示,t=1,2,···,T,t表示帧数。Among the multiple frames of images included in the above first animation, each frame has a first action parameter corresponding to the action of the character model in that frame, which may be denoted M[t], t = 1, 2, ···, T, where t denotes the frame number.
其中,对于与交互时刻对应的图像,经过根据用户针对该图像中角色模型的调整操作,该图像可以对应第一动作参数及第二动作参数。以交互时刻是t0为例,t0帧图像对应的第一动作参数可以用M[t0]表示,t0帧图像对应的第二动作参数可以用Mg表示。Wherein, for the image corresponding to the interaction moment, the image may correspond to the first action parameter and the second action parameter after adjustment operations by the user on the character model in the image. Taking the interaction time t0 as an example, the first action parameter corresponding to the t0 frame image may be represented by M[t0], and the second action parameter corresponding to the t0 frame image may be represented by Mg.
根据上述与交互时刻对应的图像对应的第一动作参数及第二动作参数,可以计算二者之间的参数差,该参数差可以用Mg-M[t0]表示。According to the first action parameter and the second action parameter corresponding to the image corresponding to the interaction moment, the parameter difference between the two can be calculated, and the parameter difference can be represented by Mg-M[t0].
在步骤S32中,按照预设的调整规则,确定第一动画中,交互时刻之前和之后过渡时长内分别与每一帧图像对应的参数差。In step S32, according to a preset adjustment rule, the parameter difference corresponding to each frame of image within the transition duration before and after the interaction moment in the first animation is determined.
上述参数差确定之后,可以按照调整规则,确定交互时刻之前和之后过渡时长内分别与每一帧图像对应的参数差。其中,该调整规则可以根据实际应用场景进行设置,可以有多种实现方式。After the above parameter difference is determined, the parameter difference corresponding to each frame of image in the transition time before and after the interaction moment can be determined according to the adjustment rule. Among them, the adjustment rule can be set according to the actual application scenario, and can be implemented in various ways.
作为一种可选的实现方式,可以将上述参数差与过渡时长进行求商计算,获得过渡时长内每一帧图像对应的参数差,此时,每一帧图像对应的参数差相同。以过渡时长是T0为例,过渡时长内每一帧图像对应的参数差可以用m表示,m=(Mg-M[t0])/T0。As an optional implementation, quotient calculation can be performed between the above parameter difference and the transition duration to obtain the parameter difference corresponding to each frame of images within the transition duration. In this case, the parameter differences corresponding to each frame of image are the same. Taking the transition duration as T0 as an example, the parameter difference corresponding to each frame of image in the transition duration may be represented by m, where m=(Mg-M[t0])/T0.
作为另一种可选的实现方式,可以按照过渡时长内每一帧图像分别对应的时刻与交互时刻的时间差,设置每一帧图像对应的参数差调整比例,并基于上述参数差及参数差调整比例,计算获得每一帧图像对应的参数差。其中,时间差大的图像对应的参数差调整比例可以小于时间差小的图像对应的参数差调整比例。如,可以设置过渡时长内与交互时刻时间差最大的时刻对应的图像的参数差调整比例是1/T0,以及与交互时刻时间差次大的时刻对应的图像的参数差调整比例是2/T0等,将上述参数差及各参数差调整比例进行乘积计算,获得各帧图像对应的参数差。如,计算过渡时长内与交互时刻时间差最大的时刻对应的图像的参数差为(Mg-M[t0])*1/T0,以及与交互时刻时间差次大的时刻对应的图像的参数差为(Mg-M[t0])*2/T0等。As another optional implementation, a parameter-difference adjustment ratio may be set for each frame according to the time difference between the moment of that frame and the interaction moment within the transition duration, and the parameter difference corresponding to each frame may be calculated based on the above parameter difference and the adjustment ratio. The adjustment ratio for a frame with a larger time difference may be smaller than that for a frame with a smaller time difference. For example, within the transition duration, the adjustment ratio of the image corresponding to the moment with the largest time difference from the interaction moment may be set to 1/T0, that of the image corresponding to the moment with the second largest time difference to 2/T0, and so on; multiplying the above parameter difference by each adjustment ratio yields the parameter difference corresponding to each frame. For example, the parameter difference of the image corresponding to the moment with the largest time difference from the interaction moment within the transition duration is (Mg-M[t0])*1/T0, and that of the image corresponding to the moment with the second largest time difference is (Mg-M[t0])*2/T0, and so on.
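The proportional scheme above amounts to a linear ramp: a frame at time difference d from the interaction moment receives the share (T0-d)/T0 of the parameter difference, which reproduces the 1/T0, 2/T0, … sequence in the text. A minimal sketch (the function name is illustrative):

```python
def adjustment_ratio(t: int, t0: int, T0: int) -> float:
    """Per-frame share of the parameter difference: the frame with the
    largest time difference from t0 within the window (T0-1) gets 1/T0,
    the next 2/T0, and so on up to the interaction frame itself."""
    return (T0 - abs(t - t0)) / T0

# With t0 = 6 and T0 = 3: frames 4..8 get ratios 1/3, 2/3, 1, 2/3, 1/3.
ratios = [adjustment_ratio(t, t0=6, T0=3) for t in (4, 5, 6, 7, 8)]
```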
在步骤S33中,基于第一动画中,交互时刻之前和之后过渡时长内分别与每一帧图像对应的第一动作参数及参数差,确定分别与每一帧图像对应的第二动作参数。In step S33 , based on the first motion parameters and parameter differences corresponding to each frame of images in the transition time before and after the interaction moment in the first animation, respectively, second motion parameters corresponding to each frame of image are determined.
根据上述计算获得的过渡时长内每一帧图像对应的参数差,可以将每一帧图像对应的第一动作参数与参数差进行加和计算,获得该过渡时长内每一帧图像对应的动作参数。According to the parameter difference corresponding to each frame within the transition duration obtained by the above calculation, the first action parameter corresponding to each frame and its parameter difference can be summed to obtain the action parameter corresponding to that frame within the transition duration.
以交互时刻是t0,t0帧图像对应的第一动作参数是M[t0],第二动作参数是Mg,过渡时长是T0,交互时刻t0之前和之后过渡时长T0内每一帧图像所对应的第一动作参数可以表示为:M[t0-T0+1],M[t0-T0+2],…,M[t0-1],M[t0],M[t0+1],…,M[t0+T0-2],M[t0+T0-1]。其中,M[t0-T0+1]表示交互时刻前过渡时长T0内,与交互时刻时间差最大的时刻对应的图像的第一动作参数,M[t0-T0+2]表示交互时刻前过渡时长T0内,与交互时刻时间差次大的时刻对应的图像的第一动作参数,不再进行赘述。Taking as an example the case where the interaction moment is t0, the first action parameter corresponding to frame t0 is M[t0], the second action parameter is Mg, and the transition duration is T0, the first action parameters corresponding to each frame within the transition duration T0 before and after the interaction moment t0 can be expressed as: M[t0-T0+1], M[t0-T0+2], …, M[t0-1], M[t0], M[t0+1], …, M[t0+T0-2], M[t0+T0-1]. Here, M[t0-T0+1] denotes the first action parameter of the image corresponding to the moment with the largest time difference from the interaction moment within the transition duration T0 before the interaction moment, and M[t0-T0+2] denotes that of the image corresponding to the moment with the second largest time difference; the rest follow analogously.
若按照过渡时长内每一帧图像所对应的时刻与交互时刻的时间差,设置每一帧图像对应的参数差调整比例,并计算获得每一帧图像对应的参数差,t0帧图像对应的第二动作参数是Mg,则交互时刻t0前后过渡时长T0内每一帧图像所对应的第二动作参数可以表示为:M[t0-T0+1]+(Mg-M[t0])*1/T0,M[t0-T0+2]+(Mg-M[t0])*2/T0,…,M[t0-1]+(Mg-M[t0])*(T0-1)/T0,Mg,M[t0+1]+(Mg-M[t0])*(T0-1)/T0,…,M[t0+T0-2]+(Mg-M[t0])*2/T0,M[t0+T0-1]+(Mg-M[t0])*1/T0。If the parameter-difference adjustment ratio corresponding to each frame is set according to the time difference between the moment of that frame and the interaction moment within the transition duration, the parameter difference corresponding to each frame is calculated accordingly, and the second action parameter corresponding to frame t0 is Mg, then the second action parameters corresponding to each frame within the transition duration T0 before and after the interaction moment t0 can be expressed as: M[t0-T0+1]+(Mg-M[t0])*1/T0, M[t0-T0+2]+(Mg-M[t0])*2/T0, …, M[t0-1]+(Mg-M[t0])*(T0-1)/T0, Mg, M[t0+1]+(Mg-M[t0])*(T0-1)/T0, …, M[t0+T0-2]+(Mg-M[t0])*2/T0, M[t0+T0-1]+(Mg-M[t0])*1/T0.
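The sequence of second action parameters above can be computed in one pass over the animation. A minimal sketch, assuming scalar parameters for clarity (per-joint vectors work the same way component-wise); the function name and sample values are illustrative:

```python
def blend_transition(M, t0, Mg, T0):
    """Apply the ramped correction M[t] + (Mg - M[t0]) * (T0 - |t - t0|)/T0
    to every frame inside the transition window and leave the remaining
    frames unchanged. At t = t0 the weight is 1, so that frame takes
    exactly the adjusted value Mg."""
    out = dict(M)
    diff = Mg - M[t0]
    for t in M:
        d = abs(t - t0)
        if d < T0:
            out[t] = M[t] + diff * (T0 - d) / T0
    return out

# Scalar example: frames 1..10 with first action parameter M[t] = 10*t,
# interaction frame 6 adjusted to Mg = 63, transition duration T0 = 3.
M = {t: 10.0 * t for t in range(1, 11)}
blended = blend_transition(M, t0=6, Mg=63.0, T0=3)
```

Here the correction of 3.0 at frame 6 ramps down to 2.0 at frames 5 and 7 and 1.0 at frames 4 and 8, while frames outside the window keep their first action parameters, matching the formulas above.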
其中,第一动画中其它帧图像对应的第一动作参数不变。The first action parameters corresponding to other frame images in the first animation remain unchanged.
本实施例中,通过确定第一动画中,与交互时刻对应的图像对应的第一动作参数与第二动作参数的参数差,并以该参数差作为调整基础,结合预设的调整规则,确定第一动画中,交互时刻之前和之后过渡时长内分别与每一帧图像对应的参数差,以及基于每一帧图像对应的第一动作参数及参数差确定对应的第二动作参数,实现了对交互时刻之前和之后过渡时长内各帧图像对应的动作参数进行调整,提高了动作参数的调整准确性,从而在利用第二动作参数及第一动作参数对角色模型进行渲染时,进一步增强角色模型的动作连贯性,提高渲染效果。In this embodiment, the parameter difference between the first action parameter and the second action parameter corresponding to the image at the interaction moment in the first animation is determined; taking this parameter difference as the basis of adjustment and combining it with the preset adjustment rule, the parameter difference corresponding to each frame within the transition duration before and after the interaction moment is determined, and the corresponding second action parameter is determined based on the first action parameter and parameter difference of each frame. This realizes the adjustment of the action parameters of the frames within the transition duration before and after the interaction moment and improves the adjustment accuracy of the action parameters, so that when the character model is rendered using the second action parameters and the first action parameters, the coherence of the character model's actions is further enhanced and the rendering effect is improved.
Fig. 4 is a flowchart of an animation generation method according to another exemplary embodiment, which may include the following steps.
In step S41, a first action parameter is obtained based on collected action data of a user.
In step S42, a character model in a pre-built scene model is rendered according to the first action parameter to obtain a first animation.
In step S43, a second action parameter is determined according to an adjustment operation on the character model in at least one frame of the first animation.
In step S44, interaction parameters for a prop model in the scene model are determined, the interaction parameters including an interaction part, an interaction prop, an interaction moment and a transition duration.
In step S45, the parameter difference between the first action parameter and the second action parameter of the image corresponding to the interaction moment in the first animation is determined.
In step S46, the parameter difference of each frame within the transition duration before and after the interaction moment in the first animation is determined according to a preset adjustment rule.
In step S47, the second action parameter of each frame within the transition duration before and after the interaction moment in the first animation is determined from that frame's first action parameter and parameter difference.
In step S48, the character model is rendered according to the second action parameters and the first action parameters, and, at the interaction moment, the prop model of the interaction prop is set as a node of the interaction part of the character model, obtaining a second animation corresponding to the scene model.
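Setting the prop model as a node of the interaction part, as in step S48, amounts to reparenting the prop under the corresponding bone in the scene graph from the interaction frame onward. A hedged sketch with a toy scene graph follows; the `Node` class and names such as `right_hand` and `cup` are illustrative, not an engine API:

```python
class Node:
    """Toy scene-graph node; real engines expose an equivalent parenting API."""
    def __init__(self, name, parent=None):
        self.name, self.children, self.parent = name, [], None
        if parent:
            parent.attach(self)

    def attach(self, child):
        # Move `child` under this node, detaching it from any previous parent,
        # so the child inherits this node's transform from now on.
        if child.parent:
            child.parent.children.remove(child)
        child.parent = self
        self.children.append(child)

scene = Node("scene")
character = Node("character", parent=scene)
right_hand = Node("right_hand", parent=character)
table = Node("table", parent=scene)
cup = Node("cup", parent=table)

# At the interaction frame, the cup starts following the hand's transform.
right_hand.attach(cup)
```

After the reparenting, whatever motion the hand bone receives from the action parameters carries the cup along automatically, which is why no separate animation needs to be authored for the prop.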
In this embodiment, the implementation of each step may refer to the specific implementations in the foregoing embodiments, which are not repeated here.
The following describes, with reference to the embodiment shown in Fig. 4, the production of an animation scene in which a character sits on a chair and stretches out its right hand to pick up a cup placed on a table.
According to this animation content, the scene model, including a table model and a chair model, is imported into the rendering engine; the character model and the cup model are placed in the scene, and their positions and orientations are adjusted so that the character model is located between the table model and the chair model, facing the table, with the cup model placed on the table. A pre-trained neural network is packaged as a plug-in of the rendering engine. The network receives, at its input layer, a video stream containing the user's action data captured by a video capture device and, after processing by its analysis layers, outputs the action parameters at its output layer, i.e., the rotation Euler angles of the joints of the character model, such as the elbow and finger joints.
After the above preparation, the user performs the preset actions according to the animation content; the video capture device captures the actions and transmits the captured video stream to the neural network, which processes it to obtain the first action parameters, i.e., the rotation Euler angle values of the joints of the character skeleton. Based on the first action parameters, the rendering engine renders the character model to obtain a first animation containing the character's actions. During this process, the first animation may be displayed to guide the user in adjusting the actions. From the multiple frames of the first animation, at least one frame is identified in which the character model and the prop model are in contact but fit together poorly. The second action parameters are then determined according to the user's adjustment operation on the character model in that at least one frame.
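The action parameters here are per-joint rotation Euler angles; to pose a bone, an engine typically converts each angle triple into a rotation matrix. A minimal sketch is below — the intrinsic Z·Y·X composition order is an assumption, since engines differ in their conventions:

```python
import math

def euler_to_matrix(rx, ry, rz):
    """Euler angles (radians, composed as Rz @ Ry @ Rx) to a 3x3 rotation matrix."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(matmul(Rz, Ry), Rx)
```

Applying such a matrix per joint, frame by frame, is what "rendering the character model based on the first action parameters" reduces to at the skeleton level.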
Next, the interaction parameters for the prop model in the scene model are determined, including the interaction part, the interaction prop, the interaction moment and the transition duration. Here the interaction part is the right hand of the character model and the interaction prop is the cup. From the parameter difference between the first and second action parameters of the image corresponding to the interaction moment in the first animation, the second action parameters of each frame within the transition duration before and after the interaction moment are determined, while the other frames keep their first action parameters. The character model is rendered with these first and second action parameters, and at the interaction moment the cup model is set as a node of the character model's right hand, so that a second animation corresponding to the scene model, containing the interaction between the character model and the cup model, is obtained, completing the animation production.
Fig. 5 is a block diagram of an animation generation apparatus according to an exemplary embodiment. Referring to Fig. 5, the apparatus includes an acquisition module 501, a first rendering module 502, a first determination module 503, a second determination module 504 and a second rendering module 505.
The acquisition module 501 may be configured to obtain a first action parameter based on collected action data of a user.
The first rendering module 502 may be configured to render a character model in a pre-built scene model according to the first action parameter to obtain a first animation.
The first determination module 503 may be configured to determine a second action parameter according to an adjustment operation on the character model in at least one frame of the first animation.
The second determination module 504 may be configured to determine interaction parameters for a prop model in the scene model.
The second rendering module 505 may be configured to render the character model and the prop model according to the second action parameter, the first action parameter and the interaction parameters, obtaining a second animation corresponding to the scene model.
In this embodiment, the animation generation apparatus can implement the animation generation method shown in Fig. 1. The pre-built scene model includes a character model and a prop model. A first action parameter can be obtained based on collected user action data and used to render the character model to obtain a first animation containing the character's actions; a second action parameter is determined according to the user's adjustment operation on the character model in at least one frame of the first animation; and the character model and the prop model are then rendered according to the second action parameter, the first action parameter and the interaction parameters for the prop model, obtaining a second animation corresponding to the scene model that contains the character's actions and the interaction between the character model and the prop model. Because the first action parameter is obtained from the collected user action data, it does not need to be set manually; rendering with it to obtain the first animation removes the need to manually produce a large number of key-frame animations as in the traditional solution, reducing production cost and improving production efficiency. Moreover, determining the second action parameter from the user's adjustment operation on the character model in at least one frame of the first animation, and rendering with the first and second action parameters together with the interaction parameters to obtain the second animation, ensures the quality of the interaction between the character model and the prop model.
In some embodiments, the second rendering module 505 may be specifically configured to render the character model according to the second action parameters of the at least one frame of the first animation and the first action parameters of the remaining frames (those other than the at least one frame), and, at the interaction moment, set the prop model of the interaction prop as a node of the interaction part of the character model, obtaining a second animation corresponding to the scene model.
In some embodiments, the second rendering module 505 may be specifically configured to: determine, based on the second action parameter, the second action parameters of each frame within the transition duration before and after the interaction moment in the first animation; render the character model according to the second action parameters of the at least one frame and of each frame within the transition duration before and after the interaction moment, and the first action parameters of the remaining frames; and, at the interaction moment, set the prop model of the interaction prop as a node of the interaction part of the character model, obtaining a second animation corresponding to the scene model.
In some embodiments, the second rendering module 505 may be specifically configured to: determine, based on the second action parameters of the at least one frame of the first animation, the second action parameters of each frame within the transition duration before and after the interaction moment; and render the character model and the prop model according to the second action parameters of the at least one frame and of each frame within the transition duration before and after the interaction moment, the first action parameters of the remaining frames, and the interaction parameters, obtaining a second animation corresponding to the scene model.
In some embodiments, the second rendering module 505 may be specifically configured to: determine the parameter difference between the first action parameter and the second action parameter of the image corresponding to the interaction moment in the first animation; determine, from that parameter difference and the time difference between each frame's moment within the transition duration and the interaction moment, the parameter-difference adjustment ratio of each frame, a frame with a larger time difference having a smaller adjustment ratio than a frame with a smaller time difference, and determine, based on the adjustment ratios and the parameter difference, the parameter difference of each frame within the transition duration before and after the interaction moment; determine the second action parameter of each such frame from its first action parameter and its parameter difference; and render the character model and the prop model according to the second action parameters of the at least one frame and of each frame within the transition duration before and after the interaction moment, the first action parameters of the remaining frames, and the interaction parameters, obtaining a second animation corresponding to the scene model.
In some embodiments, the acquisition module 501 may be specifically configured to invoke, based on collected action data of a user, a pre-configured neural network model packaged as a plug-in of the rendering engine, and obtain the first action parameter using the neural network model.
In some embodiments, the apparatus may further include a display module configured to display the first animation, the first animation being used to guide the user in adjusting actions.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.
Fig. 6 is a block diagram of an electronic device according to an exemplary embodiment, which includes a processor 601 and a memory 602 for storing instructions executable by the processor 601.
The processor 601 may be configured to execute the instructions to implement the animation generation method illustrated in any of Fig. 1, Fig. 3 and Fig. 4.
In an exemplary embodiment, a computer-readable storage medium including instructions is also provided, such as a memory including instructions executable by a processor of an electronic device to perform the above method. Optionally, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, including computer instructions which, when executed by a processor, implement the animation generation method illustrated in any of Fig. 1, Fig. 3 and Fig. 4.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. The present disclosure is intended to cover any variations, uses or adaptations that follow its general principles and include common general knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111619203.5A | 2021-12-27 | 2021-12-27 | Animation generation method, device, electronic device, storage medium and program product |
| Publication Number | Publication Date |
|---|---|
| CN114463469A | 2022-05-10 |
| CN114463469B | 2025-05-09 |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120021828A1 (en)* | 2010-02-24 | 2012-01-26 | Valve Corporation | Graphical user interface for modification of animation data using preset animation samples |
| CN107154069A (en)* | 2017-05-11 | 2017-09-12 | 上海微漫网络科技有限公司 | A kind of data processing method and system based on virtual role |
| CN111598983A (en)* | 2020-05-18 | 2020-08-28 | 北京乐元素文化发展有限公司 | Animation system, animation method, storage medium, and program product |
| CN112669194A (en)* | 2021-01-06 | 2021-04-16 | 腾讯科技(深圳)有限公司 | Animation processing method, device and equipment in virtual scene and storage medium |
| CN112742027A (en)* | 2020-12-29 | 2021-05-04 | 珠海金山网络游戏科技有限公司 | Game picture rendering method and device |
| Title |
|---|
| M NEFF: "Gesture modeling and animation based on a probabilistic re-creation of speaker style", 《ACM TRANSACTIONS ON GRAPHICS》, 31 December 2008 (2008-12-31)* |
| WANG TIANYI (王天翼): "Research on the application of real-time virtual previsualization in film", Modern Film Technology (《现代电影技术》), 31 August 2021 (2021-08-31)* |
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |