CN112634416B - Method and device for generating virtual image model, electronic equipment and storage medium - Google Patents

Method and device for generating virtual image model, electronic equipment and storage medium

Info

Publication number
CN112634416B
CN112634416B
Authority
CN
China
Prior art keywords
avatar, target, sub, dimensional model, model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011536809.8A
Other languages
Chinese (zh)
Other versions
CN112634416A (en)
Inventor
孙佳佳
刘晓强
马里千
金博
张博宁
王众怡
王可欣
张国鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011536809.8A
Publication of CN112634416A
Application granted
Publication of CN112634416B
Legal status: Active (current)
Anticipated expiration

Abstract

The disclosure relates to a method, a device, an electronic device and a storage medium for generating an avatar model, belonging to the field of computer technology. The method comprises: when the three-dimensional model of an original avatar needs to be edited, the user only needs to input an image of the target avatar and the three-dimensional model of the original avatar into an avatar editing system, and the avatar editing system can rapidly edit the original avatar to generate a personalized three-dimensional model of the target avatar. Meanwhile, in the process of generating the three-dimensional model the user does not need any knowledge of three-dimensional modeling; the whole process is completed by the avatar editing system, which reduces the cost of generating the three-dimensional model, simplifies the user's operations, and improves the efficiency of human-computer interaction.

Description

Method and device for generating virtual image model, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a method and a device for generating an avatar model, an electronic device and a storage medium.
Background
With the development of network technology, more and more users entertain themselves by watching live streams. To enhance the live effect, many live platforms provide an avatar to assist the anchor during the broadcast. The avatar can perform in the anchor's live broadcast room, and its performance can improve the live broadcast effect.
In the related art, the avatar has only one or a few fixed appearances, which sometimes cannot meet the anchor's live broadcast needs. If the anchor wants to broadcast with an avatar of a different appearance, it is usually necessary to ask a designer to recreate the avatar or to modify the avatar's existing appearance. In this case, the efficiency of generating the avatar is low and the cost is high.
Disclosure of Invention
The present disclosure provides a method, apparatus, electronic device, and storage medium for generating an avatar model to improve efficiency of generating an avatar. The technical scheme of the present disclosure is as follows:
in one aspect, there is provided a method of generating an avatar model, including:
acquiring an image of a target avatar, the image comprising a plurality of layers, each layer corresponding to one or more parts of the target avatar;
inputting a three-dimensional model of an original avatar and the image into an avatar editing system, wherein the original avatar and the target avatar are the same type of avatar, the avatar editing system is used for editing the three-dimensional model of the original avatar, and the following steps are executed through the avatar editing system:
Dividing the three-dimensional model of the original avatar into a plurality of sub-models respectively corresponding to the plurality of layers based on one or more parts of the target avatar corresponding to each layer;
acquiring one or more maps corresponding to one or more parts of the target avatar from each map layer, and updating textures of the corresponding multiple sub-models based on the one or more maps;
and combining the multiple sub-models after the texture updating to obtain the three-dimensional model of the target virtual image.
In one possible embodiment, the dividing the three-dimensional model of the original avatar into a plurality of sub-models respectively corresponding to the plurality of layers based on one or more parts of the target avatar corresponding to each of the layers includes:
clustering a plurality of vertexes in the three-dimensional model of the original avatar based on one or more parts corresponding to each layer to obtain vertex boundaries, wherein the vertex boundaries are used for dividing the three-dimensional model of the original avatar;
and dividing the three-dimensional model of the original avatar into the plurality of sub-models respectively corresponding to the plurality of layers based on the vertex boundary.
In one possible implementation manner, each layer includes position prompt information, where the position prompt information is used to indicate the positions of the maps corresponding to different parts in the layer, and the acquiring, from each layer, one or more maps corresponding to one or more parts of the target avatar includes:
acquiring the position prompt information from the plurality of layers;
and according to the position prompt information, acquiring one or more maps corresponding to one or more parts of the target avatar from each map layer.
In one possible embodiment, the combining the plurality of sub-models after the texture updating to obtain the three-dimensional model of the target avatar includes:
and based on the combination relation of the corresponding parts in the three-dimensional model of the original virtual image, splicing the multiple sub-models after the texture updating to obtain the three-dimensional model of the target virtual image.
In one possible embodiment, before the obtaining the three-dimensional model of the target avatar, the method further includes:
and performing linear interpolation processing on the splicing positions of the plurality of sub-models after the texture updating.
In one possible embodiment, the dividing the three-dimensional model of the original avatar into a plurality of sub-models respectively corresponding to the plurality of layers based on one or more parts of the target avatar corresponding to each of the layers includes:
and in response to the identification and the number of the plurality of layers of the image meeting target conditions, dividing the three-dimensional model of the original avatar into a plurality of sub-models respectively corresponding to the plurality of layers based on one or more parts of the target avatar corresponding to each layer.
In one possible embodiment, after the obtaining the three-dimensional model of the target avatar, the method further includes:
responding to a selection instruction of any part of the target virtual image, and acquiring a texture updating layer of any part, wherein the texture updating layer is used for replacing a layer corresponding to any part in the plurality of layers;
updating the texture of the sub model corresponding to any part based on the texture updating layer;
and replacing the corresponding sub model in the three-dimensional model of the target virtual image by adopting the sub model corresponding to any part after the texture updating.
In one possible embodiment, after the obtaining the three-dimensional model of the target avatar, the method further includes:
responding to a selection instruction of any sub-model in a three-dimensional model of the target avatar, and displaying a bone key point corresponding to the any sub-model and a skin weight between the any sub-model and the corresponding bone key point;
in response to an adjustment operation on the skin weights, the skin weights are updated based on values indicated by the adjustment operation.
In one possible embodiment, the target avatar includes a plurality of sub avatars, the method further comprising:
obtaining deformation degree parameters and movement speed parameters of the sub-virtual images, wherein the deformation degree parameters are used for representing the maximum deformation amplitude of the sub-virtual images, and the movement speed parameters are used for representing the state change speed of the sub-virtual images;
and responding to the change of the position of any vertex connected with any sub-avatar in the three-dimensional model of the target avatar, and adjusting the position of any sub-avatar based on the deformation degree parameter, the movement speed parameter and the position of any vertex after the change.
In one possible embodiment, after the obtaining the three-dimensional model of the target avatar, the method further includes at least one of:
responding to a width adjustment instruction of the mouth of the target virtual image, and adjusting the position of the vertex corresponding to the mouth in the three-dimensional model of the target virtual image according to the width indicated by the width adjustment instruction;
and acquiring the maximum opening amplitude of the mouth of the target avatar, and setting the maximum allowable movement distance for the vertex corresponding to the mouth in the three-dimensional model of the target avatar based on the maximum opening amplitude.
In one aspect, there is provided an avatar model generating apparatus including:
an image acquisition module configured to perform acquiring an image of a target avatar, the image including a plurality of layers, each layer corresponding to one or more parts of the target avatar;
an input module configured to perform inputting a three-dimensional model of an original avatar and the image into an avatar editing system, the original avatar and the target avatar being the same type of avatar, the avatar editing system being configured to edit the three-dimensional model of the original avatar, the following steps being performed by the avatar editing system:
Dividing the three-dimensional model of the original avatar into a plurality of sub-models respectively corresponding to the plurality of layers based on one or more parts of the target avatar corresponding to each layer;
acquiring one or more maps corresponding to one or more parts of the target avatar from each map layer, and updating textures of the corresponding multiple sub-models based on the one or more maps;
and combining the multiple sub-models after the texture updating to obtain the three-dimensional model of the target virtual image.
In a possible implementation manner, the input module is configured to perform clustering on a plurality of vertexes in the three-dimensional model of the original avatar based on one or more parts corresponding to each layer to obtain vertex boundaries, wherein the vertex boundaries are used for dividing the three-dimensional model of the original avatar; and dividing the three-dimensional model of the original avatar into the plurality of sub-models respectively corresponding to the plurality of layers based on the vertex boundary.
In a possible implementation manner, each layer includes position prompt information, the position prompt information is used for indicating positions of maps corresponding to different parts in the layers, and the input module is configured to obtain the position prompt information from the layers; and according to the position prompt information, acquiring one or more maps corresponding to one or more parts of the target avatar from each map layer.
In a possible implementation manner, the input module is configured to perform stitching of the plurality of sub-models after the texture update based on a combination relation of corresponding parts in the three-dimensional model of the original avatar, so as to obtain the three-dimensional model of the target avatar.
In a possible implementation manner, the input module is configured to perform linear interpolation processing on the spliced positions of the plurality of sub-models after the texture is updated.
In one possible embodiment, the apparatus further comprises:
and an image verification module configured to perform segmentation of the three-dimensional model of the original avatar into a plurality of sub-models respectively corresponding to the plurality of layers based on one or more parts of the target avatar corresponding to each of the layers in response to both the identification and the number of the plurality of layers of the image meeting a target condition.
In a possible implementation manner, the input module is further configured to, in response to a selection instruction for any part of the target avatar, acquire a texture update layer of the part, where the texture update layer is used for replacing the layer corresponding to the part among the multiple layers; update the texture of the sub-model corresponding to the part based on the texture update layer; and replace the corresponding sub-model in the three-dimensional model of the target avatar with the sub-model corresponding to the part after the texture update.
In one possible embodiment, the apparatus further comprises:
a display module configured to, in response to a selection instruction for any sub-model in the three-dimensional model of the target avatar, display the bone key points corresponding to the sub-model and the skin weights between the sub-model and the corresponding bone key points;
and a skin weight update module configured to, in response to an adjustment operation on the skin weights, update the skin weights based on the value indicated by the adjustment operation.
In one possible embodiment, the target avatar includes a plurality of sub avatars, and the apparatus further includes:
a parameter acquisition module configured to perform acquisition of a deformation degree parameter for representing a maximum deformation amplitude of the sub-avatar and a movement speed parameter for representing a state change speed of the sub-avatar;
and a position adjustment module configured to perform adjustment of the position of any one of the sub-avatars based on the deformation degree parameter, the movement speed parameter, and the position of any one of the vertices after the change in response to the change in the position of any one of the vertices connected to any one of the sub-avatars in the three-dimensional model of the target avatar.
In one possible embodiment, after the obtaining the three-dimensional model of the target avatar, the apparatus further includes at least one of the following modules:
a width adjustment module configured to perform adjustment of a position of a vertex corresponding to a mouth in a three-dimensional model of the target avatar according to a width indicated by a width adjustment instruction in response to the width adjustment instruction for the mouth of the target avatar;
and the amplitude adjustment module is configured to acquire the maximum opening amplitude of the mouth of the target avatar, and set the maximum allowed movement distance for the vertex corresponding to the mouth in the three-dimensional model of the target avatar based on the maximum opening amplitude.
In one aspect, there is provided an electronic device comprising:
one or more processors;
a memory for storing the processor-executable program code;
wherein the processor is configured to execute the program code to implement the above-described avatar model generation method.
In one aspect, a storage medium is provided, and when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the above-described avatar model generation method.
In one aspect, a computer program product is provided, the computer program product comprising a computer program executable by a processor of an electronic device to perform the method of generating an avatar model described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
when the three-dimensional model of the original avatar is required to be edited, a user only needs to input the image of the target avatar and the three-dimensional model of the original avatar to the avatar editing system through the terminal, and the three-dimensional model of the target avatar can be quickly generated through the avatar editing system, so that the efficiency of generating the three-dimensional model is high. Meanwhile, in the process of generating the three-dimensional model, a user does not need to have related knowledge of three-dimensional model generation, and the whole process is completed by the virtual image editing system, so that the cost of generating the three-dimensional model is reduced, the operation of the user is simplified, and the efficiency of human-computer interaction is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a schematic view of an avatar shown according to an exemplary embodiment;
Fig. 2 is a schematic view of an implementation environment of a method of generating an avatar model according to an exemplary embodiment;
Fig. 3 is a schematic view of an avatar shown according to an exemplary embodiment;
Fig. 4 is a schematic view of an avatar shown according to an exemplary embodiment;
Fig. 5 is a flowchart illustrating a method of generating an avatar model according to an exemplary embodiment;
Fig. 6 is a flowchart illustrating a method of generating an avatar model according to an exemplary embodiment;
Fig. 7 is a schematic diagram of an interface shown according to an exemplary embodiment;
Fig. 8 is a schematic diagram of an interface shown according to an exemplary embodiment;
Fig. 9 is a schematic diagram illustrating a model segmentation according to an exemplary embodiment;
Fig. 10 is a schematic view of an avatar shown according to an exemplary embodiment;
Fig. 11 is a schematic view of an avatar shown according to an exemplary embodiment;
Fig. 12 is a block diagram illustrating an avatar model generating apparatus according to an exemplary embodiment;
Fig. 13 is a block diagram of a terminal according to an exemplary embodiment;
Fig. 14 is a block diagram of a server according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The user information referred to in the present disclosure may be information authorized by the user or sufficiently authorized by each party.
First, terms related to embodiments of the present disclosure will be described.
Avatar (virtual image): a character object created in the form of a drawing, an animation, computer graphics (CG) or the like, which may also be called a virtual doll or an animated character; it is active in virtual scenes such as the Internet but does not exist in physical form. Referring to fig. 1, the left side is a planar image of the avatar, and the right side is a three-dimensional image of the avatar.
Fig. 2 is an implementation environment diagram of a method of generating an avatar model according to an exemplary embodiment, as shown in fig. 2, including a terminal 201 and a server 202.
Optionally, the terminal 201 is at least one of a smart phone, a smart watch, a desktop computer, and a laptop computer. An application program supporting online live broadcast can be installed and run on the terminal 201, and a user can log in to the application program through the terminal 201 to conduct live broadcast. The terminal 201 may be connected to the server 202 through a wireless network or a wired network.
Optionally, the terminal 201 is one of a plurality of terminals, and this embodiment only takes the terminal 201 as an example. Those skilled in the art will appreciate that the number of terminals can be greater or fewer. For example, there may be only a few terminals 201, or there may be dozens, hundreds, or more; the number and device types of the terminals 201 are not limited in the embodiments of the present disclosure.
Optionally, the server 202 is at least one of a server, a plurality of servers, a cloud computing platform, and a virtualization center. The server 202 can be used for live broadcast.
Optionally, the number of servers 202 can be greater or smaller, which is not limited in the embodiments of the present disclosure. Of course, the server 202 may also include other functional servers to provide more comprehensive and diverse services.
The method for generating the avatar model provided by the embodiment of the present disclosure can be applied to various scenes, and the scenes will be described below.
The technical scheme provided by the embodiments of the present disclosure can be applied to a scene in which an anchor performs live broadcast. In a live broadcast scene, the terminal acquires the anchor's live video through a camera, encodes the acquired live video, and sends the encoded live video to the server. The server decodes and re-encodes the encoded live video and transmits live video at different bit rates to the terminals used by the audience; the terminals used by the audience decode the live video transmitted by the server and present it to the audience. To improve the live effect, some live platforms provide a mode in which the anchor broadcasts with an avatar: the anchor broadcasts as usual, but what the viewers see is the avatar instead of the anchor. For example, if the anchor wants to broadcast with an avatar, the anchor invokes a model of the avatar in the live software; the terminal collects the anchor's live video, performs image recognition on it, and obtains a plurality of skeletal key points of the anchor in the live video. According to the correspondence between the anchor's skeletal key points and the avatar's skeletal key points, the terminal controls the avatar to move along with the anchor's actions. From the viewer's perspective, the anchor broadcasts live in the form of the avatar.
In a live broadcast scene, live software often provides one or more fixed-appearance avatars for the anchor, and the anchor can choose which avatar to broadcast with. With the technical scheme provided by the embodiments of the present disclosure, the terminal can automatically generate a brand-new avatar from the multi-layer image of the target avatar during live broadcast, and the anchor can broadcast with this brand-new avatar. In addition, if the anchor wants to replace a certain part of an existing avatar, the technical scheme provided by the embodiments of the present disclosure can also replace that part with a locally updated part, completing a local replacement. In some embodiments, referring to fig. 3 and 4, fig. 3 shows an existing avatar, and fig. 4 shows the new avatar automatically generated by adopting the technical scheme provided by the embodiments of the present disclosure.
The technical scheme provided by the embodiments of the present disclosure can also be applied to the animation production process. During animation production, animators first build the model of an animated character and then apply maps to the model to complete the character. After an animated character has been produced, if it needs to be replaced or edited, the technical scheme provided by the embodiments of the present disclosure can automatically generate a brand-new animated character, or automatically update a designated part of the animated character, from the multi-layer image of the target animated character.
After the implementation environment and the application scenario of the present disclosure are introduced, the technical solutions provided by the embodiments of the present disclosure are described below.
Fig. 5 is a flowchart illustrating a method of generating an avatar model according to an exemplary embodiment, as shown in fig. 5, including the following steps.
In step S501, an image of a target avatar is acquired, the image including a plurality of layers, each layer corresponding to one or more parts of the target avatar.
In step S502, the three-dimensional model of the original avatar and the image are input into an avatar editing system for editing the three-dimensional model of the original avatar, the original avatar and the target avatar being the same type of avatar, and the following steps S503 to S505 are performed through the avatar editing system.
In step S503, the three-dimensional model of the original avatar is divided into a plurality of sub-models corresponding to the plurality of layers, respectively, based on one or more parts of the target avatar corresponding to each layer.
In step S504, one or more maps corresponding to one or more parts of the target avatar are acquired from each layer, and textures of the corresponding plurality of sub-models are updated based on the one or more maps.
In step S505, the plurality of sub-models with updated textures are combined to obtain a three-dimensional model of the target avatar.
According to the technical scheme provided by the embodiment of the disclosure, when the three-dimensional model of the original avatar is required to be edited, a user only needs to input the image of the target avatar and the three-dimensional model of the original avatar to the avatar editing system through the terminal, and the three-dimensional model of the target avatar can be quickly generated through the avatar editing system, so that the efficiency of generating the three-dimensional model is high. Meanwhile, in the process of generating the three-dimensional model, a user does not need to have related knowledge of three-dimensional model generation, and the whole process is completed by the virtual image editing system, so that the cost of generating the three-dimensional model is reduced, the operation of the user is simplified, and the efficiency of human-computer interaction is improved.
In one possible embodiment, dividing the three-dimensional model of the original avatar into a plurality of sub-models respectively corresponding to the plurality of layers based on one or more parts of the target avatar corresponding to each layer includes:
and clustering a plurality of vertexes in the three-dimensional model of the original avatar based on one or more parts corresponding to each layer to obtain a vertex boundary line, wherein the vertex boundary line is used for dividing the three-dimensional model of the original avatar.
Based on the vertex boundary, the three-dimensional model of the original avatar is divided into a plurality of sub-models corresponding to the plurality of layers, respectively.
In one possible embodiment, each layer includes position prompt information, the position prompt information is used to indicate the positions of the maps corresponding to different parts in the layer, and the obtaining, from each layer, one or more maps corresponding to one or more parts of the target avatar includes:
and acquiring position prompt information from the multiple layers.
And according to the position prompt information, one or more maps corresponding to one or more parts of the target avatar are obtained from each map layer.
In one possible embodiment, combining the plurality of sub-models after the texture update to obtain the three-dimensional model of the target avatar includes:
and based on the combination relation of the corresponding parts in the three-dimensional model of the original virtual image, splicing the multiple sub-models with updated textures to obtain the three-dimensional model of the target virtual image.
In one possible embodiment, before obtaining the three-dimensional model of the target avatar, the method further includes:
and performing linear interpolation processing on the splicing positions of the plurality of sub-models after the texture updating.
In one possible embodiment, dividing the three-dimensional model of the original avatar into a plurality of sub-models respectively corresponding to the plurality of layers based on one or more parts of the target avatar corresponding to each layer includes:
in response to the identification and the number of the plurality of layers of the image both meeting the target condition, the three-dimensional model of the original avatar is divided into a plurality of sub-models respectively corresponding to the plurality of layers based on one or more parts of the target avatar corresponding to each layer.
In one possible embodiment, after obtaining the three-dimensional model of the target avatar, the method further includes:
and responding to a selection instruction of any part of the target virtual image, acquiring a texture updating layer of any part, wherein the texture updating layer is used for replacing a layer corresponding to any part in the multiple layers.
Based on the texture updating layer, the texture of the sub-model corresponding to any part is updated.
And replacing the corresponding sub-model in the three-dimensional model of the target virtual image by adopting the sub-model corresponding to any part after the texture updating.
In one possible embodiment, after obtaining the three-dimensional model of the target avatar, the method further includes:
And displaying the skeletal keypoints corresponding to any sub-model and the skin weights between any sub-model and the corresponding skeletal keypoints in response to a selection instruction of any sub-model in the three-dimensional model of the target avatar.
In response to an adjustment operation on the skin weight, the skin weight is updated based on the value indicated by the adjustment operation.
In one possible embodiment, the target avatar includes a plurality of sub avatars, the method further comprising:
and obtaining deformation degree parameters and movement speed parameters of the multiple sub-avatars, wherein the deformation degree parameters are used for representing the maximum deformation amplitude of the sub-avatars, and the movement speed parameters are used for representing the state change speed of the sub-avatars.
And adjusting the position of any one of the sub-avatars based on the deformation degree parameter, the movement speed parameter and the position of any one of the changed vertexes in response to the change of the position of any one of the vertexes connected with any one of the sub-avatars in the three-dimensional model of the target avatar.
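As a concrete illustration of this kind of adjustment, the minimal Python sketch below moves a sub-avatar (such as a hair tip) toward the vertex it is connected to while respecting a deformation-degree cap and a movement-speed factor; the function name, parameters and the easing formula are assumptions chosen for illustration, since the disclosure does not prescribe a formula.

```python
import numpy as np

def follow_vertex(sub_avatar_pos, vertex_pos, max_deform, speed, dt):
    """Pull a sub-avatar (e.g. a hair tip) toward the vertex it is connected to.

    max_deform - deformation degree parameter: largest allowed offset from the vertex
    speed      - movement speed parameter: how fast the sub-avatar's state changes
    dt         - time elapsed since the previous frame
    """
    offset = np.asarray(sub_avatar_pos, dtype=float) - np.asarray(vertex_pos, dtype=float)
    dist = np.linalg.norm(offset)
    if dist > max_deform:                      # never deform beyond the allowed amplitude
        offset *= max_deform / dist
    # Ease back toward the (possibly moved) vertex at the configured speed.
    return np.asarray(vertex_pos, dtype=float) + offset * max(0.0, 1.0 - speed * dt)
```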
In one possible embodiment, after obtaining the three-dimensional model of the target avatar, the method further includes at least one of:
and responding to a width adjustment instruction for the mouth of the target avatar, and adjusting the position of the vertex corresponding to the mouth in the three-dimensional model of the target avatar according to the width indicated by the width adjustment instruction.
And acquiring the maximum opening amplitude of the mouth of the target avatar, and setting the maximum allowable movement distance for the vertex corresponding to the mouth in the three-dimensional model of the target avatar based on the maximum opening amplitude.
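A minimal sketch of the two mouth adjustments described above is given below; it assumes the mouth vertices are stored as an (N, 3) NumPy array and that the width is adjusted along the x axis, neither of which is specified by the disclosure.

```python
import numpy as np

def adjust_mouth_width(mouth_vertices, width_scale):
    """Scale the mouth vertices horizontally around the mouth centre (x axis assumed)."""
    center_x = mouth_vertices[:, 0].mean()
    mouth_vertices[:, 0] = center_x + (mouth_vertices[:, 0] - center_x) * width_scale
    return mouth_vertices

def clamp_mouth_opening(rest_vertices, moved_vertices, max_distance):
    """Limit how far each mouth vertex may travel from its rest position."""
    delta = moved_vertices - rest_vertices
    dist = np.linalg.norm(delta, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_distance / np.maximum(dist, 1e-8))
    return rest_vertices + delta * scale
```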
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
Fig. 5 is merely a basic flow of the present disclosure, and a scheme provided by the present disclosure will be further described based on a specific embodiment, and fig. 6 is a flowchart illustrating a method for generating an avatar model according to an exemplary embodiment. Taking an example in which the electronic device is implemented as a terminal, referring to fig. 6, the method includes:
in step S601, an image of a target avatar is acquired, the image including a plurality of layers, each layer corresponding to one or more parts of the target avatar.
In a live scene, the target avatar is the avatar that the anchor intends to broadcast with; in an animation scene, the target avatar is the animated character that the animator intends to use. The image of the target avatar is an image comprising a plurality of layers, which in some embodiments is an image in PSD (Photoshop Document) format. The layers of the image have different names, and one layer corresponds to one or more parts; for example, the layer named "head" in the image corresponds to the head of the three-dimensional model of the original avatar, and the layer named "eye" corresponds to the left and right eyes of the original avatar. In the following description, the user in the live scene is referred to as the anchor, and the user in the animation scene is referred to as the animator.
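As an illustration only, the following Python sketch reads such a layered image and collects each named layer; it assumes the third-party psd-tools package and a hypothetical file name, neither of which is mandated by the disclosure.

```python
from psd_tools import PSDImage  # third-party package "psd-tools", assumed here

# Hypothetical file; each top-level layer is expected to hold one or more parts
# of the target avatar and to be named after them ("head", "eye", "torso", ...).
psd = PSDImage.open("target_avatar.psd")
layer_images = {layer.name: layer.composite() for layer in psd}  # name -> PIL image
print(sorted(layer_images))  # e.g. ['arm', 'eye', 'head', 'torso']
```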
In one possible implementation, in a live scene, if the anchor wants to broadcast with a target avatar of his or her own design, the anchor can create the image of the target avatar, and during creation the anchor needs to store different body parts of the target avatar on different layers.
In this implementation, the anchor can design the target avatar according to his or her own preference and can subsequently broadcast with the self-designed target avatar, so the live effect is better. In addition, when the anchor adopts a self-designed target avatar, the anchor does not need to design a three-dimensional model; the anchor only needs to provide the planar image of the target avatar, and the three-dimensional model of the target avatar can be generated automatically based on that planar image.
The above embodiments are described below by way of two examples.
Example 1: the anchor can draw the image of the target avatar with image drawing software. The anchor first draws the head of the target avatar on the first layer. After the head is drawn, the anchor creates a new layer and draws the torso of the target avatar on this second layer. After the torso is drawn, the anchor creates another layer and draws the arms of the target avatar on this third layer, and so on until the target avatar is complete. Of course, the above example describes drawing the head, then the torso, then the arms; in other possible embodiments, the anchor can draw the target avatar in another order, as long as different parts are drawn on different layers.
Example 2: the operator of the live software can provide the anchor with a platform for creating the image of the target avatar. The platform offers a large number of selectable part images, and the anchor can combine different parts to obtain the image of the target avatar. For example, referring to fig. 7, the platform provides an interface 701 on which identifiers 702 of a plurality of parts are shown, and under each identifier 702 a plurality of corresponding parts are provided. The interface 701 also provides a preview area 703 through which the anchor can see the target avatar being created in real time: by clicking any part under an identifier 702, that part is displayed in real time in the preview area 703, and when the anchor has selected all parts, a preview image of the target avatar is displayed in the preview area 703. When the anchor is satisfied with the preview image displayed in the preview area 703, the image of the target avatar can be obtained by clicking the output button 704 on the interface 701.
In one possible embodiment, in a live scene, the anchor can acquire an image of the target avatar from the network through the terminal, and perform subsequent model editing based on the image of the target avatar.
In this implementation, the anchor does not need to build a three-dimensional model of the target avatar or draw the image of the target avatar; the anchor only needs to download the image through the network, which greatly reduces the difficulty of producing the model of the target avatar.
For example, the operator of the live software may provide the anchor with a forum for sharing images of target avatars. Users can post images of target avatars they have made, and the anchor can acquire the avatar images made by different users from the forum for free or for a fee.
In one possible implementation, in an animation scenario, if an animator wants to replace a finished animated character with the target animated character, the animator can draw the image of the target animated character, or directly call up a previously completed image of the target animated character from the terminal.
In step S602, the three-dimensional model of the original avatar and the image are input into an avatar editing system for editing the three-dimensional model of the original avatar, the original avatar and the target avatar being the same type of avatar, and the following steps S603-S607 are performed through the avatar editing system.
The original avatar is an avatar that has already been produced: in a live broadcast scene, the original avatar is the avatar provided by the live software for the anchor; in an animation scene, the original avatar is a finished animated character. Optionally, the avatar editing system is an application integrating multiple functions; when the terminal runs the application, the anchor or animator can edit the three-dimensional model conveniently and rapidly through the avatar editing system. The original avatar and the target avatar being the same type of avatar means that if the original avatar is a cartoon character, the target avatar is also a cartoon character, and if the original avatar is a cat, the target avatar is also a cat.
In one possible embodiment, the avatar edit system can provide a portal through which a user can input a three-dimensional model of the original avatar and the image into the avatar edit system.
For example, referring to fig. 8, the terminal displays an interface 801, and a model import button 802 is displayed on the interface 801, and a user can import a file storing a three-dimensional model of an original avatar into the avatar editing system through the button 802. Also displayed on the interface 801 is an image import button 803, through which the user can import an image of the target avatar into the avatar editing system.
This example will be described below in terms of live and animate scenes.
In a live broadcast scene, live broadcast software provides a calling interface for the virtual image editing system, and the terminal can directly import the three-dimensional model of the virtual image in the live broadcast software into the virtual image editing system through the calling interface. In some embodiments, the invocation interface is bound to the button 802, and the terminal triggers a model import instruction in response to detecting a click operation on the button 802. In response to the model import instruction, the terminal imports the three-dimensional model of the avatar from the live software through the call interface to the avatar editing system. In response to detecting a click operation on the button 803, the terminal triggers an image import instruction. In response to the image import instruction, the terminal displays an image selection interface. The anchor selects the image of the target avatar through the image selection interface, and the terminal imports the image selected by the anchor into the avatar editing system.
In an animation scenario, after the animator clicks button 802, the terminal triggers a model import instruction. In response to the model import instruction, the terminal displays a model file selection interface through which the animator can select a three-dimensional model file of the original animated character. The terminal can import the file selected by the animator into the avatar editing system. In response to detecting a click operation on the button 803, the terminal triggers an image import instruction. In response to the image import instruction, the terminal displays an image selection interface. The animator selects the image of the target animated character through the image selection interface, and the terminal imports the image selected by the animator into the avatar editing system.
Alternatively, after performing step S602, the terminal can perform both step S603 and step S604, which is not limited by the embodiment of the present disclosure.
In step S603, the image is checked by the avatar editing system, and in response to the identification and the number of the plurality of layers of the image each satisfying the target condition, the following step S604 is performed.
The identifiers and the number of the layers of the image meeting the target condition means that the number of layers in the image is the same as the reference number set in the avatar editing system and the identifiers of the layers are the same as the reference identifiers set in the avatar editing system.
In one possible embodiment, the terminal compares, through the avatar editing system, the number of layers in the image with the reference number set in the avatar editing system. When the number of layers in the image is the same as the reference number, the terminal compares the identifiers of the layers in the image with the reference identifiers through the avatar editing system. When the identifiers of the layers in the image are the same as the reference identifiers, the terminal performs the following step S604 through the avatar editing system. When the number of layers in the image differs from the reference number, or the identifiers of the layers differ from the reference identifiers, the terminal does not perform the following step S604; in some embodiments, the terminal can display an error message to indicate that the image failed verification.
In this embodiment, before editing the three-dimensional model of the original avatar through the avatar editing system, the terminal can verify the image of the target avatar, ensuring that the layers in the image of the target avatar meet the requirements of the avatar editing system. This avoids errors during editing of the three-dimensional model and improves the success rate with which the avatar editing system edits the three-dimensional model.
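A minimal sketch of this verification step is shown below; the reference identifiers (and hence the reference number) are placeholders assumed for illustration, not values taken from the disclosure.

```python
# Reference identifiers assumed to be configured in the avatar editing system.
REFERENCE_LAYER_NAMES = {"head", "eye", "torso", "arm", "leg", "hair"}

def check_image_layers(layer_names):
    """Return True only if both the layer count and the identifiers match the reference."""
    if len(layer_names) != len(REFERENCE_LAYER_NAMES):
        return False                 # number of layers differs from the reference number
    return set(layer_names) == REFERENCE_LAYER_NAMES

# Example: a layer named "hand" instead of "arm" fails the check.
print(check_image_layers(["head", "eye", "torso", "hand", "leg", "hair"]))  # False
```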
In step S604, the three-dimensional model of the original avatar is divided into a plurality of sub-models corresponding to the plurality of layers, respectively, based on one or more parts of the target avatar corresponding to each layer, by the avatar editing system.
In one possible implementation, the terminal clusters, through the avatar editing system, a plurality of vertices in the three-dimensional model of the original avatar based on the one or more parts corresponding to each layer, to obtain vertex boundaries, the vertex boundaries being used for dividing the three-dimensional model of the original avatar. Based on the vertex boundaries, the terminal divides, through the avatar editing system, the three-dimensional model of the original avatar into a plurality of sub-models respectively corresponding to the parts, with different sub-models corresponding to different parts of the original avatar.
In this embodiment, the terminal can cluster the vertices in the three-dimensional model through the avatar editing system and obtain the vertex boundaries between different classes from the clustering result. The terminal then divides the three-dimensional model of the original avatar into a plurality of sub-models based on the vertex boundaries, so that texture adjustment can be carried out separately on each sub-model; the sub-models do not influence each other during texture adjustment, which improves the success rate of editing the three-dimensional model.
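As one possible reading of the clustering-based segmentation, the sketch below clusters vertex positions with k-means and assigns each triangle to a sub-model, treating triangles whose vertices fall into different clusters as lying on the vertex boundary; the use of scikit-learn and positional k-means is an assumption, not the method fixed by the disclosure.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_model_by_clustering(vertices, faces, n_parts):
    """Cluster vertices by position and split the faces into per-part sub-models.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices;
    n_parts is assumed to equal the number of layers in the target image.
    """
    labels = KMeans(n_clusters=n_parts, n_init=10, random_state=0).fit_predict(vertices)
    sub_models = {part: [] for part in range(n_parts)}
    boundary_faces = []                       # faces straddling a vertex boundary
    for face in faces:
        face_labels = set(labels[face])
        if len(face_labels) == 1:
            sub_models[face_labels.pop()].append(face)
        else:
            boundary_faces.append(face)
    return ({p: np.array(f) for p, f in sub_models.items()},
            np.array(boundary_faces), labels)
```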
In one possible implementation, the avatar editing system integrates a vertex classification model that is trained with multiple sample vertices labeled with types, with the ability to classify vertices. The terminal can input the three-dimensional model of the original avatar into the vertex classification model, and classify a plurality of vertexes in the three-dimensional model of the original avatar according to the relative position relationship between different vertexes in the three-dimensional model of the original avatar by the vertex classification model, wherein the vertexes at the connection positions between the different categories form a vertex boundary. The terminal divides the three-dimensional model of the original avatar into a plurality of sub-models based on the vertex boundary line through the avatar editing system, and the different sub-models correspond to different parts of the original avatar.
In this embodiment, the terminal can classify the plurality of vertices in the three-dimensional model through the vertex classification model integrated by the avatar editing system, and the classification efficiency is high.
In one possible embodiment, the terminal displays the three-dimensional model of the original avatar through the avatar editing system, the user can draw a vertex boundary on the three-dimensional model of the original avatar, and the terminal divides the three-dimensional model of the original avatar into a plurality of sub-models based on the vertex boundary through the avatar editing system.
In this embodiment, the user can segment the three-dimensional model of the original avatar according to his or her own ideas, and the resulting sub-models better match the user's preferences, giving a better user experience.
In one possible embodiment, the terminal is capable of splitting the three-dimensional model of the original avatar into a plurality of sub-models according to a plurality of virtual bones in the three-dimensional model of the original avatar through the avatar editing system.
In the embodiment, the terminal can quickly cut the three-dimensional model of the original avatar according to the virtual bones in the three-dimensional model of the original avatar, and the three-dimensional model cutting efficiency is high.
For example, the virtual skeleton is bound to at least one vertex in the three-dimensional model of the original avatar, and the terminal can control the three-dimensional model of the original avatar to perform an activity by controlling the virtual skeleton. The terminal can divide the vertexes corresponding to the same virtual skeleton into a sub-model according to the correspondence between the plurality of virtual skeletons and different vertexes through the virtual image editing system.
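The bone-based splitting can be pictured with the short sketch below, which assigns each vertex to its highest-weight virtual bone and groups faces accordingly; the data layout (a per-vertex weight matrix) is an assumption made for illustration.

```python
import numpy as np

def split_by_virtual_bones(faces, skin_weights):
    """Group faces into sub-models according to the dominant virtual bone of each vertex.

    skin_weights: (V, B) array of weights binding V vertices to B virtual bones.
    """
    vertex_bone = np.argmax(skin_weights, axis=1)     # bone that dominates each vertex
    sub_models = {}
    for face in faces:
        bones, counts = np.unique(vertex_bone[face], return_counts=True)
        owner = int(bones[np.argmax(counts)])         # bone shared by most of the face
        sub_models.setdefault(owner, []).append(face)
    return {bone: np.array(face_list) for bone, face_list in sub_models.items()}
```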
It should be noted that the terminal can divide the three-dimensional model of the original avatar into a plurality of sub-models by any of the above methods, which is not limited in the embodiments of the present disclosure. In some embodiments, referring to fig. 9, the terminal divides, through the avatar editing system, the three-dimensional model of the original avatar into a sub-model 901 corresponding to the head, a sub-model 902 corresponding to the torso, a sub-model 903 corresponding to the left hair tip, a sub-model 904 corresponding to the right hair tip, and a sub-model 905 corresponding to the main body of the hair.
In step S605, one or more maps respectively corresponding to one or more parts of the target avatar are acquired from each layer by the avatar editing system.
In one possible implementation, each layer includes a position prompt message, where the position prompt message is used to indicate positions of the maps corresponding to different positions in the layer, the terminal obtains the position prompt message from multiple layers through the avatar editing system, and obtains one or more maps corresponding to one or more positions of the target avatar from each layer according to the position prompt message.
In this embodiment, the terminal can quickly acquire the map corresponding to the location based on the position prompt information of each layer, and the map acquisition efficiency is high.
For example, each layer in the image of the target avatar may include two maps, such as a map corresponding to the left eye and a map corresponding to the right eye, and each layer stores position prompt information in addition to the maps. Through the avatar editing system, the terminal can determine the positions of the maps corresponding to different parts based on the position prompt information in the layer, and acquire the corresponding maps from the layer. For example, if the layer includes a map corresponding to the left eye and a map corresponding to the right eye, the position prompt information includes the coordinates of the left-eye map in the layer and the coordinates of the right-eye map in the layer, and the terminal acquires the left-eye map and the right-eye map from the corresponding coordinates of the layer through the avatar editing system.
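A small sketch of using such position prompt information follows; the bounding-box format and the file names are assumptions chosen for illustration.

```python
from PIL import Image

# Hypothetical position prompt information: bounding boxes (left, top, right, bottom)
# of each part's map inside the rendered "eye" layer.
position_hints = {
    "left_eye":  (120, 200, 220, 260),
    "right_eye": (300, 200, 400, 260),
}

layer_image = Image.open("eye_layer.png")                    # assumed file name
maps = {part: layer_image.crop(box) for part, box in position_hints.items()}
maps["left_eye"].save("left_eye_map.png")                    # map used to retexture the left eye
```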
In one possible implementation, the avatar editing system integrates an image segmentation model that is trained with multiple sample maps labeled with types and has the ability to classify maps. The terminal can input the different layers of the image of the target avatar into the image segmentation model, and the image segmentation model performs convolution, full connection and normalization processing on the layers to obtain the map types of the different maps in the layers, with different map types corresponding to different parts.
In this embodiment, the terminal can classify the maps in the layers through the image segmentation model integrated in the avatar editing system, and the classification efficiency is high.
In step S606, textures of the corresponding plurality of sub-models are updated by the avatar editing system based on the one or more maps.
In one possible embodiment, the terminal replaces the map on the corresponding plurality of sub-models with one or more maps through the avatar editing system to update textures of the corresponding plurality of sub-models.
In this implementation, the terminal replaces the maps on the sub-models through the avatar editing system, so the textures of the sub-models are updated rapidly and the model editing efficiency is higher.
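As an illustration of swapping a sub-model's map while keeping its UV layout, the sketch below uses the trimesh and Pillow libraries; the file names and the choice of libraries are assumptions, not part of the disclosure.

```python
import trimesh
from PIL import Image

# Assumed inputs: the head sub-model exported with UV coordinates, and the head map
# extracted from the corresponding layer of the target avatar's image.
sub_model = trimesh.load("head_submodel.obj", process=False)
new_map = Image.open("head_map.png")

# Keep the existing UV layout and replace only the image used as the texture.
sub_model.visual = trimesh.visual.TextureVisuals(uv=sub_model.visual.uv, image=new_map)
sub_model.export("head_submodel_textured.glb")
```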
In step S607, the multiple sub-models with updated textures are combined by the avatar editing system to obtain a three-dimensional model of the target avatar.
In one possible implementation manner, the terminal splices the multiple sub-models after the texture update through the avatar editing system, and performs linear interpolation processing on the spliced positions to obtain the three-dimensional model of the target avatar.
In this implementation, the terminal performs linear interpolation at the splicing positions of the sub-models through the avatar editing system; the linear interpolation makes the transition between two sub-models at the splice smoother, so the target avatar is displayed more realistically.
For example, there are a sub-model a, which is the left arm of the target avatar, and a sub-model B, which is the torso of the target avatar. The terminal splices the sub model A and the sub model B based on the vertex boundary between the sub model A and the sub model B through the virtual image editing system, namely splices the left arm and the trunk of the target virtual image. The terminal carries out linear interpolation on the pixel values of the triangular surfaces at the two sides of the vertex boundary by taking the vertex boundary as the center through the virtual figure editing system so as to lead the transition of the pixel values of the triangular surfaces at the two sides of the vertex boundary to be gentle. For example, there is a triangular surface A1 belonging to the sub-model a on one side of the vertex boundary, a triangular surface B1 belonging to the sub-model B on the other side of the vertex boundary, and three pixel points a, B and c gradually far from the vertex boundary exist on the triangular surface A1, and the pixel values of a, B and c are 100, 80 and 160, respectively. Three pixel points d, e and f gradually far from the vertex boundary exist on the triangular surface A2, and the pixel values of d, e and f are 210, 160 and 110 respectively. The terminal can acquire the section of the linear difference process (160, 80, 100, 210, 160, 110) in the direction from the triangle face a to the triangle face B by the avatar editing system. The terminal can perform a linear difference between the first value 160 and the last value 110 in the interval, replacing the value between 160 and 110 with the value after the linear difference. For example, the terminal can determine 4 values 150, 140, 130 and 120 between the values 160 and 110 according to the average difference method by using the avatar editing system, and replace the values of the corresponding positions in the interval to obtain new pixel values 160, 150, 140, 130, 120 and 110, that is, update the pixel values of the pixel points b, c, d and e to 150, 140, 130 and 120 respectively.
It should be noted that the foregoing description takes linear interpolation at the spliced positions as an example. In other possible implementations, the terminal may instead perform mean filtering or median filtering at the spliced positions through the avatar editing system, which is not limited in the embodiments of the present disclosure.
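A minimal sketch of the seam smoothing in step S607, reproducing the numeric example above, might look like this; the function name and the assumption that the boundary-crossing pixel values are already collected into one sequence are illustrative only:

```python
def smooth_seam(values):
    """Linearly interpolate the interior of a sequence of pixel values sampled
    across a vertex boundary, keeping the first and last values fixed."""
    first, last = values[0], values[-1]
    n = len(values)
    step = (last - first) / (n - 1)
    return [round(first + i * step) for i in range(n)]

# Pixel values sampled from triangular face A1 toward face B1 in the example above.
print(smooth_seam([160, 80, 100, 210, 160, 110]))
# -> [160, 150, 140, 130, 120, 110]
```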
Optionally, after step S607, the terminal can also perform any of the following steps through the avatar editing system.
Step A: in response to a selection instruction for any part in the three-dimensional model of the target avatar, the terminal acquires a texture update layer for that part, where the texture update layer is used to replace the layer corresponding to that part among the multiple layers. The terminal updates the texture of the sub-model corresponding to that part based on the texture update layer, and then replaces the corresponding sub-model in the three-dimensional model of the target avatar with the texture-updated sub-model for that part.
By executing step A, the terminal can not only edit the three-dimensional model of the original avatar as a whole through the avatar editing system, but also, once the three-dimensional model of the target avatar has been obtained, edit that model locally. The model editing approach is therefore more flexible, and the editing efficiency is improved.
For example, after the three-dimensional model of the target avatar is obtained, if the user is not satisfied with some part of it, the corresponding layer in the image of the target avatar can be replaced with a texture update layer for that part. Through the avatar editing system, the terminal updates the texture of the sub-model corresponding to that part based on the texture update layer, and then replaces the corresponding sub-model in the three-dimensional model of the target avatar with the texture-updated sub-model. Because the sub-models are independent of one another, updating the texture of one sub-model does not affect the others, so the three-dimensional model can be edited efficiently.
On the basis of step A, the terminal can also, through the avatar editing system, perform linear interpolation processing at the positions where the texture-updated sub-model of that part is spliced to the sub-models of other parts. The linear interpolation processing here belongs to the same inventive concept as the description in step S607; refer to the related description in step S607, which is not repeated here.
Step B: in response to a selection instruction for any sub-model in the three-dimensional model of the target avatar, the terminal displays the skeletal key points corresponding to that sub-model and the skin weights between that sub-model and the corresponding skeletal key points. In response to an adjustment operation on the skin weights, the terminal updates the skin weights based on the values indicated by the adjustment operation.
In the embodiments of the present disclosure, the three-dimensional model of the target avatar can be driven by the user to perform different actions. Taking a live-streaming scene as an example, the terminal can capture the anchor's video stream through a camera and drive the three-dimensional model of the target avatar to perform different actions based on the anchor's motion changes in the video stream. Different parts of the target avatar move differently under the user's driving, and these differences are mainly determined by the skin weights. For example, the arms behave very differently from the hair: the arms are stiff and can move only at the elbow joint, whereas the hair is relatively soft and has no obvious joints. When the anchor wants to adjust how the three-dimensional model of the target avatar moves, the anchor can select the sub-model to be adjusted, for example by clicking on the left arm of the three-dimensional model of the target avatar. In response to detecting the click operation on the left arm, the terminal displays the skeletal key points corresponding to the left arm and the corresponding skin weights. The user clicks any vertex on the left arm to adjust the skin weights between that vertex and the skeletal key points; that is, in response to detecting the click operation on the vertex, the terminal displays a skin weight adjustment interface on which the skin weights between that vertex and the different skeletal key points are shown, and the user can adjust them on this interface.
On the basis of step B, the terminal can also, through the avatar editing system, adjust the positions of the skeletal key points in the three-dimensional model of the target avatar or add new skeletal key points; after the positions are adjusted or new key points are added, the user can set skin weights for the vertices of different sub-models in the manner described above.
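As a minimal illustration only, the skin-weight bookkeeping described in step B can be sketched as follows; the data layout, the vertex and bone names, and the renormalization of weights to sum to 1 are assumptions made for the example and are not prescribed by this disclosure.

```python
# Hypothetical skin-weight table: for each vertex, the influence of each skeletal key point.
skin_weights = {
    "left_arm_vertex_42": {"shoulder_joint": 0.7, "elbow_joint": 0.3},
}

def set_skin_weight(vertex_id, bone_key_point, value):
    """Update one skin weight, then renormalize so the weights of the vertex sum to 1
    (a common skinning convention, assumed here for the sketch)."""
    weights = skin_weights[vertex_id]
    weights[bone_key_point] = value
    total = sum(weights.values())
    for bone in weights:
        weights[bone] /= total

# The user raises the elbow weight of this vertex to 0.6 on the adjustment interface.
set_skin_weight("left_arm_vertex_42", "elbow_joint", 0.6)
print(skin_weights["left_arm_vertex_42"])   # shoulder ~0.538, elbow ~0.462
```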
Step C: the terminal acquires deformation degree parameters and movement speed parameters of a plurality of sub-avatars, where the deformation degree parameter represents the maximum deformation amplitude of a sub-avatar and the movement speed parameter represents how quickly the state of the sub-avatar changes. In response to a change in the position of any vertex connected to any sub-avatar in the three-dimensional model of the target avatar, the terminal adjusts the position of that sub-avatar based on the deformation degree parameter, the movement speed parameter, and the changed position of that vertex.
Optionally, a sub-avatar is a component of the target avatar such as its hair or an accessory. For a sub-avatar, the positions of the skeletal key points corresponding to the model vertices can be determined by the designer as needed. For example, 5 skeletal key points are defined on the hair of the target avatar: the 1st skeletal key point is the top of the hair, the 5th is the tail of the hair, the 2nd is the midpoint of the hair, and the 3rd and 4th evenly divide the hair between the 2nd and 5th key points. By controlling these 5 skeletal key points, the movement of the target avatar's hair can be controlled.
For example, in a live-streaming scene, if the sub-avatar is the hair of the target avatar, the anchor can set a deformation degree parameter and a movement speed parameter for the hair. For the deformation degree parameter, a larger value means a larger deformation amplitude: the hair deforms easily and readily bends while the target avatar moves, so from the viewer's perspective the hair looks softer. Conversely, a smaller value means a smaller deformation amplitude: the hair deforms with difficulty and does not bend easily while the target avatar moves, that is, from the viewer's perspective the hair looks stiffer. For the movement speed parameter, a larger value means the hair accelerates more during the movement of the target avatar. Taking the tail of the hair as an example, when the head of the target avatar moves to the right, the tail of the hair also moves to the right at the same speed as the head or faster. A smaller value means the hair accelerates less, so when the head of the target avatar moves to the right, the tail of the hair moves to the right more slowly than the head. In some embodiments, referring to fig. 10, when the anchor controls the left avatar and the right avatar to make the same action, the movement states of the hair differ at the positions indicated by blocks 1001 and 1002.
The purpose of adjusting the deformation degree parameter and the movement speed parameter is mainly to make the motion of the target avatar more realistic. Repeated experiments show that a suitable direction of parameter adjustment does exist objectively, but an average user may not find it easily, whereas an experienced animator can reach a good adjustment much faster and with better results than an average user.
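Purely as an illustrative sketch of how these two parameters might act on a sub-avatar's skeletal key point, assuming a simple follow-and-clamp scheme (the function, its arguments, and the clamping rule are assumptions for the example, not the method defined by this disclosure):

```python
def follow_driving_vertex(bone_pos, target_pos, rest_pos,
                          movement_speed, max_deformation):
    """Move a sub-avatar skeletal key point toward the position implied by the
    driving vertex; movement_speed controls how fast the state changes, and
    max_deformation caps the offset from the rest pose (the deformation amplitude)."""
    # Step toward the target at a rate set by the movement speed parameter (0..1].
    new_pos = [b + movement_speed * (t - b) for b, t in zip(bone_pos, target_pos)]
    # Clamp the offset from the rest pose to the maximum deformation amplitude.
    offset = [n - r for n, r in zip(new_pos, rest_pos)]
    length = sum(o * o for o in offset) ** 0.5
    if length > max_deformation > 0:
        scale = max_deformation / length
        new_pos = [r + o * scale for r, o in zip(rest_pos, offset)]
    return new_pos

# The head moved right: the hair tail follows quickly but its deformation is capped.
print(follow_driving_vertex([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0],
                            movement_speed=0.8, max_deformation=0.5))
# -> [0.5, 0.0, 0.0]
```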
Step D: in response to a width adjustment instruction for the mouth of the target avatar, the terminal adjusts the positions of the vertices corresponding to the mouth in the three-dimensional model of the target avatar according to the width indicated by the width adjustment instruction.
By executing step D, the user can quickly personalize the width of the target avatar's mouth through the avatar editing system without regenerating the three-dimensional model, which simplifies the model adjustment operation and improves its efficiency.
For example, after obtaining the three-dimensional model of the target avatar, the user can also make local adjustments to the target avatar. For the mouth of the target avatar, in response to detecting a click operation on the mouth, the terminal displays a mouth width adjustment interface on which the user inputs the desired mouth width; through the avatar editing system, the terminal can then adjust the positions of the vertices corresponding to the mouth based on the input width, so that the width of the target avatar's mouth matches the input value. Of course, to ensure that the target avatar is not visibly distorted, the terminal can also, through the avatar editing system, adjust the positions of several other vertices associated with the mouth based on association weights, where the association weight is inversely related to the distance between the other vertex and the mouth vertices: the farther a vertex is from the vertices corresponding to the mouth, the smaller its association weight and the smaller the distance it moves when the terminal adjusts the mouth vertices; the closer a vertex is to the vertices corresponding to the mouth, the larger its association weight and the larger the distance it moves.
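As a rough sketch only of a distance-based association weight, assuming a simple linear falloff and a widening along the x axis (the falloff shape, the axis choice, and all names here are assumptions for illustration, not the scheme fixed by this disclosure):

```python
def adjust_mouth_width(vertices, mouth_ids, widen_by, falloff_radius):
    """Widen the mouth by moving mouth vertices outward along x and dragging
    nearby vertices along with a weight that shrinks with distance from the mouth."""
    center = [sum(vertices[i][k] for i in mouth_ids) / len(mouth_ids) for k in range(3)]
    for i, v in enumerate(vertices):
        dist = sum((v[k] - center[k]) ** 2 for k in range(3)) ** 0.5
        if i in mouth_ids:
            weight = 1.0                                    # mouth vertices move fully
        else:
            weight = max(0.0, 1.0 - dist / falloff_radius)  # closer vertices move more
        direction = 1.0 if v[0] >= center[0] else -1.0      # widen symmetrically
        v[0] += direction * weight * widen_by
    return vertices
```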
Step E: the terminal acquires the maximum opening amplitude of the mouth of the target avatar and, based on the maximum opening amplitude, sets the maximum allowed movement distance for the vertices corresponding to the mouth in the three-dimensional model of the target avatar.
By executing step E, the user can quickly adjust the maximum opening amplitude of the target avatar's mouth through the avatar editing system without regenerating the three-dimensional model, which simplifies the model adjustment operation and improves its efficiency.
For example, after obtaining the three-dimensional model of the target avatar, the user can also make local adjustments to the target avatar. For the mouth of the target avatar, in response to detecting a click operation on the mouth, the terminal displays a mouth opening amplitude adjustment interface on which the user inputs the desired maximum opening amplitude; through the avatar editing system, the terminal can then set the maximum allowed movement distance for the vertices corresponding to the mouth based on the input value. Referring to fig. 11, the user sets different maximum opening amplitudes for the mouth of the target avatar 1101 and the mouth of the target avatar 1102 through the terminal, and fig. 11 shows the mouths opened at their respective maximum opening amplitudes after the settings are applied.
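A minimal sketch of enforcing such a maximum allowed movement distance on the mouth vertices might look like the following; the vector layout and the per-vertex clamping rule are assumptions made for the example:

```python
def clamp_mouth_opening(vertex_offsets, max_amplitude):
    """Limit how far each mouth vertex may move away from its closed position,
    so the mouth never opens wider than the configured maximum amplitude."""
    clamped = []
    for dx, dy, dz in vertex_offsets:
        length = (dx * dx + dy * dy + dz * dz) ** 0.5
        if length > max_amplitude > 0:
            s = max_amplitude / length
            dx, dy, dz = dx * s, dy * s, dz * s
        clamped.append((dx, dy, dz))
    return clamped

# Offsets requested by the driving expression, clamped to a maximum amplitude of 0.3.
print(clamp_mouth_opening([(0.0, -0.5, 0.0), (0.0, -0.2, 0.0)], max_amplitude=0.3))
# -> [(0.0, -0.3, 0.0), (0.0, -0.2, 0.0)]
```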
According to the technical scheme provided by the embodiment of the disclosure, when the three-dimensional model of the original avatar is required to be edited, a user only needs to input the image of the target avatar and the three-dimensional model of the original avatar to the avatar editing system through the terminal, and the three-dimensional model of the target avatar can be quickly generated through the avatar editing system, so that the efficiency of generating the three-dimensional model is high. Meanwhile, in the process of generating the three-dimensional model, a user does not need to have related knowledge of three-dimensional model generation, and the whole process is completed by the virtual image editing system, so that the cost of generating the three-dimensional model is reduced, the operation of the user is simplified, and the efficiency of human-computer interaction is improved.
Fig. 12 is a block diagram illustrating a generation apparatus of an avatar model according to an exemplary embodiment. Referring to fig. 12, the apparatus includes: the image acquisition module 1201 and the input module 1202.
The image acquisition module 1201 is configured to perform acquiring an image of a target avatar, the image including a plurality of layers, each layer corresponding to one or more parts of the target avatar.
An input module 1202 configured to input the image and the three-dimensional model of an original avatar into an avatar editing system, where the avatar editing system is used for editing the three-dimensional model of the original avatar and the original avatar and the target avatar are avatars of the same type, and the following steps are performed by the avatar editing system:
The three-dimensional model of the original avatar is divided into a plurality of sub-models corresponding to the plurality of layers, respectively, based on one or more parts of the target avatar corresponding to each layer.
One or more maps corresponding to one or more parts of the target avatar are acquired from each layer, and textures of the corresponding multiple sub-models are updated based on the one or more maps.
And combining the multiple sub-models after the texture updating to obtain the three-dimensional model of the target virtual image.
In one possible embodiment, the input module is configured to perform clustering on a plurality of vertices in the three-dimensional model of the original avatar based on one or more parts corresponding to each layer, to obtain vertex boundaries, where the vertex boundaries are used to segment the three-dimensional model of the original avatar. Based on the vertex boundary, the three-dimensional model of the original avatar is divided into a plurality of sub-models corresponding to the plurality of layers, respectively.
In one possible implementation, each layer includes position hint information indicating the positions of the maps corresponding to different parts in that layer, and the input module is configured to acquire the position hint information from the multiple layers and, according to the position hint information, obtain the one or more maps corresponding to one or more parts of the target avatar from each layer.
In one possible implementation, the input module is configured to perform a combination relationship based on the corresponding parts in the three-dimensional model of the original avatar, and splice the multiple sub-models after the texture update to obtain the three-dimensional model of the target avatar.
In one possible implementation, the input module is configured to perform linear interpolation processing on the spliced positions of the plurality of sub-models after the texture update.
In one possible embodiment, the apparatus further comprises:
and an image verification module configured to, in response to both the identifications and the number of the multiple layers of the image conforming to the target conditions, perform the step of segmenting the three-dimensional model of the original avatar into the multiple sub-models respectively corresponding to the multiple layers based on the one or more parts of the target avatar corresponding to each layer.
In one possible implementation, the input module is further configured to, in response to a selection instruction for any part of the target avatar, acquire a texture update layer for that part, where the texture update layer is used to replace the layer corresponding to that part among the multiple layers; update the texture of the sub-model corresponding to that part based on the texture update layer; and replace the corresponding sub-model in the three-dimensional model of the target avatar with the texture-updated sub-model for that part.
In one possible embodiment, the apparatus further comprises:
and a display module configured to perform a selection instruction in response to any one of the sub-models in the three-dimensional model of the target avatar, to display a bone key point corresponding to any one of the sub-models and a skin weight between any one of the sub-models and the corresponding bone key point.
And a skin weight updating module configured to perform an adjustment operation responsive to the skin weight, the skin weight being updated based on the value indicated by the adjustment operation.
In one possible embodiment, the target avatar includes a plurality of sub avatars, and the apparatus further includes:
and a parameter acquisition module configured to perform acquisition of a deformation degree parameter for representing a maximum deformation amplitude of the sub-avatar and a movement speed parameter for representing a state change speed of the sub-avatar.
And a position adjustment module configured to perform adjustment of the position of any one of the sub-avatars based on the deformation degree parameter, the movement speed parameter, and the position of any one of the vertices after the change in response to the change in the position of any one of the vertices connected to any one of the sub-avatars in the three-dimensional model of the target avatar.
In one possible embodiment, after obtaining the three-dimensional model of the target avatar, the apparatus further includes at least one of the following modules:
and the width adjustment module is configured to, in response to a width adjustment instruction for the mouth of the target avatar, adjust the positions of the vertices corresponding to the mouth in the three-dimensional model of the target avatar according to the width indicated by the width adjustment instruction.
And the amplitude adjustment module is configured to acquire the maximum opening amplitude of the mouth of the target avatar, and set the maximum allowed movement distance for the vertex corresponding to the mouth in the three-dimensional model of the target avatar based on the maximum opening amplitude.
According to the technical scheme provided by the embodiment of the disclosure, when the three-dimensional model of the original avatar is required to be edited, a user only needs to input the image of the target avatar and the three-dimensional model of the original avatar to the avatar editing system through the terminal, and the three-dimensional model of the target avatar can be quickly generated through the avatar editing system, so that the efficiency of generating the three-dimensional model is high. Meanwhile, in the process of generating the three-dimensional model, a user does not need to have related knowledge of three-dimensional model generation, and the whole process is completed by the virtual image editing system, so that the cost of generating the three-dimensional model is reduced, the operation of the user is simplified, and the efficiency of human-computer interaction is improved.
In the embodiment of the present disclosure, the electronic device may be implemented as a terminal or a server, and first, a structure of the terminal is described:
Fig. 13 is a block diagram of a terminal according to an exemplary embodiment. The terminal 1300 shown in fig. 13 may be a terminal used by a user, and may be at least one of a smart phone, a smart watch, a desktop computer, a laptop portable computer, and the like. The terminal 1300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 1300 includes: a processor 1301, and a memory 1302.
Processor 1301 may include one or more processing cores, such as a 4-core processor or an 8-core processor. Processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). Processor 1301 may also include a main processor and a coprocessor; the main processor is a processor for processing data in the awake state, also called a CPU (Central Processing Unit), while the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, processor 1301 may be integrated with a GPU (Graphics Processing Unit) for rendering and drawing the content that the display screen needs to display. In some embodiments, processor 1301 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 1302 may include one or more computer-readable storage media, which may be non-transitory. Memory 1302 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices.
In some embodiments, the terminal 1300 may optionally further include a peripheral interface 1303 and at least one peripheral. The processor 1301, the memory 1302, and the peripheral interface 1303 may be connected by buses or signal lines, and each peripheral may be connected to the peripheral interface 1303 through a bus, a signal line, or a circuit board. Specifically, the peripherals include at least one of a radio frequency circuit 1304, a display screen 1305, a camera assembly 1306, an audio circuit 1307, a positioning assembly 1308, and a power supply 1309.
The peripheral interface 1303 may be used to connect at least one I/O (Input/Output) related peripheral to the processor 1301 and the memory 1302. In some embodiments, the processor 1301, the memory 1302, and the peripheral interface 1303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1301, the memory 1302, and the peripheral interface 1303 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 1304 is used to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1304 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 1304 converts an electrical signal to an electromagnetic signal for transmission, or converts a received electromagnetic signal to an electrical signal. Optionally, the radio frequency circuit 1304 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuit 1304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity ) networks. In some embodiments, the radio frequency circuit 1304 may also include NFC (Near Field Communication, short range wireless communication) related circuits, which are not limited by the present disclosure.
The display screen 1305 is used to display a UI (User Interface). The UI may include images, text, icons, video, and any combination thereof. When the display screen 1305 is a touch display screen, it also has the ability to capture touch signals on or above its surface. The touch signal may be input to the processor 1301 as a control signal for processing. At this point, the display screen 1305 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1305, arranged on the front panel of the terminal 1300; in other embodiments, there may be at least two display screens 1305, arranged on different surfaces of the terminal 1300 or in a folded configuration; in still other embodiments, the display screen 1305 may be a flexible display screen arranged on a curved or folded surface of the terminal 1300. The display screen 1305 may even be configured in a non-rectangular irregular shape, that is, as an irregularly shaped screen. The display screen 1305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 1306 is used to capture images or video. Optionally, the camera assembly 1306 includes a front camera and a rear camera. Typically, the front camera is arranged on the front panel of the terminal and the rear camera on the back of the terminal. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to implement a background blurring function, and the main camera and the wide-angle camera can be fused to implement panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, the camera assembly 1306 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash; a dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuit 1307 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 1301 for processing, or inputting the electric signals to the radio frequency circuit 1304 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be provided at different portions of the terminal 1300, respectively. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is then used to convert electrical signals from the processor 1301 or the radio frequency circuit 1304 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1307 may also comprise a headphone jack.
The positioning component 1308 is used to determine the current geographic location of the terminal 1300 in order to implement navigation or LBS (Location Based Service). The positioning component 1308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
A power supply 1309 is used to power the various components in the terminal 1300. The power supply 1309 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 1309 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1300 also includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to: acceleration sensor 1311, gyroscope sensor 1312, pressure sensor 1313, fingerprint sensor 1314, optical sensor 1315, and proximity sensor 1316.
The acceleration sensor 1311 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1300. For example, the acceleration sensor 1311 may be used to detect components of gravitational acceleration in three coordinate axes. Processor 1301 may control display screen 1305 to display a user interface in either a landscape view or a portrait view based on gravitational acceleration signals acquired by acceleration sensor 1311. The acceleration sensor 1311 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 1312 may detect a body direction and a rotation angle of the terminal 1300, and the gyro sensor 1312 may collect a 3D motion of the user on the terminal 1300 in cooperation with the acceleration sensor 1311. Processor 1301 can implement the following functions based on the data collected by gyro sensor 1312: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 1313 may be disposed on a side frame of terminal 1300 and/or below display screen 1305. When the pressure sensor 1313 is disposed at a side frame of the terminal 1300, a grip signal of the terminal 1300 by a user may be detected, and the processor 1301 performs left-right hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 1313. When the pressure sensor 1313 is disposed at the lower layer of the display screen 1305, the processor 1301 realizes control of the operability control on the UI interface according to the pressure operation of the user on the display screen 1305. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1314 is used to collect a fingerprint of the user, and the processor 1301 identifies the identity of the user based on the fingerprint collected by the fingerprint sensor 1314, or the fingerprint sensor 1314 identifies the identity of the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the user is authorized by processor 1301 to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 1314 may be disposed on the front, back, or side of the terminal 1300. When a physical key or vendor Logo is provided on the terminal 1300, the fingerprint sensor 1314 may be integrated with the physical key or vendor Logo.
The optical sensor 1315 is used to collect ambient light intensity. In one embodiment, processor 1301 may control the display brightness of display screen 1305 based on the intensity of ambient light collected by optical sensor 1315. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 1305 is turned up; when the ambient light intensity is low, the display brightness of the display screen 1305 is turned down. In another embodiment, processor 1301 may also dynamically adjust the shooting parameters of camera assembly 1306 based on the intensity of ambient light collected by optical sensor 1315.
The proximity sensor 1316, also referred to as a distance sensor, is typically arranged on the front panel of the terminal 1300. The proximity sensor 1316 is used to measure the distance between the user and the front of the terminal 1300. In one embodiment, when the proximity sensor 1316 detects that the distance between the user and the front of the terminal 1300 gradually decreases, the processor 1301 controls the display screen 1305 to switch from the screen-on state to the screen-off state; when the proximity sensor 1316 detects that the distance gradually increases, the processor 1301 controls the display screen 1305 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the structure shown in fig. 13 is not limiting of terminal 1300 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In the embodiment of the present disclosure, the electronic device may be implemented as a server, and the following describes a structure of the server:
fig. 14 is a block diagram illustrating a server 1400, which server 1400 may vary widely in configuration or performance, and may include one or more processors (Central Processing Units, CPU) 1401 and one or more memories 1402, according to an example embodiment. The memory 1402 stores therein at least one instruction that is loaded and executed by the processor 1401 to implement the avatar model generation method provided by the above-described respective method embodiments.
In an exemplary embodiment, a storage medium including instructions, such as the memory 1402 including instructions, is also provided; the instructions are executable by the processor 1401 of the server 1400 to complete the above-described avatar model generation method. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided a computer program product including a computer program executable by a processor of an electronic device to complete the avatar model generating method provided in the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any adaptations, uses, or adaptations of the disclosure following the general principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (22)

CN202011536809.8A · filed 2020-12-23 · Method and device for generating virtual image model, electronic equipment and storage medium · Active · CN112634416B (en)

Priority Applications (1)

- CN202011536809.8A (priority date 2020-12-23, filing date 2020-12-23): Method and device for generating virtual image model, electronic equipment and storage medium


Publications (2)

- CN112634416A (en), published 2021-04-09
- CN112634416B (en), granted publication






Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant
