
Virtual character based display method, device, equipment and storage medium

Info

Publication number
CN113327311A
Authority
CN
China
Prior art keywords
dynamic effect
information
instruction
virtual character
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110585819.9A
Other languages
Chinese (zh)
Other versions
CN113327311B (en)
Inventor
吴准
邬诗雨
杨瑞
李士岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110585819.9A
Publication of CN113327311A
Application granted
Publication of CN113327311B
Legal status: Active (current)
Anticipated expiration


Abstract

The disclosure provides a virtual character-based display method, apparatus, device, and storage medium, relating to the technical field of artificial intelligence, in particular to computer vision. One embodiment of the method comprises: acquiring a virtual character model; receiving behavior driving information generated by a physical object, and driving the virtual character model based on the behavior driving information; and acquiring a dynamic effect corresponding to the behavior driving information, and displaying the dynamic effect on a display interface of the virtual character model.

Description

Virtual character based display method, device, equipment and storage medium
Technical Field
The embodiments of the disclosure relate to the field of computers, in particular to artificial intelligence technologies such as computer vision, and more particularly to a virtual character-based display method, apparatus, device, and storage medium.
Background
With the rapid development of artificial intelligence technology, virtual characters have come into wide use. A virtual character is a character image created through drawing, animation, and similar techniques, and virtual characters are currently used mainly for live broadcasting. However, a traditional virtual character is realized mainly from elements preset by the system, such as the character design, plot development, and interaction mode, and therefore cannot interact with the audience in real time.
Disclosure of Invention
The embodiments of the disclosure provide a virtual character-based display method, apparatus, device, and storage medium.
In a first aspect, an embodiment of the present disclosure provides a virtual character-based display method, including: acquiring a virtual character model; receiving behavior driving information generated by a physical object, and driving the virtual character model based on the behavior driving information; and acquiring a dynamic effect corresponding to the behavior driving information, and displaying the dynamic effect on a display interface of the virtual character model.
In a second aspect, an embodiment of the present disclosure provides a virtual character-based display apparatus, including: an obtaining module configured to obtain a virtual character model; a receiving module configured to receive behavior driving information generated by a physical object and drive the virtual character model based on the behavior driving information; and a display module configured to acquire a dynamic effect corresponding to the behavior driving information and display the dynamic effect on a display interface of the virtual character model.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in any one of the implementations of the first aspect.
In a fourth aspect, the disclosed embodiments propose a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method as described in any one of the implementations of the first aspect.
In a fifth aspect, the present disclosure provides a computer program product including a computer program, which when executed by a processor implements the method as described in any implementation manner of the first aspect.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
Other features, objects, and advantages of the disclosure will become apparent from a reading of the following detailed description of non-limiting embodiments which proceeds with reference to the accompanying drawings. The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is an exemplary system architecture diagram in which the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a virtual character-based display method according to the present disclosure;
FIG. 3 is a schematic diagram of an application scenario of a virtual character-based display method according to the present disclosure;
FIG. 4 is a flow diagram of yet another embodiment of a virtual character-based display method according to the present disclosure;
FIG. 5 is a flow diagram of another embodiment of a virtual character-based display method according to the present disclosure;
FIG. 6 is a schematic diagram illustrating the structure of one embodiment of a virtual character based display apparatus according to the present disclosure;
FIG. 7 is a block diagram of an electronic device for implementing a virtual character-based display method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the virtual character-based display method or virtual character-based display apparatus of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send action information and the like. Various client applications, such as a virtual character generation application, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices including, but not limited to, smart phones, tablet computers, laptop portable computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they can be installed in the above-described electronic devices. They may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module, which is not particularly limited herein.
The server 105 may provide various services. For example, the server 105 may analyze and process the behavior driving information acquired from the terminal devices 101, 102, 103 and generate a processing result (e.g., a corresponding dynamic effect).
The server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module, which is not particularly limited herein.
It should be noted that the virtual character-based display method provided by the embodiment of the present disclosure is generally executed by the server 105, and accordingly, the virtual character-based display apparatus is generally disposed in the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a virtual character-based display method in accordance with the present disclosure is illustrated. The virtual character-based display method comprises the following steps:
Step 201, obtaining a virtual character model.
In the present embodiment, the execution subject of the virtual character-based display method (for example, the server 105 shown in fig. 1) may acquire the virtual character model. In the present embodiment, the virtual character model is generally a 3D (three-dimensional) model of a character. The execution subject may directly create a new virtual character model, or may select a virtual character model from an existing virtual character model library. Generally, a pre-constructed basic character model can be obtained and then personalized according to actual requirements, for example by configuring the hairstyle, face shape, stature, clothes, and the like of the basic character model, to finally obtain the required virtual character model.
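For illustration only, the following is a minimal Python sketch of obtaining a pre-constructed base model and applying such personalized configuration; all class, function, and attribute names here are hypothetical and not part of the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class CharacterAppearance:
    """Configurable attributes of a base character model (hypothetical names)."""
    hairstyle: str = "default"
    face_shape: str = "default"
    stature: str = "average"
    clothes: str = "default"

@dataclass
class VirtualCharacterModel:
    """Simplified stand-in for a 3D virtual character model."""
    base_model_id: str
    appearance: CharacterAppearance = field(default_factory=CharacterAppearance)

def obtain_virtual_character_model(base_model_id: str, **overrides) -> VirtualCharacterModel:
    """Load a pre-constructed base model and apply personalized modifications."""
    model = VirtualCharacterModel(base_model_id=base_model_id)
    for attr, value in overrides.items():
        setattr(model.appearance, attr, value)  # e.g. hairstyle, face_shape, clothes
    return model

# Example: configure the hairstyle and clothes of the base model.
model = obtain_virtual_character_model("base_character_01", hairstyle="short", clothes="suit")
```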
It should be noted that the virtual character model in this embodiment may be constructed by using a 3D modeling method in the prior art, which is not described herein again.
Step 202, receiving behavior driving information generated by the physical object, and driving the virtual character model based on the behavior driving information.
In this embodiment, the execution subject may receive behavior driving information generated by the physical object, and drive the virtual character model based on the behavior driving information. In this embodiment, the virtual character model needs to be driven in real time by the physical object, where the physical object is played by a real person.
After receiving the behavior driving information generated by the physical object, the execution subject can drive the virtual character model to perform corresponding actions or expressions according to that information. The behavior driving information may be collected by external devices, for example a limb motion capture device (a device capable of collecting limb motion information of the target object), a sound capture device (a device capable of collecting sound information of the target object), a facial expression capture device (a device capable of collecting facial expressions of the target object), and the like.
As one example, a photo of the face of the physical object may be captured using an image capture device, the facial expression in the photo may be determined through image recognition techniques, and real-time expression driving information may then be generated from the facial expression. After receiving the expression driving information, the execution subject may change the facial expression of the virtual character model using that information, so that the facial expression of the virtual character model is consistent with that of the physical object.
As another example, an image capture device may be used to obtain a picture of a limb of the physical object, the limb movement in the picture may be determined through image recognition techniques, and real-time limb movement driving information may then be generated from that movement. After receiving the limb movement driving information, the execution subject may change the limb movement of the virtual character model using that information, so that the limb movement of the virtual character model is consistent with that of the physical object.
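For illustration only, a minimal Python sketch of applying one frame of such driving information to the model; the model class, field names, and numeric values are assumptions, not part of the disclosure.

```python
from typing import Any, Dict

class DrivableCharacterModel:
    """Minimal stand-in for a drivable 3D character model (hypothetical, for illustration)."""
    def __init__(self) -> None:
        self.blend_shape_weights: Dict[str, float] = {}   # facial expression state
        self.joint_rotations: Dict[str, float] = {}       # limb pose state

def drive_virtual_character(model: DrivableCharacterModel, driving_info: Dict[str, Any]) -> None:
    """Apply one frame of behavior driving information captured from the physical object."""
    expression = driving_info.get("expression")
    if expression is not None:
        # Keep the model's facial expression consistent with the physical object.
        model.blend_shape_weights.update(expression)
    limb_motion = driving_info.get("limb_motion")
    if limb_motion is not None:
        # Keep the model's limb movement consistent with the physical object.
        model.joint_rotations.update(limb_motion)

# Example frame assembled from the capture devices.
model = DrivableCharacterModel()
drive_virtual_character(model, {
    "expression": {"smile": 0.8, "brow_raise": 0.2},  # blend-shape weights
    "limb_motion": {"right_elbow": 45.0},             # joint angle in degrees
})
```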
By driving the virtual character model in this way, the resulting virtual character is more realistic and more easily accepted by the audience.
Step 203, acquiring a dynamic effect corresponding to the behavior driving information, and displaying the dynamic effect on a display interface of the virtual character model.
In this embodiment, the execution subject may obtain a dynamic effect corresponding to the behavior driving information, and display the dynamic effect on a display interface of the virtual character model. In this embodiment, a dynamic effect library may be preset, where the dynamic effect library may include a plurality of pieces of preset behavior driving information, with a corresponding dynamic effect matched to each piece of behavior driving information.
After receiving the behavior driving information, the execution subject can match the behavior driving information against each piece of preset behavior driving information in the dynamic effect library, obtain the corresponding dynamic effect if the matching succeeds, and display the dynamic effect on the display interface of the virtual character model.
As an example, the dynamic effect matching the behavior driving information in the dynamic effect library may be a sticker, an expression, background music, a UI (User Interface) dynamic effect, a switching background, and the like.
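For illustration only, a minimal Python sketch of such a dynamic effect library and the matching step; the keys and effect descriptors are hypothetical examples, not part of the disclosure.

```python
# Hypothetical dynamic effect library: preset behavior driving information -> dynamic effect.
DYNAMIC_EFFECT_LIBRARY = {
    "heart": {"type": "sticker", "asset": "heart.png"},
    "happy": {"type": "expression", "asset": "smile_burst.anim"},
    "wave":  {"type": "ui_effect", "asset": "confetti.anim"},
}

def match_dynamic_effect(behavior_driving_info: str):
    """Return the dynamic effect matching the behavior driving information, if any."""
    return DYNAMIC_EFFECT_LIBRARY.get(behavior_driving_info)

# Example: on a successful match, the effect is shown on the model's display interface.
effect = match_dynamic_effect("happy")
if effect is not None:
    print(f"display {effect['asset']} ({effect['type']}) on the display interface")
```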
The virtual character-based display method provided by the embodiment of the disclosure comprises the steps of first obtaining a virtual character model; then receiving behavior driving information generated by the physical object, and driving the virtual character model based on the behavior driving information; and finally acquiring a dynamic effect corresponding to the behavior driving information, and displaying the dynamic effect on a display interface of the virtual character model. The method can drive the virtual character model in real time based on the behavior driving information generated by a physical object, and can add a dynamic effect corresponding to the behavior driving information on the display interface of the virtual character model, so that the virtual character can interact with the audience in real time, making live broadcast interaction more engaging.
With continued reference to fig. 3, fig. 3 is a schematic diagram of one application scenario of the virtual character-based display method according to the present disclosure. In the application scenario of fig. 3, the execution subject may acquire a virtual character model while receiving behavior driving information of the physical object 301 collected by various external devices, such as a limb motion capture device, a sound capture device, and the like, and drive the virtual character model based on that information. In fig. 3, the behavior driving information of the physical object obtained by the sound capture device is "heart". The execution subject then obtains the dynamic effect corresponding to the behavior driving information "heart" and displays the "heart" dynamic effect on the display interface 302 of the virtual character model.
With continued reference to FIG. 4, FIG. 4 illustrates a flow 400 of yet another embodiment of the virtual character-based display method according to the present disclosure. The virtual character-based display method comprises the following steps:
Step 401, obtaining a virtual character model.
Step 402, receiving behavior driving information generated by the physical object, and driving the virtual character model based on the behavior driving information.
Steps 401 and 402 are substantially the same as steps 201 and 202 in the foregoing embodiment, and the specific implementation manner may refer to the foregoing description of steps 201 and 202, which is not described herein again.
In some optional implementations of the present embodiment, the behavior driving information includes, but is not limited to, at least one of: voice information, expression information, and action information, where the action information includes gesture motion information and/or limb motion information. That is, the behavior driving information generated by the physical object may be received, and it may include various combinations of voice information, facial expression information, gesture motion information, limb motion information, and the like.
Step 403, in response to the behavior driving information including the dynamic effect instruction, acquiring a dynamic effect and position information corresponding to the dynamic effect instruction.
In the present embodiment, the execution subject of the virtual character-based display method (e.g., the server 105 shown in fig. 1) may acquire the dynamic effect and position information corresponding to the dynamic effect instruction in the case where the dynamic effect instruction is included in the behavior driving information.
In this embodiment, a plurality of dynamic effect instructions may be preset, and each dynamic effect instruction has a corresponding dynamic effect and corresponding position information. After receiving the behavior driving information generated by the physical object, the execution subject can match it against the preset dynamic effect instructions and, if the matching succeeds, acquire the dynamic effect and position information corresponding to that instruction.
As an example, assuming that the received behavior driving information is "happy", the execution subject matches it against the preset dynamic effect instructions; if the matching succeeds, it obtains the dynamic effect corresponding to "happy", which is a facial expression whose corresponding position information is the face region.
In some optional implementations of this embodiment, step 403 includes: acquiring a dynamic effect corresponding to the dynamic effect instruction; determining the type of the dynamic effect; and determining the position information corresponding to the dynamic effect instruction based on the type of the dynamic effect and the position of the virtual character model on the display interface. That is, the execution subject may obtain the dynamic effect corresponding to the dynamic effect instruction and then determine its type, where the type may include a sticker, an expression, background music, a UI dynamic effect, a background switch, and the like. Then, the position information corresponding to the dynamic effect instruction is determined based on the type of the dynamic effect and the position of the virtual character model on the display interface. For example, if the type of "happy" is determined to be a sticker, then, given the position of the virtual character model on the display interface, it can be determined that the "happy" sticker should be displayed in the face region.
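For illustration only, a minimal Python sketch of determining position information from the effect type and the model's on-screen layout; the type-to-region mapping and the layout structure are assumptions, not part of the disclosure.

```python
# Hypothetical mapping from dynamic effect type to the model region used for display.
TYPE_TO_REGION = {
    "sticker": "face",
    "expression": "face",
    "ui_effect": "hand",
    "background_switch": "full_screen",
    "background_music": None,  # not tied to any screen region
}

def resolve_display_position(effect_type: str, model_layout: dict):
    """Map an effect type to a display region using the model's on-screen layout.

    model_layout is assumed to map region names (e.g. 'face', 'hand') to bounding
    boxes (x, y, width, height) on the display interface.
    """
    region = TYPE_TO_REGION.get(effect_type)
    if region is None:
        return None  # e.g. background music needs no display position
    return model_layout.get(region, model_layout.get("full_screen"))

# Example: a "happy" sticker resolves to the face region of the virtual character model.
layout = {"face": (480, 120, 160, 160), "hand": (620, 410, 80, 80), "full_screen": (0, 0, 1280, 720)}
position = resolve_display_position("sticker", layout)
```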
In some optional implementations of this embodiment, step 403 further includes: acquiring an initial dynamic effect corresponding to the dynamic effect instruction; determining the type of the initial dynamic effect; determining position information based on a preset position on the display interface; and adjusting the initial dynamic effect based on the type and the position information to obtain the dynamic effect. That is, the initial dynamic effect may be adjusted based on its type and the display position information, thereby generating an adjusted dynamic effect. For example, after the dynamic effect instruction is obtained, the initial dynamic effect to be displayed is found to be "like", its type is determined to be a sticker, and it is to be displayed in the hand region; it is then adjusted based on that type and position information, for example the initial dynamic effect "like" may be adjusted to the UI dynamic effect "clap", with the display position remaining the original "like" position.
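For illustration only, a minimal Python sketch of such an adjustment; the substitution table and coordinates are assumptions, mirroring the "like" to "clap" example above.

```python
# Hypothetical substitution table: (initial asset, initial type) -> (adjusted asset, adjusted type).
EFFECT_ADJUSTMENTS = {
    ("like", "sticker"): ("clap", "ui_effect"),
}

def adjust_initial_effect(asset: str, effect_type: str, preset_position: tuple) -> dict:
    """Adjust an initial dynamic effect based on its type and the preset display position."""
    adjusted_asset, adjusted_type = EFFECT_ADJUSTMENTS.get((asset, effect_type), (asset, effect_type))
    return {"asset": adjusted_asset, "type": adjusted_type, "position": preset_position}

# Example from the description above: the "like" sticker shown in the hand region is
# adjusted to the "clap" UI dynamic effect at the original "like" position.
effect = adjust_initial_effect("like", "sticker", preset_position=(620, 410))
```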
Step 404, determining a display position on a display interface of the virtual character model based on the position information.
In this embodiment, the execution subject may determine a display position on the display interface of the virtual character model based on the position information acquired in step 403. That is, the execution subject determines, based on the position information, an area of the virtual character model corresponding to the position information, and determines that area as the location where the dynamic effect is displayed.
As an example, if the position information of the "happy" dynamic effect is the face region, the execution subject may determine that the display position of the "happy" dynamic effect should be the face region of the virtual character model.
Step 405, outputting the dynamic effect on the display position.
In this embodiment, the execution subject may output the dynamic effect at the display position determined in step 404. For example, a dynamic effect corresponding to "happy" may be added to and displayed in a face region of the virtual character model.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the virtual character-based display method in this embodiment first obtains a virtual character model; then receives behavior driving information generated by the physical object and drives the virtual character model based on that information; then, in response to the behavior driving information including a dynamic effect instruction, obtains the dynamic effect and position information corresponding to that instruction; determines a display position on the display interface of the virtual character model based on the position information; and finally outputs the dynamic effect at the display position. In this embodiment, when the behavior driving information includes a dynamic effect instruction, the dynamic effect and the position information corresponding to the instruction are obtained, and the dynamic effect is then displayed at the corresponding position on the display interface of the virtual character model. The method can determine the display position of the corresponding dynamic effect based on the behavior driving information, making interaction with the audience more engaging.
With continued reference to FIG. 5, FIG. 5 illustrates a flow 500 of another embodiment of the virtual character-based display method according to the present disclosure. The virtual character-based display method comprises the following steps:
Step 501, obtaining a virtual character model.
Step 501 is substantially the same as step 401 in the foregoing embodiment, and the specific implementation manner may refer to the foregoing description of step 401, which is not described herein again.
Step 502, receiving behavior driving information generated by the physical object, and driving the virtual character model based on the behavior driving information.
In this embodiment, the execution subject of the virtual character-based display method (for example, the server 105 shown in fig. 1) may receive behavior driving information generated by the physical object, and drive the virtual character model based on the behavior driving information, where the behavior driving information includes a dynamic effect instruction, and the dynamic effect instruction may be an instruction indicating that a dynamic effect is to be added, such as a voice instruction or an action instruction. Step 502 is substantially the same as step 402 in the foregoing embodiment, and the specific implementation manner may refer to the foregoing description of step 402, which is not described herein again.
Step 503, in response to the behavior driving information including the first instruction information, acquiring a dynamic effect corresponding to the first instruction information.
In this embodiment, the execution subject may acquire a dynamic effect corresponding to the first instruction information in a case where the behavior drive information includes the first instruction information.
In some optional implementations of this embodiment, the first instruction information includes a voice instruction. That is, the execution subject may obtain the dynamic effect corresponding to the voice instruction when the behavior driving information includes a voice instruction.
Correspondingly, a plurality of voice instructions can be preset, and a corresponding dynamic effect can be matched to each voice instruction. After acquiring the voice information, the execution subject can perform voice recognition on it to obtain the corresponding voice instruction, then match that instruction against the preset voice instructions, and acquire the corresponding dynamic effect if the matching succeeds. It should be noted that the voice information in this embodiment may be recognized using a voice recognition technology in the prior art, which is not described herein again. As an example, when the voice information is recognized and the obtained voice instruction is "pot change", the dynamic effect corresponding to "pot change" may be obtained.
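For illustration only, a minimal Python sketch of matching a recognized transcript against preset voice instructions; the instruction set and effect descriptors are hypothetical, and speech recognition itself is assumed to be performed by an existing recognizer.

```python
# Hypothetical preset voice instructions and their dynamic effects.
VOICE_INSTRUCTION_EFFECTS = {
    "pot change": {"type": "background_switch", "asset": "scene_b"},
    "heart":      {"type": "sticker", "asset": "heart.png"},
}

def effect_from_voice(transcript: str):
    """Match the recognized speech against preset voice instructions and return the effect."""
    for instruction, effect in VOICE_INSTRUCTION_EFFECTS.items():
        if instruction in transcript:
            return instruction, effect
    return None, None

# Example: the recognized transcript contains the preset instruction "pot change".
instruction, effect = effect_from_voice("give me a pot change please")
```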
Step 504, determining position information based on second instruction information included in the behavior driving information.
In this embodiment, the execution subject may determine the position information based on the second instruction information included in the behavior driving information.
In some optional implementations of this embodiment, the second instruction information includes action information. That is, the execution subject may determine the position information based on the action information in the behavior driving information. The action information and the voice instruction in this step may be obtained simultaneously. Specifically, a plurality of action instructions may be preset, and corresponding position information may be matched to each action instruction. After obtaining the action information, the execution subject analyzes it to obtain the corresponding action instruction, and then obtains the preset position information corresponding to that action instruction.
As an example, when the physical object says the voice instruction "pot change" while producing accompanying action information, the position corresponding to the voice instruction "pot change" may be determined, based on that action information, to be a predetermined position, for example the lower left corner of the display screen.
In some optional implementations of this embodiment, step 504 includes: determining the position information based on the action information and a position keyword, in response to the voice instruction including the position keyword. That is, in the case where the voice instruction includes a preset position keyword, the execution subject may determine the position information based on the action information and the position keyword. The preset position keyword may be a limb position keyword, such as "on the hand" or "on the arm". For example, when the voice instruction is "give me two fires on the hand", the execution subject recognizes that the voice instruction contains the position keyword "on the hand" and determines, based on that keyword, that the position corresponding to the voice instruction is the hand. Meanwhile, the position of the virtual character's hand on the current display screen can be determined from the action information, and that position is where the dynamic effect is displayed.
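For illustration only, a minimal Python sketch of resolving the display position from a position keyword and the captured action information; the keyword list, the action information structure, and the coordinates are assumptions, not part of the disclosure.

```python
# Hypothetical mapping from position keywords in the voice instruction to body parts.
POSITION_KEYWORDS = {"on the hand": "hand", "on the arm": "arm", "on the head": "head"}

def position_from_voice_and_action(voice_instruction: str, action_info: dict):
    """Resolve the display position from a position keyword and tracked body-part positions.

    action_info is assumed to map body-part names to their current on-screen
    coordinates, as produced by a limb motion capture device.
    """
    for keyword, body_part in POSITION_KEYWORDS.items():
        if keyword in voice_instruction:
            return action_info.get(body_part)
    return None  # no position keyword: fall back to a predetermined position

# Example: "give me two fires on the hand" resolves to the tracked hand position.
position = position_from_voice_and_action(
    "give me two fires on the hand",
    {"hand": (620, 410), "head": (600, 120)},
)
```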
Step 505, determining a display position on a display interface of the virtual character model based on the position information.
Step 506, outputting the dynamic effect on the display position.
Steps 505 and 506 are substantially the same as steps 404 and 405 of the foregoing embodiment, and the specific implementation manner can refer to the foregoing description of steps 404 and 405, which is not described herein again.
As can be seen from fig. 5, compared with the embodiment corresponding to fig. 4, the virtual character-based display method in the present embodiment can drive the virtual character model based on the received voice information and action information generated by the physical object; on this basis, the dynamic effect corresponding to the voice instruction can be obtained, the position information corresponding to the voice instruction can be determined based on the action information, and the dynamic effect can thus be output in the corresponding area of the display interface of the virtual character model based on the position information. The method can determine the corresponding dynamic effect and display position from the voice information and action information, enriching the variety of live broadcast interaction.
With further reference to fig. 6, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of a virtual character-based display apparatus, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 6, the virtual character-based display apparatus 600 of the present embodiment may include: an obtaining module 601, a receiving module 602, and a display module 603. The obtaining module 601 is configured to obtain a virtual character model; the receiving module 602 is configured to receive behavior driving information generated by the physical object and drive the virtual character model based on the behavior driving information; and the display module 603 is configured to obtain a dynamic effect corresponding to the behavior driving information and display the dynamic effect on a display interface of the virtual character model.
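For illustration only, a minimal Python sketch of composing the three modules; the module interfaces are hypothetical, since the disclosure defines the modules only functionally.

```python
class VirtualCharacterDisplayApparatus:
    """Illustrative composition of the obtaining, receiving, and display modules."""

    def __init__(self, obtaining_module, receiving_module, display_module):
        self.obtaining_module = obtaining_module  # obtains the virtual character model
        self.receiving_module = receiving_module  # receives driving info and drives the model
        self.display_module = display_module      # matches and displays dynamic effects

    def run_once(self):
        model = self.obtaining_module.obtain_model()
        driving_info = self.receiving_module.receive_driving_info()
        self.receiving_module.drive(model, driving_info)
        effect = self.display_module.match_effect(driving_info)
        if effect is not None:
            self.display_module.show(model, effect)
        return model, effect
```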
In the present embodiment, in the virtual character-based display apparatus 600, the specific processing of the obtaining module 601, the receiving module 602, and the display module 603, and the technical effects thereof, may refer to the related descriptions of steps 201 to 203 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the display module includes: an obtaining submodule configured to obtain a dynamic effect and position information corresponding to the dynamic effect instruction in response to inclusion of the dynamic effect instruction in the behavior driving information; a display sub-module configured to determine a display position on a display interface of the virtual character model based on the position information; an output sub-module configured to output the dynamic effect on the display position.
In some optional implementations of this embodiment, the obtaining sub-module includes: a first obtaining unit configured to obtain a dynamic effect corresponding to first instruction information in response to inclusion of the first instruction information in the behavior driving information; a first determination unit configured to determine the position information based on second instruction information included in the behavior driving information.
In some optional implementation manners of this embodiment, the first instruction information includes a voice instruction, the second instruction information includes action information, and the first determining unit includes: a determination subunit configured to determine, in response to the position keyword being included in the voice instruction, position information based on the action information and the position keyword.
In some optional implementations of this embodiment, the obtaining sub-module includes: a second obtaining unit configured to obtain a dynamic effect corresponding to the dynamic effect instruction; a second determination unit configured to determine a type of the dynamic effect; and the third determining unit is configured to determine the position information corresponding to the dynamic effect instruction based on the type of the dynamic effect and the position of the virtual character model on the display interface.
In some optional implementations of this embodiment, the obtaining sub-module includes: a third obtaining unit configured to obtain an initial dynamic effect corresponding to the dynamic effect instruction; a fourth determination unit configured to determine a type of the initial dynamic effect; a fifth determining unit configured to determine position information based on a preset position on the presentation interface; and the obtaining unit is configured to adjust the initial dynamic effect based on the type and the position information to obtain the dynamic effect.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 7 illustrates a schematic block diagram of an exampleelectronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general purpose and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 701 executes the respective methods and processes described above, such as the virtual character-based display method. For example, in some embodiments, the virtual character-based display method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the virtual character-based display method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the virtual character-based display method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that addresses the defects of high management difficulty and weak service extensibility in conventional physical host and Virtual Private Server (VPS) services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (16)

CN202110585819.9A | Priority date: 2021-05-27 | Filing date: 2021-05-27 | Virtual character-based display method, device, equipment and storage medium | Active | Granted publication: CN113327311B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110585819.9A (granted as CN113327311B (en)) | 2021-05-27 | 2021-05-27 | Virtual character-based display method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110585819.9A (granted as CN113327311B (en)) | 2021-05-27 | 2021-05-27 | Virtual character-based display method, device, equipment and storage medium

Publications (2)

Publication Number | Publication Date
CN113327311A | 2021-08-31
CN113327311B (en) | 2024-03-29

Family

ID=77421677

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110585819.9A (Active; granted as CN113327311B (en)) | Virtual character-based display method, device, equipment and storage medium | 2021-05-27 | 2021-05-27

Country Status (1)

Country | Link
CN (1) | CN113327311B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114237396A (en)* | 2021-12-15 | 2022-03-25 | 北京字跳网络技术有限公司 | Action adjustment method, device, electronic device, and readable storage medium


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104780338A (en)* | 2015-04-16 | 2015-07-15 | 美国掌赢信息科技有限公司 | Method and electronic equipment for loading expression effect animation in instant video
US20190258313A1 (en)* | 2016-11-07 | 2019-08-22 | Changchun Ruixinboguan Technology Development Co., Ltd. | Systems and methods for interaction with an application
CN106569613A (en)* | 2016-11-14 | 2017-04-19 | 中国电子科技集团公司第二十八研究所 | Multi-modal man-machine interaction system and control method thereof
US20190279410A1 (en)* | 2018-03-06 | 2019-09-12 | Didimo, Inc. | Electronic Messaging Utilizing Animatable 3D Models
CN108519816A (en)* | 2018-03-26 | 2018-09-11 | 广东欧珀移动通信有限公司 | Information processing method, device, storage medium and electronic equipment
CN110223199A (en)* | 2018-08-01 | 2019-09-10 | 郎启红 | A kind of application system and its implementation based on position simulation real community
CN111385594A (en)* | 2018-12-29 | 2020-07-07 | 腾讯科技(深圳)有限公司 | Virtual character interaction method, device and storage medium
US20200306640A1 (en)* | 2019-03-27 | 2020-10-01 | Electronic Arts Inc. | Virtual character generation from image or video data
CN111862280A (en)* | 2020-08-26 | 2020-10-30 | 网易(杭州)网络有限公司 | Virtual character control method, system, medium and electronic device
CN112162628A (en)* | 2020-09-01 | 2021-01-01 | 魔珐(上海)信息科技有限公司 | Multi-mode interaction method, device and system based on virtual role, storage medium and terminal
CN111970535A (en)* | 2020-09-25 | 2020-11-20 | 魔珐(上海)信息科技有限公司 | Virtual live broadcast method, device, system and storage medium
CN112684894A (en)* | 2020-12-31 | 2021-04-20 | 北京市商汤科技开发有限公司 | Interaction method and device for augmented reality scene, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114237396A (en)* | 2021-12-15 | 2022-03-25 | 北京字跳网络技术有限公司 | Action adjustment method, device, electronic device, and readable storage medium
CN114237396B (en)* | 2021-12-15 | 2023-08-15 | 北京字跳网络技术有限公司 | Action adjustment method, device, electronic device and readable storage medium

Also Published As

Publication number | Publication date
CN113327311B (en) | 2024-03-29

Similar Documents

Publication | Title
US11823306B2 (en) | Virtual image generation method and apparatus, electronic device and storage medium
CN113325954B (en) | Method, apparatus, device and medium for processing virtual object
CN112527115B (en) | User image generation method, related device and computer program product
US20230107213A1 (en) | Method of generating virtual character, electronic device, and storage medium
CN113365146B (en) | Method, apparatus, device, medium and article of manufacture for processing video
CN114093006A (en) | Training method, device and equipment of living human face detection model and storage medium
CN112102449A (en) | Virtual character generation method, virtual character display device, virtual character equipment and virtual character medium
CN114187392B (en) | Method, device and electronic device for generating virtual idol
CN114429767B (en) | Video generation method, device, electronic device, and storage medium
CN113627536B (en) | Model training, video classification methods, devices, equipment and storage media
CN111523467B (en) | Face tracking method and device
CN114187405A (en) | Method, apparatus, device, medium and product for determining an avatar
CN113870399A (en) | Expression driving method, device, electronic device and storage medium
CN114549710A (en) | Virtual image generation method and device, electronic equipment and storage medium
CN114186681A (en) | Method, apparatus and computer program product for generating model clusters
CN113724398A (en) | Augmented reality method, apparatus, device and storage medium
CN114792355A (en) | Virtual image generation method and device, electronic equipment and storage medium
CN113792876A (en) | Backbone network generation method, device, device and storage medium
CN113359995A (en) | Man-machine interaction method, device, equipment and storage medium
JP2023070068A (en) | Video stitching method, apparatus, electronic device, and storage medium
CN114638919A (en) | Virtual image generation method, electronic device, program product and user terminal
CN113327311B (en) | Virtual character-based display method, device, equipment and storage medium
CN113240780A (en) | Method and device for generating animation
CN113905040A (en) | File transfer method, apparatus, system, device and storage medium
CN112801083A (en) | Image recognition method, device, equipment and storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
