Detailed Description
To help those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be described clearly and completely below with reference to the drawings in those embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present specification. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step shall fall within the scope of protection of the present specification.
It should be noted that in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described in this specification. In some other embodiments, the methods may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps in other embodiments, and multiple steps described in this specification may be combined into a single step in other embodiments.
Several concepts used in the present disclosure are introduced below.
Apparel goods: the apparel goods in the present disclosure may cover various kinds of apparel, including not only clothes, trousers, shoes, socks, and the like, but also jewelry, hair accessories, pendants, and accessories such as handbags.
Human body model: the human body model is a digital 3D model. It can be obtained by performing 3D modeling on a real person, or it can be generated by simulation as the human body model of a virtual person. A merchant can have its own human body model, and a user can also define a personalized human body model. Different human body models may differ in size and/or in the corresponding gender.
Clothing model of an apparel product: the clothing model is a digital 3D model, and data such as photos, videos, and materials of the physical apparel product can be collected for 3D modeling. A merchant can save the clothing models and product numbers of at least some of its apparel products to a database. The merchant can also mark style labels, such as sweet, vintage, neutral, and casual, on different apparel products. Different style labels are associated with matching dressing schemes (such as hairstyle, makeup, and other apparel products to be matched with the product) and matching try-on display actions (such as hands on hips, turning in a circle, smiling, and jumping). It is easy to understand that a try-on display action can be understood as a sequence of consecutive postures.
Clothing shape prediction model: as will be understood by those skilled in the art, the "model" in the clothing shape prediction model is not the same kind of concept as the "model" in a 3D model.
Virtual fitting 3D image: this refers to one frame of a 3D image; it is easy to understand that multiple frames of virtual try-on 3D images can form a virtual try-on 3D video. A virtual try-on 2D image is a projection of a virtual try-on 3D image onto a plane, i.e., the result of projecting the virtual try-on 3D image onto the plane at a certain angle. When a user rotates the angle of the combined model (i.e., the combination of the human body model and the clothing model of the apparel product) in the virtual try-on 3D image, different plane projection results (i.e., virtual try-on 2D images) are generated, and these projection results are computed through rendering. Multiple frames of virtual fitting 2D images can form a virtual fitting 2D video.
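As a minimal illustration of the projection described above, the sketch below rotates a toy combined model about the vertical axis and drops the depth coordinate to obtain a 2D projection. The function names, the orthographic projection, and the toy vertices are illustrative assumptions, not part of the disclosure; a real renderer would use perspective projection, lighting, and rasterization.

```python
import numpy as np

def rotate_y(points, angle_rad):
    """Rotate a point cloud about the vertical (y) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    return points @ rot.T

def project_orthographic(points):
    """Drop the depth (z) coordinate to obtain a plane projection."""
    return points[:, :2]

# A toy "combined model": a few 3D vertices of mannequin plus garment.
combined_model = np.array([[0.0, 1.7, 0.1],   # head
                           [0.0, 1.0, 0.15],  # torso / garment point
                           [0.2, 0.5, 0.1]])  # leg

# Rotating the combined model by a user-chosen angle yields a different
# plane projection, i.e. a different virtual try-on 2D image per frame.
image_2d = project_orthographic(rotate_y(combined_model, np.pi / 6))
print(image_2d.shape)  # (3, 2): each 3D vertex maps to a 2D point
```

Re-running the projection with a new angle for every frame is what produces the sequence of 2D images that forms a virtual fitting 2D video.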
The technical solution is introduced as follows:
to replace real-person fitting of apparel, a virtualized human body model and a virtualized clothing model may be employed to generate a virtual fitting 3D image. To ensure that the presentation of the virtual fitting 3D image is not rigid but instead simulates the natural effect of real-person fitting as closely as possible (i.e., is more lifelike), the correlation between the posture changes of the human body model and the resulting shape changes of the clothing model being tried on is taken into account. The closer this correlation in the virtual fitting 3D image is to the corresponding correlation in a real-person fitting scene, the more lifelike the generated virtual fitting 3D image becomes.
To make the correlation between the two in the virtual fitting 3D image as close as possible to the correlation between the two in a real-person fitting scene, a certain correlation rule needs to be discovered. Therefore, the technical solution adopts artificial intelligence (AI), and an AI model can be used to discover the correlation rule.
Considering that the posture change of the human body model depends on the size characteristics of the human body model and is dynamic data, the size parameter and the posture parameter sequence of the human body model are used to describe the posture change of the human body model. Considering that the shape change of the clothing model starts from the default shape of the clothing model and is also dynamic data, the default shape parameter and the non-default shape parameter sequence of the clothing model are used to describe the shape change of the clothing model.
Further, to handle serialized data, a sequence-prediction AI algorithm needs to be adopted to construct the AI model. A clothing shape prediction model is therefore constructed using a recurrent neural network, and the size parameter of the human body model, the posture parameter sequence of the human body model, and the default shape parameter of the clothing model are input into the clothing shape prediction model. The output of the clothing shape prediction model is the non-default shape parameter sequence of the clothing model. Thus, given a human body model, a series of postures the human body model is to make, and a clothing model, the shape of the clothing model under each posture of the human body model can be predicted by the clothing shape prediction model, so that in the 3D images of the human body model simulating the try-on of the clothing model, the shape of the clothing model looks natural and shows a dynamic, flexible effect.
Therefore, a more lifelike virtual fitting 3D image can be obtained through this technical solution. In addition, computing power only needs to be consumed when training the clothing shape prediction model. In a scenario where a large number of virtual fitting videos of apparel products need to be generated, the trained clothing shape prediction model can be provided directly to merchants, who can then use it to generate 3D images in batches, so the computing power consumption is lower and the efficiency of generating 3D images is high.
The technical solution for generating virtual try-on 3D images can be applied, in particular, to e-commerce live-broadcast scenarios. In such scenarios, it is increasingly common for e-commerce merchants to market goods to users (as live-stream viewers) via internet live broadcasts. When an anchor of an e-commerce merchant promotes apparel goods (such as clothes, trousers, shoes, hats, and ornaments) to users in a live broadcast, the anchor usually tries on the apparel goods in person to show the on-body effect of the goods and attract users to purchase them. In an actual live broadcast, it is difficult for the anchor to meet fitting requests raised by users in time, so a pre-made fitting 3D image of a given apparel product needs to be played for the user.
For example, while the anchor is introducing apparel product A, a user posts a comment asking the anchor to try on apparel product B. The anchor must either interrupt the introduction of product A to try on product B, which disrupts the live broadcast, or temporarily ignore the user's request, which harms that user's viewing experience.
For another example, when the anchor tries on clothing during a live broadcast, the anchor needs to leave the camera temporarily and spend some time changing, which also affects the users' viewing experience. In particular, if the anchor needs to try on different outfits one after another, even more time is spent matching the corresponding hairstyle, makeup, and so on.
With the above technical solution, the virtual fitting 3D image of a given apparel product can be played to the user at any time. Moreover, the virtual fitting video the user watches is highly realistic and lifelike: it simulates the clothing shape changes that would occur in a real-person fitting, presents a very natural effect, gives the user an experience close to watching a real person try on the clothing, and spares the user from waiting for the anchor to change.
In addition, the hairstyle and makeup of the human body model in the virtual try-on video can be changed flexibly, which is more efficient than having the anchor's hairstyle and makeup adjusted.
It should be noted that the above technical solution for generating virtual fitting 3D images can be applied not only to e-commerce live-broadcast scenarios but also to other scenarios in which the fitting effect of apparel goods needs to be displayed. For example, when a user browses the page of an apparel product on an e-commerce platform, the user may click an introduction picture of the product. The introduction picture may be one or more frames of virtual try-on 3D images, and the user may rotate the angle of the combined model (i.e., the combination of the human body model and the clothing model of the product) in these images to view the try-on effect from multiple angles.
Therefore, the application of this technical solution to the e-commerce live-broadcast scenario described later is only one possible implementation, and it does not limit the range of application scenarios of the technical solution for generating virtual fitting 3D images. After understanding this technical solution, those skilled in the art will readily think of applying it to more scenarios in which the fitting effect of apparel goods needs to be displayed, without any additional creative effort.
The technical solution is described in detail below with reference to the accompanying drawings.
Fig. 1 exemplarily provides a flow of a method of generating a virtual fitting video, including:
S100: determine the human body model to be used for try-on and the clothing model of the apparel product to be tried on.
S102: acquire the size parameter and posture parameter sequence of the human body model.
S104: acquire the default shape parameter of the clothing model.
S106: input the size parameter, the posture parameter sequence, and the default shape parameter of the clothing model into the clothing shape prediction model, which outputs the non-default shape parameter sequence of the clothing model.
S108: fuse the human body model under each posture parameter with the clothing model under the non-default shape parameter corresponding to that posture parameter to obtain a corresponding frame of the virtual fitting 3D image.
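The steps S100–S108 can be sketched at a high level as follows. Every function, dictionary key, and data value here is a hypothetical placeholder used only to show the data flow, not an API defined by this specification.

```python
# A high-level sketch of steps S100-S108; all names are hypothetical.

def generate_virtual_fitting_frames(body_model, apparel_model, predict_shapes):
    size = body_model["size"]                 # S102: size parameter
    poses = body_model["poses"]               # S102: posture parameter sequence
    default_shape = apparel_model["default"]  # S104: default shape parameter

    # S106: predict one non-default shape per posture in the sequence.
    shapes = predict_shapes(size, poses, default_shape)

    # S108: fuse the body model under each posture with the apparel model
    # under the corresponding predicted shape -> one 3D frame each.
    return [{"pose": p, "shape": s} for p, s in zip(poses, shapes)]

# Toy stand-ins for the mannequin, garment, and trained prediction model.
frames = generate_virtual_fitting_frames(
    {"size": (60, 40, 170), "poses": ["stand", "turn", "jump"]},
    {"default": "flat"},
    lambda size, poses, default: [f"{default}->{p}" for p in poses],
)
print(len(frames))  # 3: one virtual fitting 3D frame per posture
```

The one-frame-per-posture correspondence is the key invariant: the length of the output frame list always equals the length of the posture parameter sequence.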
The method shown in fig. 1 may be implemented by a merchant of apparel goods.
Before the method shown in fig. 1 is implemented, a clothing shape prediction model constructed based on a recurrent neural network (RNN) may be trained. The training party of the model can be the merchant or a technical service provider. When the technical service provider trains the model, it can provide the trained model to merchants for use.
As mentioned above, a set of input data for the model may consist of the size parameters of the human body model (three-dimensional dimensions such as length, width, and height, corresponding respectively to how fat or thin, how wide, and how tall the model is, possibly also including the head-to-body ratio, head-to-shoulder ratio, etc.), the posture parameter sequence of the human body model, and the default shape parameter of the clothing model of the apparel product.
The posture parameter sequence of the human body model includes multiple consecutive posture parameters for the human body model to make multiple consecutive posture changes. The default shape parameter of the apparel product is determined according to the shape of the apparel product in its non-worn state.
Those skilled in the art can readily conceive of various technical means to define the above posture parameters and shape parameters; examples are provided here. For instance, a posture parameter may be the inclination angle of each joint of the human body model relative to the horizontal plane, and a shape parameter may be the three-dimensional coordinates, in a point cloud coordinate system, of characteristic shape points of the apparel product (the coordinate system may be established using the MeshLab software).
A set of output data of the model may be a non-default shape parameter sequence of the apparel product. The clothing shape prediction model is constructed based on a recurrent neural network, and each non-default shape parameter in the non-default shape parameter sequence corresponds one-to-one to a posture parameter in the posture parameter sequence. The non-default shape parameter corresponding to a posture parameter is the shape parameter of the clothing model after its shape has been changed by being tried on by the human body model under that posture parameter.
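As an illustrative sketch of this input/output interface, the toy recurrent cell below consumes the static conditioning (size parameters plus default shape) together with one posture parameter per step, and emits one non-default shape parameter vector per step. All dimensions, and the untrained random weights, are assumptions made purely for illustration; a real clothing shape prediction model would use a trained RNN.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 5           # length of the posture parameter sequence
POSE_DIM = 8    # per-posture parameters (e.g. joint inclination angles)
COND_DIM = 3 + 4  # size parameters (3 dimensions) + default shape summary (4)
HID = 16        # recurrent hidden state size
SHAPE_DIM = 6   # per-frame non-default shape parameters

# Toy random weights of a recurrent cell (a real model would be trained).
W_in = rng.normal(size=(POSE_DIM + COND_DIM, HID))
W_h = rng.normal(size=(HID, HID))
W_out = rng.normal(size=(HID, SHAPE_DIM))

def predict_shape_sequence(size_params, pose_seq, default_shape):
    """Map (size, pose sequence, default shape) -> non-default shape sequence."""
    cond = np.concatenate([size_params, default_shape])  # static conditioning
    h = np.zeros(HID)
    out = []
    for pose in pose_seq:                 # one recurrent step per posture
        x = np.concatenate([pose, cond])
        h = np.tanh(x @ W_in + h @ W_h)   # hidden state carries pose history
        out.append(h @ W_out)             # one non-default shape per posture
    return np.stack(out)

shapes = predict_shape_sequence(
    np.zeros(3), rng.normal(size=(T, POSE_DIM)), np.zeros(4))
print(shapes.shape)  # (5, 6): one shape parameter vector per posture
```

The one-to-one correspondence described above shows up directly in the output shape: the first axis of the output equals the number of posture parameters in the input sequence.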
It will be readily appreciated that a set of input data of the model describes a virtual human body model trying on an apparel product, while the output data of the model describes the shape of the apparel product after it has been deformed by the virtual model's try-on.
In the model training phase, training labels for the model need to be specified, usually by the training party. Given a series of postures to be made by the human body model and a clothing model, the training party can use a physics computation engine to compute the shape parameters of the clothing model under each posture of the human body model, forming a real non-default shape parameter sequence to serve as the training label.
In addition, the training party can also obtain the shape parameters of the apparel product under each posture of a real person, by having a real person try on the apparel product, to form a real non-default shape parameter sequence.
During training, the non-default shape parameter sequence predicted by the model gradually approaches the real non-default shape parameter sequence through iterative training, and the model training is thereby completed.
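The way iterative training drives predictions toward the physics-engine labels can be illustrated with a deliberately simplified stand-in: gradient descent on a linear model under a mean-squared-error loss. The real clothing shape prediction model is a recurrent network, and the labels here are synthetic stand-ins for physics-engine output; only the convergence mechanism is the point of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: inputs X (size/pose features per step) and "real" non-default
# shape labels Y, standing in for physics-engine-computed shape parameters.
X = rng.normal(size=(100, 4))
true_W = rng.normal(size=(4, 2))
Y = X @ true_W                       # training labels (physics-engine stand-in)

W = np.zeros((4, 2))                 # model parameters to be learned
lr = 0.05
for _ in range(500):                 # iterative training
    pred = X @ W
    grad = 2 * X.T @ (pred - Y) / len(X)  # gradient of mean squared error
    W -= lr * grad                   # predictions approach the real sequence

final_loss = float(np.mean((X @ W - Y) ** 2))
print(final_loss < 1e-3)  # True: predicted sequence has converged to labels
```

The same principle applies to the recurrent model: the loss between the predicted and real non-default shape parameter sequences is minimized step by step until the two are close enough.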
In the model training stage, the model may be trained based on multiple different human body models and/or multiple different apparel products. That is, different sets of model input data may correspond to different human body models or to different apparel products.
In the model application stage, the human body model to be used for try-on and the clothing model of the apparel product to be tried on are determined; the size parameter and posture parameter sequence of the human body model and the default shape parameter of the clothing model are taken as model input data and fed into the clothing shape prediction model, which outputs the predicted non-default shape parameter sequence of the clothing model.
Then, based on the non-default shape parameter sequence of the clothing model, the human body model under each posture parameter is fused with the clothing model under the non-default shape parameter corresponding to that posture parameter to obtain a corresponding frame of the virtual fitting 3D image.
In some embodiments, the generated virtual fitting 3D image may involve other apparel goods in addition to the apparel product being fitted. For example, suppose the current apparel product targeted for fitting is a red coat, and the other apparel goods to be matched with it include a pair of pink trousers, a pair of blue leather shoes, and a pair of black-framed glasses. The human body model then tries on the red coat, the pink trousers, the blue leather shoes, and the black-framed glasses at the same time, yielding the virtual try-on video corresponding to the red coat. It is readily understood that, as a result, the virtual fitting 3D images of different apparel goods may be the same.
Thus, the other clothing model of each of at least one other apparel product to be fitted may be determined; the default shape parameter of each other clothing model is acquired; for each other clothing model, the size parameter and posture parameter sequence of the human body model and the default shape parameter of that other clothing model are input into the clothing shape prediction model, which outputs the non-default shape parameter sequence of that other clothing model; and the human body model under each posture parameter is fused with the clothing model, and with each other clothing model, under the non-default shape parameters corresponding to that posture parameter, to obtain a corresponding frame of the virtual fitting 3D image.
In some embodiments, the human body model under each posture parameter may be fused with the clothing model under the non-default shape parameter corresponding to that posture parameter, and the simulated natural light reflections presented by the fused human body model and clothing model may be rendered to obtain a corresponding frame of the virtual fitting 3D image.
In this way, a frame of the virtual fitting 3D image can present not only the natural deformation of the clothing after try-on but also the light-and-shadow effect caused by that deformation.
In addition, after multiple frames of virtual fitting 3D images of the apparel product are obtained based on the method flow shown in fig. 1, a virtual fitting video of the apparel product may be obtained from the obtained virtual fitting 3D images, where the virtual fitting video is a 2D video or a 3D video.
Fig. 2 exemplarily provides a flow of a video playing method applied to e-commerce live broadcast, including:
S200: in response to a virtual fitting instruction, determine the apparel product to be fitted.
S202: play the virtual fitting video of the apparel product to the user.
In the e-commerce live-broadcast scenario, the method flow shown in fig. 2 may be implemented by a live-room system. Virtual fitting videos of one or more apparel products may be generated based on the method of generating a virtual fitting video described above. The merchant can generate these virtual fitting videos in advance, before the live broadcast starts, or generate some of them in real time on demand after the broadcast has started and then play them.
The apparel product to be fitted can be determined in response to a virtual fitting instruction sent by the e-commerce-side system, or in response to a virtual fitting instruction sent by a user-side client.
That is, the anchor or any user watching the live broadcast can initiate the playing of the virtual fitting video.
The human body model to be used for try-on can also be determined in response to a virtual fitting instruction sent by the user-side client or the e-commerce-side system. That is, when initiating the playing of the virtual fitting video, the user may specify the virtual model, and the e-commerce merchant may also specify the virtual model. In general, the virtual fitting videos generated by the e-commerce merchant for a given apparel product can come in different human body model versions. For example, different virtual fitting videos may be generated using human body models of different genders, or using human body models of different sizes.
Further, the user-side client of a certain user may be pre-configured with a personalized human body model; for example, the personalized human body model may be a human body model of the user themselves. In this way, the user can watch a virtual fitting video in which their own human body model tries on a given apparel product.
In addition, when there are multiple user-side clients (in practice, many users often watch a live broadcast at the same time), if the apparel product to be fitted is determined in response to a virtual fitting instruction sent by one user-side client, the virtual fitting video of that apparel product may be played only to that user-side client.
In some embodiments, when the virtual fitting video of the apparel product is a 2D video, a first live video stream currently to be played to the user may be acquired, and the virtual fitting video of the apparel product may be fused into the first live video stream to obtain a second live video stream; the second live video stream is then played to the user.
Further, the picture of the second live video stream includes a first picture area and a second picture area; the first picture area contains the picture of the first live video stream, and the second picture area contains the picture of the virtual try-on video corresponding to the apparel product. In practice, when watching the live video stream, the user may see the real-person anchor introducing products on one side of the picture and a virtual model trying on a given apparel product on the other side.
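The side-by-side picture layout of the second live video stream can be sketched as a simple per-frame compositing step. The frame sizes, pixel values, and the function name are illustrative assumptions; a production system would composite inside the streaming pipeline rather than on raw arrays.

```python
import numpy as np

def compose_second_stream(live_frame, tryon_frame):
    """Place the live picture area on the left and the try-on picture area
    on the right, forming one frame of the second live video stream."""
    assert tryon_frame.shape[0] == live_frame.shape[0], "frames must share a height"
    return np.concatenate([live_frame, tryon_frame], axis=1)

# Toy frames with shape (height, width, RGB channels).
first_stream_frame = np.zeros((90, 160, 3), dtype=np.uint8)    # anchor picture
tryon_video_frame = np.full((90, 80, 3), 255, dtype=np.uint8)  # virtual fitting

second_stream_frame = compose_second_stream(first_stream_frame, tryon_video_frame)
print(second_stream_frame.shape)  # (90, 240, 3): two picture areas side by side
```

Applying this compositing to every frame pair yields the fused second live video stream described above.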
In some embodiments, instead of merging the virtual fitting 2D video into the live video stream, a sub-interface may be popped up in the live-viewing interface in response to an operation instruction from the user, and the virtual fitting video of the apparel product selected by the user may be played in that sub-interface.
In some embodiments, if the virtual fitting video is a 3D video, a 3D virtual anchor may be rendered live in the broadcast room, and the 3D virtual anchor may show the fitting effects of multiple apparel products on the spot. Users can see the 3D virtual anchor through the live video stream.
In practice, if a user starts watching the live broadcast before the anchor has started, or while the anchor is temporarily away during the broadcast, virtual fitting videos of selected apparel products can be played to the user, which effectively keeps the user watching the broadcast.
In practical applications, the merchant may mark specific style labels, such as sweet, vintage, neutral, and casual, on apparel products. Different styles imply different dressing schemes and/or different try-on display actions (which can be understood as sequences of display postures), such as turning in a circle, hands on hips, smiling, and jumping. Besides hairstyle and makeup, a dressing scheme can also involve at least one apparel product other than the target apparel product.
In some embodiments, the step of generating the virtual fitting videos of one or more apparel products may include: for each of the one or more apparel products, determining at least one dressing scheme corresponding to the apparel product; for each dressing scheme, determining the apparel products in the scheme other than the target apparel product; and generating the virtual fitting video of the apparel product corresponding to that dressing scheme.
Further, the dressing scheme specified by a virtual fitting instruction may be determined in response to that instruction, so that the virtual fitting video of the apparel product corresponding to that dressing scheme can be played to the user.
In other embodiments, the step of generating the virtual fitting videos of one or more apparel products may include: for each of the one or more apparel products, determining the display postures corresponding to the apparel product; determining the posture parameter sequence of the human body model according to those display postures; and generating the virtual fitting video of the apparel product.
The above embodiments that consider different dressing schemes may be combined with the above embodiments that consider different display actions.
The present disclosure also provides a computer readable storage medium, as shown in fig. 3, on which medium 140 a computer program is stored, which when executed by a processor implements the method of the embodiments of the present disclosure.
The present disclosure also provides a computing device comprising a memory, a processor; the memory is used to store computer instructions executable on the processor for implementing the methods of the embodiments of the present disclosure when the computer instructions are executed.
Fig. 4 is a schematic structural diagram of a computing device provided by the present disclosure. The computing device 15 may include, but is not limited to: a processor 151, a memory 152, and a bus 153 that connects the various system components, including the memory 152 and the processor 151.
The memory 152 stores computer instructions executable by the processor 151, enabling the processor 151 to perform the method of any embodiment of the present disclosure. The memory 152 may include a random access memory unit RAM 1521, a cache memory unit 1522, and/or a read-only memory unit ROM 1523. The memory 152 may further include a program tool 1525 having a set of program modules 1524, the program modules 1524 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these may include an implementation of a network environment.
The bus 153 may include, for example, a data bus, an address bus, and a control bus. The computing device 15 may also communicate with external devices 155, such as a keyboard or a Bluetooth device, through the I/O interface 154. The computing device 15 may also communicate with one or more networks, such as a local area network, a wide area network, or a public network, through the network adapter 156. As shown, the network adapter 156 may also communicate with other modules of the computing device 15 via the bus 153.
Further, while the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the present disclosure have been described with reference to several particular embodiments, it is to be understood that the present disclosure is not limited to the particular embodiments disclosed, nor does the division into aspects imply that features in those aspects cannot be combined to advantage; this division is for convenience of description only. The present disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include a volatile memory in a computer-readable medium, such as a random access memory (RAM), and/or a non-volatile memory, such as a read-only memory (ROM) or a flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing describes several embodiments of the present specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the various embodiments is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments herein. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, and so on may be used in various embodiments of the present specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the various embodiments herein, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
The embodiments in the present specification are described in a progressive manner; for the same or similar parts among the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus embodiments are substantially similar to the method embodiments, they are described relatively briefly, and for relevant parts reference may be made to the corresponding description of the method embodiments. The apparatus embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and when the embodiments of the present disclosure are implemented, the functions of the modules may be realized in one or more pieces of software and/or hardware. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of the embodiments. A person of ordinary skill in the art can understand and implement the embodiments without inventive effort.
The above description is merely directed to the preferred embodiments of the present disclosure and is not intended to limit the present disclosure; any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the embodiments shall fall within the scope of protection of the present disclosure.