Detailed Description
Natural Language Processing (NLP) is an important direction in the fields of computer science and artificial intelligence. It studies theories and methods that enable effective communication between humans and computers in natural language. Natural language processing is a science that integrates linguistics, computer science, and mathematics. Research in this field therefore involves natural language, i.e., the language people use daily, so it is closely related to research in linguistics. Natural language processing techniques typically include text processing, semantic understanding, machine translation, question answering, knowledge graph techniques, and the like.
Specifically, analyzing and processing the target text based on the semantic coding model, the emotion generation model, and the emotion classification model to obtain the emotion classification result of the target text relates to text processing, semantic understanding, and other NLP techniques.
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort are intended to fall within the scope of the invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or device.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention will be further described in detail below with reference to the accompanying drawings and examples. It should be understood that the detailed description and specific examples, while indicating the embodiment of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention. First, the embodiment of the present invention explains the following concept:
Emotion classification: refers to classifying text into predefined emotion types (e.g., anger, happiness, etc.) based on the text content.
BERT: short for Bidirectional Encoder Representations from Transformers, a method of training a bidirectional language model on massive amounts of text. BERT is used to extract text features; it can fully describe character-level, word-level, sentence-level, and even inter-sentence relationship features, and is widely used in various natural language processing tasks.
LSTM: Long Short-Term Memory network, a manually designed recurrent neural network architecture mainly intended to solve the problems of vanishing and exploding gradients when training over long sequences. LSTM has found a variety of applications in natural language processing and beyond; LSTM-based systems have been used for tasks such as machine translation, robot control, image analysis, document summarization, speech recognition, and image recognition.
MLP: short for Multi-Layer Perceptron, a multi-layer feedforward neural network. An MLP is a neural network consisting of fully connected layers, containing at least one hidden layer, and the output of each hidden layer is transformed by an activation function.
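Illustratively, a minimal PyTorch sketch of such a multi-layer perceptron is given below; the layer sizes and the six-class output are illustrative assumptions rather than values fixed by the embodiment.

```python
import torch
import torch.nn as nn

# Fully connected layers with an activation after each hidden layer;
# all sizes and the six-class output are illustrative assumptions.
mlp = nn.Sequential(
    nn.Linear(256, 128),  # input layer -> first hidden layer
    nn.ReLU(),            # activation applied to the hidden output
    nn.Linear(128, 64),   # second hidden layer
    nn.ReLU(),
    nn.Linear(64, 6),     # output layer
)
logits = mlp(torch.randn(1, 256))  # shape: (1, 6)
```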
Referring to fig. 1 of the specification, a schematic diagram of an implementation environment of a text emotion classification method according to an embodiment of the present invention is shown. As shown in fig. 1, the implementation environment may include at least a terminal 110 and a server 120, where the terminal 110 and the server 120 may be directly or indirectly connected through a wired or wireless communication manner, and the present invention is not limited thereto. For example, the terminal 110 may upload, through a wired or wireless communication manner, a target text that needs to be subjected to emotion classification, etc., to the server 120, and the server 120 may return, through a wired or wireless communication manner, the emotion classification result of the target text, etc., to the terminal 110.
Specifically, the terminal 110 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited thereto. The server 120 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services.
It should be noted that fig. 1 is only an example.
Referring to fig. 2 of the specification, a flow of a text emotion classification method according to an embodiment of the present invention is shown. The text emotion classification method provided by the embodiment of the invention can be applied to any scene needing emotion analysis of the text, for example, can be applied to emotion classification of the text input by the user in intelligent customer service and voice assistant, and can also be applied to text emotion classification of user evaluation information. As shown in fig. 2, the method may include:
s210: and obtaining a target text to be classified, and processing the target text to obtain a word vector sequence of the target text.
In the embodiment of the present invention, the target text may be any text, and the mode of obtaining the target text is not specifically limited in the embodiment of the present invention, for example, the target text may be obtained based on an application scenario.
If the text emotion classification method provided by the embodiment of the invention is applied to scenarios such as intelligent question answering, intelligent customer service, and voice assistants, the scenario includes a server and a terminal for implementing the text emotion classification method. The terminal displays a user interface, acquires a text input by a user through the user interface, and sends the text to the server; the text received by the server is the acquired target text.
It should be noted that the target text may be a passage input by the user, or may be a sentence or a word; the target text may be any content, for example, a certain question or an expression of a certain user idea, and the embodiment of the present invention does not limit the content or the length of the target text. In practical application, if the text to be subjected to emotion analysis is a text paragraph comprising a plurality of sentences, the text paragraph may be directly used as the target text, and emotion classification may be performed by the text emotion classification method provided by the embodiment of the invention to obtain an emotion classification result of the text paragraph; alternatively, the text paragraph may be split into sentences, each sentence after splitting may be used as a target text, and emotion classification may be performed by the text emotion classification method provided by the embodiment of the invention to obtain an emotion classification result for each sentence, thereby determining the emotion classification result of the text paragraph, as shown in the sketch below.
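Illustratively, a minimal sketch of the sentence-splitting option is given below; the punctuation-based splitting rule and the majority vote over per-sentence results are assumptions, since the embodiment does not fix how the sentence results determine the paragraph result.

```python
import re
from collections import Counter
from typing import Callable

def classify_paragraph(paragraph: str,
                       classify_sentence: Callable[[str], str]) -> str:
    # Split the paragraph into sentences on common sentence-ending
    # punctuation (the splitting rule is an assumption).
    sentences = [s for s in re.split(r"[。！？.!?]", paragraph) if s.strip()]
    # Classify each sentence with the text emotion classification method,
    # then take the majority emotion as the paragraph-level result
    # (majority voting is an assumption).
    labels = [classify_sentence(s) for s in sentences]
    return Counter(labels).most_common(1)[0][0]
```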
In one possible embodiment, the obtaining the target text to be classified, and processing the target text to obtain the word vector sequence of the target text may include:
performing word segmentation processing on the target text to obtain a plurality of words contained in the target text;
encoding the plurality of words respectively to obtain word vectors of the plurality of words;
and generating a word vector sequence of the target text according to the word vectors of the words.
In the embodiment of the invention, the word segmentation may be performed by any word segmentation method in the text processing field, as long as the words included in the target text can be determined. For example, if the target text is "I am really depressed today, not happy", word segmentation of the target text yields the words "I", "today", "really", "depressed", and "not happy" included in the target text.
In the embodiment of the present invention, any word vector representation method in the text processing field may be used to determine word vectors of a plurality of words included in the target text, which is not limited in the embodiment of the present invention. For example, the word vector of each word may be determined by a pre-trained word vector model.
In the embodiment of the invention, the word vector sequence refers to the ordered word vectors, and the ordering of the word vectors corresponds to the front-to-back order of the words in the target text.
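Illustratively, a minimal sketch of step S210 is given below, assuming the jieba tokenizer and a pre-trained gensim word vector model; both tools and the model path are illustrative assumptions, since the embodiment does not mandate any specific word segmentation or word vector method.

```python
import jieba
import numpy as np
from gensim.models import KeyedVectors

# Hypothetical path to a pre-trained word vector model (assumption).
word_vectors = KeyedVectors.load("word2vec.kv")

def text_to_word_vector_sequence(text: str) -> np.ndarray:
    # Word segmentation: split the target text into words.
    words = list(jieba.cut(text))
    # Encode each word as a word vector; out-of-vocabulary words are
    # skipped here for simplicity (the embodiment does not specify
    # out-of-vocabulary handling).
    vectors = [word_vectors[w] for w in words if w in word_vectors]
    # The word vector sequence preserves the order of words in the text.
    return np.stack(vectors)
```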
S220: and processing the word vector sequence by utilizing a pre-trained semantic coding model to obtain a word semantic vector sequence corresponding to the word vector sequence and a semantic vector of the target text.
In the embodiment of the invention, the word vector sequence may be input into a pre-trained semantic coding model, which generates a hidden layer vector for each word in the target text as the word sense vector of that word, and uses the hidden layer vectors to generate and output the semantic vector of the target text.
In one possible embodiment, the semantic coding model may include a third neural network based on self-attention;
the processing of the word vector sequence by using a pre-trained semantic coding model to obtain a word sense vector sequence corresponding to the word vector sequence and a semantic vector of the target text may include:
processing the word vector sequence based on the third neural network to obtain word sense vectors of all words in the target text;
generating a word sense vector sequence corresponding to the word vector sequence according to the word sense vector of each word in the target text;
and aggregating the word sense vector sequence to obtain the semantic vector of the target text.
In the embodiment of the present invention, the third neural network may be any of various neural networks, for example, a Bidirectional Long Short-Term Memory (Bi-LSTM) network, a Recurrent Neural Network (RNN), a Convolutional Neural Network (CNN), or a network mixing RNN, CNN, and self-attention, which is not limited in the embodiment of the present invention. The word sense vector sequence refers to the ordered word sense vectors, and the ordering of the word sense vectors corresponds to the front-to-back order of the words in the target text.
Illustratively, referring to FIG. 3 of the drawings in conjunction with the description, the structure of a neural network model provided by an embodiment of the present invention is shown. The neural network model may include a semantic coding model 310, where the semantic coding model 310 may include a Bi-LSTM network. The word vector sequence {E_w1, E_w2, E_w3, …, E_wn} of the target text is input into the Bi-LSTM network, which outputs the word sense vector sequence {h_1, h_2, h_3, …, h_n} consisting of the word sense vectors of the words in the target text, and the semantic vector h_s of the target text.
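Illustratively, a minimal PyTorch sketch of such a Bi-LSTM semantic coding model is given below; mean pooling to obtain the semantic vector h_s and the layer sizes are assumptions, since the embodiment does not fix how the hidden layer vectors are aggregated.

```python
import torch
import torch.nn as nn

class SemanticEncoder(nn.Module):
    """Semantic coding model sketch: Bi-LSTM over the word vector sequence."""
    def __init__(self, embed_dim: int = 300, hidden_dim: int = 128):
        super().__init__()
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, word_vecs: torch.Tensor):
        # word_vecs: (batch, n, embed_dim) -> h: (batch, n, 2 * hidden_dim),
        # one word sense vector per word in the target text.
        h, _ = self.bilstm(word_vecs)
        # Aggregate the word sense vectors into the semantic vector h_s;
        # mean pooling is an assumption.
        hs = h.mean(dim=1)
        return h, hs
```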
S230: and inputting the word vector sequence, the word meaning vector sequence and the semantic vector into a pre-trained emotion generation model to obtain the emotion vector of the target text, wherein the emotion generation model is a neural network model based on attention.
In the embodiment of the invention, the emotion generation model can generate a context-based word emotion vector for each word of the target text according to the word vector sequence and the word sense vector sequence, and generate the emotion vector of the target text from the word emotion vectors and the semantic vector using an attention mechanism.
In practice, the same word may represent different emotions in different sentences. For example, "punishment" expresses the emotion of "anger" in the sentence "The bad guy finally got the punishment he deserved!", but expresses the emotion of "sadness" in "I broke the vase today and was punished by my mother." The word emotion vectors generated in the embodiment of the invention are dynamic and can adjust the emotion of a word according to its context, so that the same word can yield different word emotion vectors in different scenarios; compared with the fixed labeling of the prior art, this can effectively improve the accuracy of emotion classification.
In one possible embodiment, the emotion generation model may include a first neural network based on self-attention and an attention network;
Referring to fig. 4 of the drawings, the inputting the word vector sequence, the word sense vector sequence, and the semantic vector into a pre-trained emotion generation model to obtain the emotion vector of the target text may include:
s410: and analyzing the word vector sequence and the word sense vector sequence based on the first neural network to obtain word emotion vectors of all words in the target text.
S420: and generating a word emotion vector sequence of the target text according to the word emotion vector of each word in the target text.
S430: analyzing the semantic vector and the word emotion vector sequence based on the attention network, and polymerizing the word emotion vector sequence into an emotion vector of the target text.
In the embodiment of the present invention, the first neural network may be any of various neural networks, for example, a bidirectional long short-term memory network or a convolutional neural network, which is not limited in the embodiment of the present invention. The word emotion vector sequence refers to the ordered word emotion vectors, and the ordering of the word emotion vectors corresponds to the front-to-back order of the words in the target text. The word emotion vectors of the words in the target text can also be used as byproducts for subsequent natural language processing tasks.
Illustratively, in conjunction with FIG. 3 of the description, the neural network model may further include an emotion generation model 320, where the emotion generation model 320 may include a Bi-LSTM network, a normalization layer, and an attention network. The word vector sequence {E_w1, E_w2, E_w3, …, E_wn} of the target text and the word sense vector sequence {h_1, h_2, h_3, …, h_n} are input into the Bi-LSTM network, which outputs the word emotion vectors e_1, e_2, e_3, …, e_n of the words in the target text, forming the word emotion vector sequence {e_1, e_2, e_3, …, e_n} of the target text. The word emotion vector sequence is input into the normalization layer for normalization and then input, together with the semantic vector h_s of the target text, into the attention network to obtain the emotion vector e_s of the target text.
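Illustratively, a minimal PyTorch sketch of such an emotion generation model is given below; dot-product attention with h_s as the query, the use of LayerNorm as the normalization layer, and the layer sizes are assumptions, since the embodiment only requires an attention-based aggregation of the normalized word emotion vectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionGenerator(nn.Module):
    """Emotion generation model sketch: Bi-LSTM + normalization + attention."""
    def __init__(self, embed_dim: int = 300, sense_dim: int = 256,
                 emo_dim: int = 256):
        super().__init__()
        # Bi-LSTM over the concatenated word vectors and word sense vectors.
        self.bilstm = nn.LSTM(embed_dim + sense_dim, emo_dim // 2,
                              batch_first=True, bidirectional=True)
        self.norm = nn.LayerNorm(emo_dim)  # the normalization layer

    def forward(self, word_vecs, sense_vecs, hs):
        x = torch.cat([word_vecs, sense_vecs], dim=-1)
        e, _ = self.bilstm(x)      # word emotion vectors e_1, ..., e_n
        e = self.norm(e)
        # Attention with the semantic vector h_s as the query (dot-product
        # scoring is an assumption) aggregates the sequence into e_s.
        scores = torch.bmm(e, hs.unsqueeze(-1)).squeeze(-1)  # (batch, n)
        weights = F.softmax(scores, dim=-1)
        es = torch.bmm(weights.unsqueeze(1), e).squeeze(1)   # (batch, emo_dim)
        return e, es
```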
S240: and inputting the semantic vector and the emotion vector into a pre-trained emotion classification model to obtain an emotion classification result of the target text.
In the embodiment of the invention, an emotion set may be predetermined for the current application scenario, and the emotion set may include emotion types common in that scenario, for example, happiness, sadness, surprise, anger, disgust, fear, and the like. The emotion classification model can determine the component of the target text on each emotion type in the emotion set according to the semantic vector, and finally determine the emotion classification result of the target text in combination with the emotion vector.
It should be noted that different emotion sets may be provided in different application scenarios, and emotion types and numbers of emotion types of emotion sets in different scenarios may also be different, which is not limited by the embodiment of the present invention.
In one possible embodiment, the emotion classification model may include a second neural network and a classification network;
the inputting the semantic vector and the emotion vector into a pre-trained emotion classification model, and obtaining the emotion classification result of the target text may include:
processing the semantic vector based on the second neural network to obtain a classification feature vector of the target text;
and analyzing the classification feature vector and the emotion vector based on the classification network to obtain an emotion classification result of the target text.
In the embodiment of the present invention, the second neural network may be any of various neural networks, for example, a multi-layer perceptron or a convolutional neural network, and the classification network may also be any of various networks, for example, a Softmax classification network, which is not limited in the embodiment of the present invention.
Illustratively, in conjunction with fig. 3 of the specification, the neural network model may further include an emotion classification model 330, where the emotion classification model 330 may include an MLP network and a Softmax classification network. The semantic vector h_s of the target text is input into the MLP network, which outputs the classification feature vector of the target text; the classification feature vector and the emotion vector e_s of the target text are then input into the Softmax classification network, which outputs the emotion classification result of the target text.
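Illustratively, a minimal PyTorch sketch of such an emotion classification model is given below; concatenating the classification feature vector with the emotion vector e_s before the output layer is an assumption about how the two are combined, and the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class EmotionClassifier(nn.Module):
    """Emotion classification model sketch: MLP + Softmax classification."""
    def __init__(self, sem_dim: int = 256, emo_dim: int = 256,
                 num_classes: int = 6):
        super().__init__()
        self.mlp = nn.Sequential(          # second neural network (MLP)
            nn.Linear(sem_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.out = nn.Linear(128 + emo_dim, num_classes)

    def forward(self, hs, es):
        feat = self.mlp(hs)                # classification feature vector
        # Combine the classification feature vector with the emotion vector
        # e_s and score the emotion classes; return raw logits.
        return self.out(torch.cat([feat, es], dim=-1))
```

In use, `classifier(hs, es).softmax(dim=-1)` yields a distribution over the emotion set, and the highest-probability type can be taken as the emotion classification result.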
In one possible embodiment, the method may further include determining a response text according to the emotion classification result of the target text and feeding back the response text to the terminal. In an intelligent question answering system, a corresponding answer text set may be constructed in advance for each emotion classification result; after the emotion classification result of the target text is obtained, a matching answer text may be selected from the corresponding answer text set based on the emotion classification result and fed back to the terminal as the response text. For example, text matching detection algorithms may be used for the selection, which is not specifically limited in the embodiments of the present invention.
The following describes an intelligent question-answering system as an example, where the intelligent question-answering system includes a terminal and a server, and when implementing intelligent question-answering, the intelligent question-answering system may include the following steps:
1) The terminal acquires target texts input by a user through a client, and sends a question-answer request to a server, wherein the question-answer request comprises a text identifier and the target texts, and the text identifier is used for marking the target texts, so that different target texts are distinguished.
2) After receiving the question-answer request, the server carries out emotion classification on the target text based on the semantic coding model, the emotion generation model and the emotion classification model by using the emotion classification method provided by the embodiment of the method of the invention to obtain an emotion classification result, determines a corresponding answer text according to the emotion classification result, and returns the answer text to the terminal.
3) And after receiving the answer text, the terminal can display the answer text through an interactive interface of the client.
The interaction process based on the intelligent question answering system is described by taking the user interface shown in fig. 5 as an example. The user may enter and send the target text through an operation control 52 in the conversation interface 51 for entering the message text and an operation control 53 for triggering the sending of the message text. After receiving the target text, the server performs emotion classification on the target text, determines a corresponding answer text according to the emotion classification result, and returns the answer text to the terminal. The terminal may present the answer text through the session interface 51. For example, the target text input by the user may be "I feel so depressed today, not happy." After the emotion classification result is obtained by the emotion classification method provided by the embodiment of the invention, the answer text obtained based on the emotion classification result may be "Don't cry, there is always sunshine after the rain."
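Illustratively, a minimal sketch of looking up a response text from pre-built answer sets keyed by emotion classification result is given below; the emotion labels, answer texts, and fallback reply are illustrative assumptions.

```python
import random

# Illustrative answer text sets keyed by emotion classification result.
answer_sets = {
    "sad":   ["Don't cry, there is always sunshine after the rain."],
    "happy": ["Glad to hear that!"],
}

def pick_response(emotion_label: str) -> str:
    # Fall back to a neutral reply for emotions without a prepared set.
    candidates = answer_sets.get(emotion_label, ["I see."])
    return random.choice(candidates)
```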
Besides the above examples, the method provided by the embodiment of the invention can be applied to application evaluation analysis, network product evaluation analysis, online business intelligent customer service, voice assistants, and the like, endowing the machine with a certain emotional capability.
Referring to the description, FIG. 6 is a flow chart illustrating a model training method provided by one embodiment of the present invention. As shown in fig. 6, the text emotion classification method may further include a step of training the semantic coding model, the emotion generation model, and the emotion classification model; the step of training the semantic coding model, the emotion generation model, and the emotion classification model may include:
s610: the method comprises the steps of constructing a preset neural network model, wherein the preset neural network model comprises a preset semantic coding model, a preset emotion generation model and a preset emotion classification model, the preset semantic coding model comprises a third neural network based on self-attention, the preset emotion generation model comprises a first neural network and an attention network based on self-attention, and the preset emotion classification model comprises a second neural network and a classification network.
In one possible embodiment, the first neural network comprises a bidirectional long short-term memory network or a convolutional neural network, the second neural network comprises a multi-layer perceptron network or a convolutional neural network, and the third neural network comprises a bidirectional long short-term memory network, a recurrent neural network, or a convolutional neural network. Illustratively, the preset semantic coding model may include a Bi-LSTM network, the preset emotion generation model may include a Bi-LSTM network, a normalization layer, and an attention network, and the preset emotion classification model may include an MLP network and a Softmax classification network.
S620: and acquiring a training text set, wherein the training text set comprises a plurality of training texts and emotion labels corresponding to the training texts.
In the embodiment of the invention, the training texts in the training text set may be corpus texts in the target field (for example, the intelligent question answering field) labeled with emotion type labels, and the labels may be manually annotated labels marking the emotion types of the training texts. In practical applications, a public dataset may be used as the training text set for training the semantic coding model, the emotion generation model, and the emotion classification model.
S630: training the preset neural network model by using training texts in the training text set, and adjusting model parameters of the preset semantic coding model, the preset emotion generation model and the preset emotion classification model in the training process until an output result of the preset neural network model is matched with an emotion label of the training text to obtain the semantic coding model, the emotion generation model and the emotion classification model.
In the embodiment of the invention, machine learning training can be performed based on the BERT model, LSTM networks, and MLP networks; model parameters of the preset semantic coding model, the preset emotion generation model, and the preset emotion classification model are adjusted according to the value of a loss function during training until the loss function converges, after which the preset semantic coding model corresponding to the current model parameters is used as the trained semantic coding model, the preset emotion generation model corresponding to the current model parameters is used as the trained emotion generation model, and the preset emotion classification model corresponding to the current model parameters is used as the trained emotion classification model.
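Illustratively, a minimal sketch of one training step is given below, reusing the SemanticEncoder, EmotionGenerator, and EmotionClassifier sketches above; the Adam optimizer and cross-entropy loss are assumptions, since the embodiment does not fix the loss function or optimization method.

```python
import torch
import torch.nn as nn

# Compose the three sketch models defined above end-to-end.
encoder, generator, classifier = (SemanticEncoder(), EmotionGenerator(),
                                  EmotionClassifier())
params = (list(encoder.parameters()) + list(generator.parameters())
          + list(classifier.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)  # optimizer is an assumption
loss_fn = nn.CrossEntropyLoss()                # loss is an assumption

def train_step(word_vecs: torch.Tensor, labels: torch.Tensor) -> float:
    h, hs = encoder(word_vecs)           # word sense vectors + semantic vector
    _, es = generator(word_vecs, h, hs)  # emotion vector of the text
    logits = classifier(hs, es)
    loss = loss_fn(logits, labels)       # match output to the emotion labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```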
Verification on the public dataset SemEval 2018 shows that the emotion classification method provided by the embodiment of the invention outperforms other existing methods: the accuracy on the test set of the dataset reaches 59.3%, an improvement of 5% over the method proposed by Baziotis et al. (2018).
In summary, according to the text emotion classification method disclosed by the invention, a context-based word emotion vector is dynamically generated for each word of the target text by the pre-trained emotion generation model, and the word emotion vectors are introduced into text emotion classification, which can effectively improve the accuracy of emotion classification. Because the word emotion vector is dynamically generated for each word, the problem of one word carrying multiple emotions can be handled effectively, in contrast to fixed labeling. The text emotion classification method does not require an external emotion dictionary during model training, avoiding the mismatching and dictionary coverage problems caused by inconsistent labels. Because it does not rely on an external dictionary, the text emotion classification method of the invention also adapts better to new tasks and new data.
Referring to fig. 7 of the drawings, a structure of a text emotion classification device according to an embodiment of the present invention is shown. As shown in fig. 7, the apparatus may include:
The word vector sequence generating module 710 is configured to obtain a target text to be classified, and process the target text to obtain a word vector sequence of the target text;
the semantic vector generation module 720 is configured to process the word vector sequence by using a pre-trained semantic coding model to obtain a word sense vector sequence corresponding to the word vector sequence and a semantic vector of the target text;
the emotion vector generation module 730 is configured to input the word vector sequence, the word sense vector sequence, and the semantic vector into a pre-trained emotion generation model to obtain an emotion vector of the target text, where the emotion generation model is a neural network model based on attention;
and the emotion classification module 740 is used for inputting the semantic vector and the emotion vector into a pre-trained emotion classification model to obtain an emotion classification result of the target text.
In one possible embodiment, the apparatus may further comprise a model training module 750, the model training module 750 being configured to train the semantic coding model, the emotion generation model, and the emotion classification model; referring to fig. 8 of the specification, the model training module 750 may include:
A model construction unit 751, configured to construct a preset neural network model, where the preset neural network model includes a preset semantic coding model, a preset emotion generation model, and a preset emotion classification model, the preset semantic coding model includes a third neural network based on self-attention, the preset emotion generation model includes a first neural network based on self-attention and an attention network, and the preset emotion classification model includes a second neural network and a classification network;
a training text set obtaining unit 752, configured to obtain a training text set, where the training text set includes a plurality of training texts and emotion tags corresponding to the training texts;
the model training unit 753 is configured to train the preset neural network model by using training texts in the training text set, and adjust model parameters of the preset semantic coding model, the preset emotion generation model and the preset emotion classification model in the training process until an output result of the preset neural network model is matched with an emotion label of the training text, so as to obtain the semantic coding model, the emotion generation model and the emotion classification model.
In one possible embodiment, the first neural network comprises a bidirectional long short-term memory network or a convolutional neural network, the second neural network comprises a multi-layer perceptron network or a convolutional neural network, and the third neural network comprises a bidirectional long short-term memory network, a recurrent neural network, or a convolutional neural network.
It should be noted that, the device embodiment provided by the embodiment of the present invention and the method embodiment described above are based on the same inventive concept. In the apparatus provided in the above embodiment, when implementing the functions thereof, only the division of the above functional modules is used as an example, in practical application, the above functional allocation may be implemented by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above.
The embodiment of the invention also provides electronic equipment, which comprises a processor and a memory, wherein at least one instruction or at least one section of program is stored in the memory, and the at least one instruction or the at least one section of program is loaded and executed by the processor to realize the text emotion classification method provided by the embodiment of the method.
The memory may be used to store software programs and modules, and the processor executes the software programs and modules stored in the memory to perform various functional applications and emotion classification. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs required for functions, and the like, and the data storage area may store data created according to the use of the device, etc. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
The method embodiments provided by the embodiments of the present invention may be performed in a terminal, a server, or a similar computing device, i.e., the electronic device may include a terminal, a server, or a similar computing device. Taking operation on a server as an example, fig. 9 shows a schematic structural diagram of a server running the text emotion classification method according to an embodiment of the present invention. The server 900 may vary considerably in configuration or performance and may include one or more Central Processing Units (CPUs) 910 (e.g., one or more processors), memory 930, and one or more storage media 920 (e.g., one or more mass storage devices) storing applications 923 or data 922. The memory 930 and the storage medium 920 may be transitory or persistent storage. The program stored on the storage medium 920 may include one or more modules, each of which may include a series of instruction operations on the server. Still further, the central processing unit 910 may be configured to communicate with the storage medium 920 and execute, on the server 900, the series of instruction operations in the storage medium 920. The server 900 may also include one or more power supplies 960, one or more wired or wireless network interfaces 950, one or more input/output interfaces 940, and/or one or more operating systems 921, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The input-output interface 940 may be used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the server 900. In one example, the input-output interface 940 includes a Network Interface Controller (NIC) that may be connected to other network devices through a base station to communicate with the internet. In one example, the input-output interface 940 may be a Radio Frequency (RF) module for communicating with the internet wirelessly, which may use any communication standard or protocol including, but not limited to, Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
It will be appreciated by those of ordinary skill in the art that the architecture shown in fig. 9 is merely illustrative, and that the server 900 may also include more or fewer components than those shown in fig. 9, or have a different configuration than that shown in fig. 9.
An embodiment of the present invention also provides a computer readable storage medium having at least one instruction or at least one program stored therein, where the at least one instruction or at least one program is loaded and executed by a processor to implement a text emotion classification method as provided in the above method embodiment.
Alternatively, in an embodiment of the present invention, the above-mentioned computer-readable storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other various media capable of storing program code.
An embodiment of the present invention also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions to cause the computer device to perform the text emotion classification method provided in the various alternative implementations of the method embodiments described above.
It should be noted that: the sequence of the embodiments of the present invention is only for description, and does not represent the advantages and disadvantages of the embodiments. And the foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and identical and similar parts of each embodiment are all referred to each other, and each embodiment mainly describes differences from other embodiments. In particular, for the device, terminal and server embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and references to the parts of the description of the method embodiments are only needed.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description covers only preferred embodiments of the invention and is not intended to limit the invention to the precise form disclosed; any modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.