Disclosure of Invention
In view of the above, an object of the embodiments of the present invention is to provide a method and an apparatus for identifying a named entity, so as to solve the above technical problems.
In a first aspect, an embodiment of the present invention provides a method for identifying a named entity, including:
acquiring a text to be recognized, wherein the text to be recognized comprises a plurality of words;
inputting the text to be recognized into a word embedding model to obtain a word vector corresponding to each word in the text to be recognized, and inputting the text to be recognized into a pinyin embedding model to obtain a pinyin vector corresponding to each word;
combining each word vector and the corresponding pinyin vector to obtain a combined vector, inputting the combined vectors respectively corresponding to all the words into a bidirectional long short-term memory network (BiLSTM) for semantic coding, and obtaining semantic information features corresponding to the text to be recognized;
and obtaining a corresponding entity label sequence in the text to be recognized according to the semantic information features.
Further, the obtaining a corresponding entity label sequence in the text to be recognized according to the semantic information features includes:
normalizing the semantic information features by utilizing a softmax function to obtain an emission probability value of each word in the text to be recognized for each label;
and obtaining, for each word, the label corresponding to the maximum emission probability value, and forming the entity label sequence from the labels corresponding to all the words.
Further, the inputting the text to be recognized into a word embedding model to obtain a word vector corresponding to each word in the text to be recognized includes:
and inputting the text to be recognized into the word embedding model, and performing one-hot encoding mapping on each word by the word embedding model to obtain a word vector corresponding to each word, wherein the length of the word vector is fixed.
Further, the BiLSTM comprises a forward long short-term memory network (LSTM) and a backward LSTM; correspondingly, the step of inputting the combined vectors into the bidirectional long short-term memory network BiLSTM for semantic coding to obtain the semantic information features corresponding to the text to be recognized includes:
the forward LSTM performs feature extraction on the text to be recognized to obtain a first hidden state sequence;
the backward LSTM performs feature extraction on the text to be recognized to obtain a second hidden state sequence;
and splicing the first hidden state sequence and the second hidden state sequence according to the order of each word in the text to be recognized to obtain the semantic information features.
Further, the obtaining a corresponding entity label sequence in the text to be recognized according to the semantic information features includes:
normalizing the semantic information features by utilizing a softmax function to obtain an emission probability value of each word in the text to be recognized for each label;
and inputting the emission probability values into a conditional random field CRF, and obtaining the entity label sequence according to the emission probability value of each label and a transition probability matrix in the CRF.
Further, the inputting the emission probability values into a conditional random field CRF, and obtaining the entity label sequence according to the emission probability value of each label and a transition probability matrix in the CRF includes:
according to $s(x,y)=\sum_{i=1}^{n}\left(P_{i,y_i}+A_{y_{i-1},y_i}\right)$, calculating and obtaining a total probability value of the text to be recognized;
and according to $p(y\mid x)=\frac{e^{s(x,y)}}{\sum_{\tilde{y}\in Y_x}e^{s(x,\tilde{y})}}$, normalizing the total probability value to obtain the entity label sequence; wherein $P_{i,y_i}$ is the emission probability value corresponding to the ith word in the text to be recognized, $n$ is the number of words in the text to be recognized, $A_{y_{i-1},y_i}$ is the transition probability value from the label of the (i-1)th word to the label of the ith word, and $Y_x$ denotes all candidate label sequences; $n$ and $i$ are positive integers, and $i \le n$.
Further, the method further comprises:
and constructing the pinyin embedding model through a convolutional neural network and a maximum pooling method.
In a second aspect, an embodiment of the present invention provides a named entity recognition apparatus, including:
an acquisition module, used for acquiring a text to be recognized, wherein the text to be recognized comprises a plurality of words;
a vector recognition module, used for inputting the text to be recognized into a word embedding model to obtain a word vector corresponding to each word in the text to be recognized, and for inputting the text to be recognized into a pinyin embedding model to obtain a pinyin vector corresponding to each word;
a semantic coding module, used for combining each word vector and the corresponding pinyin vector to obtain a combined vector, inputting the combined vectors corresponding to all the words into a bidirectional long short-term memory network (BiLSTM) for semantic coding, and obtaining semantic information features corresponding to the text to be recognized;
and a labeling module, used for obtaining the corresponding entity label sequence in the text to be recognized according to the semantic information features.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a processor, a memory, and a bus, wherein,
the processor and the memory are communicated with each other through the bus;
the memory stores program instructions executable by the processor, and the processor, when calling the program instructions, is capable of performing the method steps of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium, including:
the non-transitory computer readable storage medium stores computer instructions that cause the computer to perform the method steps of the first aspect.
According to the embodiment of the invention, the word vector and the pinyin vector corresponding to the text to be recognized are respectively obtained by the word embedding model and the pinyin embedding model, and the word vector and the pinyin vector are combined and input into the BiLSTM for recognition, so that the deficiency of word vector representation can be well compensated, and the recognition accuracy is greatly improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Fig. 1 is a schematic flow chart of a named entity identification method according to an embodiment of the present invention, as shown in fig. 1, the method includes:
step 101: acquiring a text to be recognized, wherein the text to be recognized comprises a plurality of words;
In a specific implementation process, the recognition device first obtains a text to be recognized. It should be noted that the text to be recognized may be a passage of text, and the text includes a plurality of Chinese characters. Named entity recognition is to recognize, in the text, the words that belong to preset labels, where the preset labels may be: a person name, a place name, a government body, or a weapon, and may be defined in advance according to the actual circumstances. The embodiment of the invention adopts the BIEO label set, in which B represents the beginning (Begin) of an entity, I represents the middle (Intermediate) of an entity, E represents the end (End) of an entity, and O represents a non-entity (Other). For example, for person-name recognition, taking the sentence "学习雷锋好榜样" ("learn from the good example of Lei Feng") as an example, the tagging sequence is "O O B E O O O", and thus the person name "雷锋" (Lei Feng) is extracted.
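To make the BIEO decoding concrete, the following is a minimal Python sketch of extracting entities from a tag sequence; the function name and the example data are illustrative only, not part of the claimed method.

```python
# A minimal sketch of BIEO decoding; assumes one tag per character.

def extract_entities(chars, tags):
    """Collect character spans tagged B...E from a BIEO sequence."""
    entities, current = [], []
    for ch, tag in zip(chars, tags):
        if tag == "B":                  # entity begins
            current = [ch]
        elif tag in ("I", "E") and current:
            current.append(ch)          # entity continues or ends
            if tag == "E":
                entities.append("".join(current))
                current = []
        else:                           # "O": outside any entity
            current = []
    return entities

chars = list("学习雷锋好榜样")
tags = ["O", "O", "B", "E", "O", "O", "O"]
print(extract_entities(chars, tags))    # ['雷锋']
```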
Step 102: inputting the text to be recognized into a word embedding model to obtain a word vector corresponding to each word in the text to be recognized, and inputting the text to be recognized into a pinyin embedding model to obtain a pinyin vector corresponding to each word;
In the specific implementation process, the device comprises a recognition model, wherein the recognition model comprises a word embedding model, a pinyin embedding model, and a BiLSTM network, and further comprises a CRF layer; the word embedding model and the pinyin embedding model form an embedding layer, and the BiLSTM network forms a BiLSTM layer. The device first inputs the text to be recognized into the word embedding model and the pinyin embedding model; the word embedding model generates a word vector corresponding to each word in the text to be recognized, and the pinyin embedding model generates a pinyin vector corresponding to each word in the text to be recognized. It should be noted that the word embedding model works by performing one-hot encoding mapping on each word in the received text to be recognized, thereby obtaining a low-dimensional dense word vector, and the length of the word vector is fixed. Moreover, the length of the pinyin vector is also fixed; if the pinyin of a word is shorter than the set length, it can be complemented with PADDING on both sides, for example: <PADDING, w, a, n, g, PADDING>.
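As an illustration of the padding step, here is a minimal Python sketch that centers a pinyin string in a fixed-length window; the set length of 6 and the even front/back split are assumptions made to match the example above.

```python
# A minimal sketch of fixed-length pinyin padding; the window length
# and the front/back split rule are illustrative assumptions.

PAD, SET_LEN = "<PADDING>", 6

def pad_pinyin(pinyin: str) -> list:
    """Pad the letters of a pinyin string to SET_LEN on both sides."""
    letters = list(pinyin)
    total = SET_LEN - len(letters)      # how many PAD tokens are needed
    left = total // 2
    return [PAD] * left + letters + [PAD] * (total - left)

print(pad_pinyin("wang"))
# ['<PADDING>', 'w', 'a', 'n', 'g', '<PADDING>']
```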
Step 103: combining each word vector and the corresponding pinyin vector to obtain a combined vector, inputting the combined vectors respectively corresponding to all the words into a bidirectional long short-term memory network (BiLSTM) for semantic coding, and obtaining semantic information features corresponding to the text to be recognized;
In a specific implementation process, after the word vector and the pinyin vector corresponding to each word are obtained, the word vector and the pinyin vector are connected to obtain a combined vector; specifically, the pinyin vector is appended after the word vector. Assuming that the length of the word vector is p and the length of the pinyin vector is q, the embedding layer outputs the combined vector w = p ∥ q, where ∥ denotes vector concatenation, so the length of w is p + q. The combined vectors are taken as the input of the BiLSTM, and semantic coding is performed in the BiLSTM, thereby obtaining the semantic information features corresponding to the text to be recognized. It should be noted that the semantic information features refer to the unnormalized score of each word in the text to be recognized for each label.
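A minimal sketch of this combination step, with illustrative vector lengths; it demonstrates only that appending the pinyin vector after the word vector yields a combined vector of length p + q.

```python
# A minimal sketch of combining a word vector and a pinyin vector;
# the dimensions p and q are illustrative assumptions.

import numpy as np

p, q = 100, 50                        # word / pinyin vector lengths
word_vec = np.random.randn(p)         # from the word embedding model
pinyin_vec = np.random.randn(q)       # from the pinyin embedding model

# Append the pinyin vector after the word vector: w = p || q.
combined = np.concatenate([word_vec, pinyin_vec])
assert combined.shape == (p + q,)     # combined length is p + q
```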
Step 104: obtaining a corresponding entity label sequence in the text to be recognized according to the semantic information features.
In a specific implementation process, after the device obtains the semantic information features, it takes, for each word, the label with the highest score as the label of that word, so that the corresponding entity label sequence in the text to be recognized can be obtained.
According to the embodiment of the invention, the word vector and the pinyin vector corresponding to the text to be recognized are respectively obtained by the word embedding model and the pinyin embedding model, and the word vector and the pinyin vector are combined and input into the BiLSTM for recognition, so that the deficiency of word vector representation can be well compensated, and the recognition accuracy is greatly improved.
On the basis of the above embodiment, the obtaining a corresponding entity label sequence in the text to be recognized according to the semantic information features includes:
normalizing the semantic information features by utilizing a softmax function to obtain an emission probability value of each word in the text to be recognized for each label;
and obtaining, for each word, the label corresponding to the maximum emission probability value, and forming the entity label sequence from the labels corresponding to all the words.
In a specific implementation process, normalization is performed on the obtained semantic information features by utilizing a softmax function to obtain an emission probability value of each word in the text to be recognized for each label, so that the scores corresponding to each word in the semantic information features all lie between [0, 1]. For each word, the label with the highest emission probability value is taken as the label of that word, and the labels corresponding to all the words in the text to be recognized form the entity label sequence. It should be noted that the label with the highest score can also be taken directly from the semantic information features as the label of each word, and the labels of all the words then form the entity label sequence.
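A minimal sketch of this softmax-plus-argmax decoding, using an illustrative label set and made-up BiLSTM scores:

```python
# A minimal sketch of softmax normalization followed by per-word
# argmax decoding; labels and scores are illustrative.

import numpy as np

labels = ["B", "I", "E", "O"]
# Unnormalized BiLSTM scores: one row per word, one column per label.
scores = np.array([[0.2, 0.1, 0.3, 2.0],
                   [1.8, 0.2, 0.4, 0.1],
                   [0.3, 0.2, 2.1, 0.5]])

# Softmax turns each row into emission probabilities in [0, 1].
exp = np.exp(scores - scores.max(axis=1, keepdims=True))
emission = exp / exp.sum(axis=1, keepdims=True)

# Take the label with the largest emission probability for each word.
print([labels[j] for j in emission.argmax(axis=1)])  # ['O', 'B', 'E']
```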
On the basis of the above embodiment, the BiLSTM includes a forward long short-term memory network (LSTM) and a backward LSTM; correspondingly, the step of inputting the combined vectors into the bidirectional long short-term memory network BiLSTM for semantic coding to obtain the semantic information features corresponding to the text to be recognized includes:
the forward LSTM performs feature extraction on the text to be recognized to obtain a first hidden state sequence; the backward LSTM performs feature extraction on the text to be recognized to obtain a second hidden state sequence; and the first hidden state sequence and the second hidden state sequence are spliced according to the order of each word in the text to be recognized to obtain the semantic information features.
In a specific implementation, the BiLSTM comprises a forward long short-term memory network (LSTM) and a backward LSTM. After the combined vectors $(x_1, x_2, \ldots, x_n)$ are input into the BiLSTM, the forward LSTM operates on the combined vectors to obtain a first hidden state sequence $(\overrightarrow{h_1}, \overrightarrow{h_2}, \ldots, \overrightarrow{h_n})$, and the backward LSTM operates on the combined vectors to obtain a second hidden state sequence $(\overleftarrow{h_1}, \overleftarrow{h_2}, \ldots, \overleftarrow{h_n})$. The hidden states output at each position are spliced position by position to obtain the semantic information features $h_i = [\overrightarrow{h_i}; \overleftarrow{h_i}]$. It should be noted that the position refers to the position of each word in the text to be recognized.
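A minimal PyTorch sketch of this bidirectional encoding, with illustrative sizes; nn.LSTM with bidirectional=True concatenates the forward and backward hidden states at every position, which matches the position-wise splicing described above.

```python
# A minimal sketch of the BiLSTM encoding step; all sizes are
# illustrative assumptions, not values from the original text.

import torch
import torch.nn as nn

n, w, hidden = 7, 150, 64          # sentence length, combined-vector size, LSTM size
x = torch.randn(1, n, w)           # one sentence: combined vectors (x1, ..., xn)

bilstm = nn.LSTM(input_size=w, hidden_size=hidden,
                 bidirectional=True, batch_first=True)

h, _ = bilstm(x)                   # forward and backward states, spliced per position
assert h.shape == (1, n, 2 * hidden)   # h_i = [forward h_i ; backward h_i]
```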
On the basis of the above embodiment, the obtaining a corresponding entity label sequence in the text to be recognized according to the semantic information features includes:
normalizing the semantic information features by utilizing a softmax function to obtain an emission probability value of each word in the text to be recognized for each label; and inputting the emission probability values into a conditional random field CRF, and obtaining the entity label sequence according to the emission probability value of each label and the transition probability matrix in the CRF.
In a specific implementation process, the recognition result of the text to be recognized obtained by the BiLSTM alone may be invalid, for example: when the labels of two consecutive words are both B, the recognition is invalid. In this case, normalization can be performed on the obtained semantic information features by utilizing a softmax function to obtain an emission probability value of each word in the text to be recognized for each label, so that the scores corresponding to each word all lie between [0, 1]. The emission probability values are input into the CRF. The parameter included in the CRF is a matrix A of size $(k+2) \times (k+2)$, where k is the number of preset labels and $A_{ij}$ denotes the transition score from the ith label to the jth label; 2 is added because a start state is added for the beginning of the sentence and an end state is added for the end of the sentence. A label sequence whose length equals that of the text to be recognized is denoted $y = (y_1, y_2, \ldots, y_n)$; the CRF model then scores the label sequence y of the text x to be recognized as $s(x,y)=\sum_{i=1}^{n}\left(P_{i,y_i}+A_{y_{i-1},y_i}\right)$. That is, the total probability value of the whole text to be recognized equals the sum of the scores of all positions, and the score of each position consists of two parts: one part is determined by the $p_i$ output by the BiLSTM, where $p_i$ is obtained through the softmax calculation from the semantic information features produced by the BiLSTM; the other part is determined by the state transition matrix A of the CRF. After normalization by softmax, the probability is $p(y \mid x)=\frac{e^{s(x,y)}}{\sum_{\tilde{y} \in Y_x} e^{s(x,\tilde{y})}}$, where $Y_x$ denotes all label sequences, including both possible and impossible ones; $P_{i,y_i}$ is the emission probability value corresponding to the ith word in the text to be recognized; n is the number of words in the text to be recognized; $A_{y_{i-1},y_i}$ is the transition probability value from the label of the (i-1)th word to the label of the ith word; n and i are positive integers, and $i \le n$.
When the CRF is trained in advance, the CRF model is trained by maximizing the log-likelihood function; during prediction, the model solves for the optimal path using the Viterbi algorithm, a dynamic programming method.
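A minimal sketch of Viterbi decoding over emission and transition scores; for brevity it omits the added start and end states of the (k+2) × (k+2) matrix and covers only the k real labels.

```python
# A minimal sketch of Viterbi decoding; start/end states are omitted.

import numpy as np

def viterbi(emission, transition):
    """emission: (n, k) scores; transition[i, j]: score of label i -> label j."""
    n, k = emission.shape
    dp = np.zeros((n, k))              # best score of a path ending in label j at word t
    back = np.zeros((n, k), dtype=int) # back-pointers to the previous label
    dp[0] = emission[0]
    for t in range(1, n):
        cand = dp[t - 1][:, None] + transition + emission[t][None, :]
        back[t] = cand.argmax(axis=0)  # best previous label for each current label
        dp[t] = cand.max(axis=0)
    path = [int(dp[-1].argmax())]
    for t in range(n - 1, 0, -1):      # walk the back-pointers to recover the path
        path.append(int(back[t][path[-1]]))
    return path[::-1]                  # optimal label indices, left to right
```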
The embodiment of the invention inputs the emission probability values into the CRF for labeling, thereby solving the problem of invalid output of the BiLSTM and further improving the recognition accuracy.
FIG. 2 is a schematic diagram of a named entity recognition model provided by an embodiment of the present invention. As shown in FIG. 2, the model includes a word embedding model, a pinyin embedding model, a BiLSTM model, and a CRF layer, wherein the BiLSTM model includes a forward LSTM and a backward LSTM. Taking the text to be recognized as "我爱中国" ("I love China") as an example: first, "我爱中国" is input into the word embedding model to obtain the word vector of each word, and "我爱中国" is input into the pinyin embedding model to obtain the pinyin vector of each word; the word vector and the pinyin vector of each word are combined to obtain a combined vector; the combined vectors are then input into the forward LSTM and the backward LSTM to obtain a first hidden state sequence and a second hidden state sequence respectively; the first hidden state sequence and the second hidden state sequence are spliced to obtain the semantic information features; finally, the semantic information features are input into the CRF to label the text to be recognized, obtaining the entity label sequence.
On the basis of the above embodiment, the method further includes:
and constructing the pinyin embedding model through a convolutional neural network and a maximum pooling method.
In the specific implementation process, before the recognition, a pinyin embedding model needs to be constructed in advance.
Fig. 3 is a schematic diagram of a pinyin embedding model provided in an embodiment of the present invention. As shown in fig. 3, taking the word "网络" ("network") as an example, the pinyin of "网络" is "wangluo". Since the length of "wangluo" is smaller than the preset length, padding is applied at both the front and the back; the padded "wangluo" is then input into the pinyin embedding model, and convolution calculation is performed on it through a convolution layer. In order to avoid overfitting of the convolutional neural network, a pooling layer is added after the convolution layer, and the pinyin vector is obtained through maximum pooling.
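A minimal PyTorch sketch of such a pinyin embedding model (letter embeddings, one convolution layer, then max pooling over letter positions); the alphabet size, dimensions, and kernel width are illustrative assumptions.

```python
# A minimal sketch of a CNN + max-pooling pinyin embedding model;
# all sizes here are illustrative assumptions.

import torch
import torch.nn as nn

class PinyinEmbedding(nn.Module):
    def __init__(self, alphabet=30, letter_dim=16, out_dim=50, kernel=3):
        super().__init__()
        self.letters = nn.Embedding(alphabet, letter_dim)    # one vector per letter/PAD
        self.conv = nn.Conv1d(letter_dim, out_dim, kernel, padding=1)

    def forward(self, ids):                    # ids: (batch, padded pinyin length)
        e = self.letters(ids).transpose(1, 2)  # (batch, letter_dim, length)
        c = torch.relu(self.conv(e))           # convolution over letter positions
        return c.max(dim=2).values             # max pooling -> (batch, out_dim)

ids = torch.randint(0, 30, (1, 8))             # padded pinyin of one word, e.g. "wangluo"
vec = PinyinEmbedding()(ids)                   # fixed-length pinyin vector
assert vec.shape == (1, 50)
```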
The embodiment of the invention constructs the pinyin embedding model through a convolutional neural network and the maximum pooling method, which prevents the convolutional neural network from overfitting, thereby obtaining a proper pinyin vector.
Fig. 4 is a schematic structural diagram of a named entity recognition apparatus according to an embodiment of the present invention, as shown in fig. 4, the apparatus includes: an acquisition module 401, a vector recognition module 402, a semantic coding module 403, and a labeling module 404, wherein,
The acquisition module 401 is configured to acquire a text to be recognized, where the text to be recognized includes multiple words; the vector recognition module 402 is configured to input the text to be recognized into a word embedding model to obtain a word vector corresponding to each word in the text to be recognized, and to input the text to be recognized into a pinyin embedding model to obtain a pinyin vector corresponding to each word; the semantic coding module 403 is configured to combine each word vector and the corresponding pinyin vector to obtain a combined vector, input the combined vectors corresponding to all the words into the bidirectional long short-term memory network BiLSTM for semantic coding, and obtain the semantic information features corresponding to the text to be recognized; the labeling module 404 is configured to obtain the corresponding entity label sequence in the text to be recognized according to the semantic information features.
On the basis of the foregoing embodiment, the labeling module 404 is specifically configured to:
normalizing the semantic information features by utilizing a softmax function to obtain an emission probability value of each word in the text to be recognized for each label;
and obtaining, for each word, the label corresponding to the maximum emission probability value, and forming the entity label sequence from the labels corresponding to all the words.
On the basis of the foregoing embodiment, the vector identification module 402 is specifically configured to:
and inputting the text to be recognized into the word embedding model, and performing one-hot encoding mapping on each word by the word embedding model to obtain a word vector corresponding to each word, wherein the length of the word vector is fixed.
On the basis of the above embodiment, the BiLSTM includes a forward long short-term memory network (LSTM) and a backward LSTM; correspondingly, the semantic coding module 403 is specifically configured to:
perform feature extraction on the text to be recognized through the forward LSTM to obtain a first hidden state sequence;
perform feature extraction on the text to be recognized through the backward LSTM to obtain a second hidden state sequence;
and splice the first hidden state sequence and the second hidden state sequence according to the order of each word in the text to be recognized to obtain the semantic information features.
On the basis of the foregoing embodiment, the labeling module 404 is specifically configured to:
normalizing the semantic information features by utilizing a softmax function to obtain an emission probability value of each word in the text to be recognized for each label;
and inputting the emission probability values into a conditional random field CRF, and obtaining the entity label sequence according to the emission probability value of each label and a transition probability matrix in the CRF.
On the basis of the foregoing embodiment, the labeling module 404 is specifically configured to:
according to $s(x,y)=\sum_{i=1}^{n}\left(P_{i,y_i}+A_{y_{i-1},y_i}\right)$, calculating and obtaining a total probability value of the text to be recognized;
and according to $p(y\mid x)=\frac{e^{s(x,y)}}{\sum_{\tilde{y}\in Y_x}e^{s(x,\tilde{y})}}$, normalizing the total probability value to obtain the entity label sequence; wherein $P_{i,y_i}$ is the emission probability value corresponding to the ith word in the text to be recognized, $n$ is the number of words in the text to be recognized, $A_{y_{i-1},y_i}$ is the transition probability value from the label of the (i-1)th word to the label of the ith word, and $Y_x$ denotes all candidate label sequences; $n$ and $i$ are positive integers, and $i \le n$.
On the basis of the above embodiment, the apparatus further includes:
and the pinyin model establishing module is used for establishing the pinyin embedded model through a convolutional neural network and a maximum pooling method.
It is clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method for the specific working process of the apparatus described above, which is not repeated here.
In summary, the embodiment of the invention respectively obtains the word vector and the pinyin vector corresponding to the text to be recognized by the word embedding model and the pinyin embedding model, and combines and inputs them into the BiLSTM for recognition, thereby well compensating for the deficiency of word vector representation and greatly improving the recognition accuracy.
The present embodiment discloses a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the method provided by the above method embodiments, for example, comprising: acquiring a text to be recognized, wherein the text to be recognized comprises a plurality of words; inputting the text to be recognized into a word embedding model to obtain a word vector corresponding to each word in the text to be recognized, and inputting the text to be recognized into a pinyin embedding model to obtain a pinyin vector corresponding to each word; combining each word vector and the corresponding pinyin vector to obtain a combined vector, inputting the combined vectors respectively corresponding to all the words into a bidirectional long short-term memory network (BiLSTM) for semantic coding, and obtaining semantic information features corresponding to the text to be recognized; and obtaining a corresponding entity label sequence in the text to be recognized according to the semantic information features.
The present embodiments provide a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the methods provided by the above method embodiments, for example, including: acquiring a text to be recognized, wherein the text to be recognized comprises a plurality of words; inputting the text to be recognized into a word embedding model to obtain a word vector corresponding to each word in the text to be recognized, and inputting the text to be recognized into a pinyin embedding model to obtain a pinyin vector corresponding to each word; combining each word vector and the corresponding pinyin vector to obtain a combined vector, inputting the combined vectors respectively corresponding to all the words into a bidirectional long short-term memory network (BiLSTM) for semantic coding, and obtaining semantic information features corresponding to the text to be recognized; and obtaining a corresponding entity label sequence in the text to be recognized according to the semantic information features.
Referring to fig. 5, fig. 5 is a block diagram of an electronic device according to an embodiment of the present invention. The electronic device may include a recognition model 501, a memory 502, a memory controller 503, a processor 504, a peripheral interface 505, an input/output unit 506, an audio unit 507, and a display unit 508.
The memory 502, the memory controller 503, the processor 504, the peripheral interface 505, the input/output unit 506, the audio unit 507, and the display unit 508 are electrically connected to one another, directly or indirectly, to realize data transmission or interaction. For example, these components may be electrically connected to each other via one or more communication buses or signal lines. The recognition model 501 includes at least one software function module, which may be stored in the memory 502 in the form of software or firmware, or solidified in an operating system (OS) of the electronic device. The processor 504 is configured to execute the executable modules stored in the memory 502, such as the software function modules or computer programs comprised by the recognition model 501.
The memory 502 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 502 is used for storing a program, and the processor 504 executes the program after receiving an execution instruction; the method executed by the server, as defined by the flow disclosed in any of the foregoing embodiments of the present invention, may be applied to the processor 504 or implemented by the processor 504.
The processor 504 may be an integrated circuit chip having signal processing capabilities. The processor 504 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), or another programmable logic device, discrete gate or transistor logic, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor 504 may be any conventional processor or the like.
The peripheral interface 505 couples various input/output devices to the processor 504 and to the memory 502. In some embodiments, the peripheral interface 505, the processor 504, and the memory controller 503 may be implemented in a single chip. In other embodiments, they may each be implemented as a separate chip.
The input/output unit 506 is used for providing input data for a user to realize the interaction of the user with the server (or the local terminal). The input/output unit 506 may be, but is not limited to, a mouse, a keyboard, and the like.
Audio unit 507 provides an audio interface to a user, which may include one or more microphones, one or more speakers, and audio circuitry.
The display unit 508 provides an interactive interface (e.g., a user interface) between the electronic device and a user, or is used for displaying image data for the user's reference. In this embodiment, the display unit 508 may be a liquid crystal display or a touch display. In the case of a touch display, the display can be a capacitive touch screen or a resistive touch screen, which supports single-point and multi-point touch operations. Supporting single-point and multi-point touch operations means that the touch display can sense touch operations from one or more locations on the touch display simultaneously, and the sensed touch operations are sent to the processor 504 for calculation and processing.
It will be appreciated that the configuration shown in fig. 5 is merely illustrative and that the electronic device may include more or fewer components than shown in fig. 5 or may have a different configuration than shown in fig. 5. The components shown in fig. 5 may be implemented in hardware, software, or a combination thereof.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.