Disclosure of Invention
Embodiments of the invention provide a text classification method, a text classification device, and a storage medium.
The technical solution of the embodiments of the invention is as follows:
a text classification method, comprising:
determining a first vector characterizing a noisy text sequence;
determining, based on the first vector, a second vector characterizing the noisy text sequence and a third vector characterizing the text sequence that is noise-free;
fusing the second vector and the third vector to generate a fused vector based on a fusion parameter, wherein the fusion parameter is determined based on the first vector;
and determining a text classification result of the noisy text sequence based on the fused vector.
In one embodiment, the determining a first vector characterizing the noisy text sequence comprises: inputting a noisy text sequence into a pre-training model for encoding to output the first vector from the pre-training model;
the determining, based on the first vector, a second vector characterizing the noisy text sequence and a third vector characterizing the text sequence that is noise-free comprises: inputting the first vector to a denoising self-encoder, the denoising self-encoder including an encoder and a decoder to perform encoding on the first vector by the encoder to output the second vector, and the decoder performing decoding on the second vector to output the third vector;
the determining the text classification result of the noisy text sequence based on the fused vector comprises: inputting the fused vector into a classification model to output the text classification result based on the fused vector by the classification model.
In one embodiment, the encoder performing encoding on the first vector to output the second vector includes:
the encoder extracts an external state of a last minimum unit in a forward direction in the first vector and an external state of a last minimum unit in a backward direction in the first vector;
the encoder concatenates the external state of the last minimum unit in the forward direction and the external state of the last minimum unit in the backward direction to obtain the second vector.
In one embodiment, the method further comprises:
inputting the first vector into a fusion parameter determination network, the fusion parameter determination network comprising an encoder of a Transformer model and a fully connected network; the encoder of the Transformer model is adapted to encode the first vector to generate a fourth vector characterizing a ratio between the characterization vector of the noisy text sequence and the characterization vector of the noise-free text sequence; the fully connected network is adapted to convert the fourth vector into the fusion parameter.
A text classification device, comprising:
a first determination module for determining a first vector characterizing a noisy text sequence;
a second determining module for determining, based on the first vector, a second vector characterizing the noisy text sequence and a third vector characterizing the text sequence without noise;
a fusion module for fusing the second vector and the third vector to generate a fused vector based on a fusion parameter, wherein the fusion parameter is determined based on the first vector;
and a third determining module, configured to determine a text classification result of the noisy text sequence based on the fused vector.
In one embodiment, the first determining module is configured to input a noisy text sequence into a pre-training model for encoding to output the first vector from the pre-training model;
the second determination module is configured to input the first vector to a denoising self-encoder, the denoising self-encoder including an encoder and a decoder to perform encoding on the first vector by the encoder to output the second vector, and the decoder to perform decoding on the second vector to output the third vector;
the third determining module is configured to input the fused vector into a classification model so as to output the text classification result based on the fused vector by the classification model.
In one embodiment, the second determination module is configured to input the first vector into the denoising self-encoder, so that the encoder of the denoising self-encoder: extracts the external state of the last minimum unit in the forward direction in the first vector and the external state of the last minimum unit in the backward direction in the first vector; and concatenates the external state of the last minimum unit in the forward direction and the external state of the last minimum unit in the backward direction to obtain the second vector.
In one embodiment, the device further comprises:
a fusion parameter determination module for inputting the first vector into a fusion parameter determination network, the fusion parameter determination network comprising an encoder of a Transformer model and a fully connected network; the encoder of the Transformer model is adapted to encode the first vector to generate a fourth vector characterizing a ratio between the characterization vector of the noisy text sequence and the characterization vector of the noise-free text sequence; the fully connected network is adapted to convert the fourth vector into the fusion parameter.
An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the text classification method according to any one of the above.
A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement a text classification method as claimed in any one of the preceding claims.
A computer program product comprising computer instructions which, when executed by a processor, implement a text classification method as claimed in any one of the preceding claims.
As can be seen from the above technical solution, in an embodiment of the present invention, a first vector characterizing a noisy text sequence is determined; determining, based on the first vector, a second vector characterizing the noisy text sequence and a third vector characterizing the noiseless text sequence; fusing the second vector and the third vector to generate a fused vector based on a fusion parameter, wherein the fusion parameter is determined based on the first vector; based on the fused vector, a text classification result of the noisy text sequence is determined. Therefore, the embodiment of the invention provides an effective method for fusing noise-free features and noise-containing features, and can simultaneously increase model performance and model robustness.
Moreover, the fusion parameter can be obtained by using the fusion parameter determination network, so that the model can learn the noise level by itself, which reduces the difficulty of adjusting the noise level.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent.
For simplicity and clarity of description, the following sets forth aspects of the invention by describing several exemplary embodiments. Numerous details in the embodiments are provided solely to aid in understanding the invention. It will be apparent, however, that the embodiments of the invention may be practiced without being limited to these specific details. Some embodiments are not described in detail, but only outlined, in order to avoid unnecessarily obscuring aspects of the present invention. Hereinafter, "comprising" means "including but not limited to", and "according to …" means "according to at least …, but not limited to only …". Unless otherwise specified, the term "a" or "an" is used herein to indicate that the number of a component may be one or more, or at least one. The terms first, second, third, fourth and the like in the description, in the claims, and in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used may be interchanged where appropriate, so that the embodiments of the invention described herein may be practiced in orders other than those specifically illustrated and described.
In the following, some terms related to the embodiments of the present disclosure are explained.
Recurrent Neural Network (RNN): a time-series-based neural network in which the output of the current state serves as an input to the next state.
Long Short-Term Memory network (LSTM): the LSTM contains a cell state, generally denoted c_t, used for linear recurrent information transfer; at the same time, it nonlinearly outputs information to the external state h_t of the hidden layer (hidden state).
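For background, one standard formulation of the LSTM update (given here for reference only, not as part of the claimed embodiment) relates the cell state $c_t$ and the external state $h_t$ as follows, where $\sigma$ denotes the Sigmoid function and $\odot$ denotes element-wise multiplication:

$$
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f), &
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i), &
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o),\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c), &
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, &
h_t &= o_t \odot \tanh(c_t).
\end{aligned}
$$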
Denoising Auto-Encoder: a Sequence-to-Sequence / Encoder-Decoder model in which noise is first introduced into the data and the data is then restored; the denoising auto-encoder can effectively improve the robustness of the model and is better suited to real-world scenarios.
Minimum unit (token): the smallest unit processed by models in the natural language processing field; the minimum unit is a single Chinese character in the Chinese domain and a subword in the English domain.
<BOS>: the sequence/sentence start identifier (Begin of Sequence/Sentence).
<EOS>: the sequence/sentence end identifier (End of Sequence/Sentence).
The applicant has found the following. In prior-art noise-free text classification methods, because the training samples contain no or only a small amount of noisy data, it is difficult to fit the distribution of noisy data, so that when the data noise is large the performance degradation is very obvious and the robustness is poor. In prior-art noisy text classification methods, when the noise is small, the noise-augmented data may fail to simulate data from real scenarios, and the robustness of the model cannot be obviously increased; when the noise is too large, the noise is likely to mask the important information (key words) of the current sample, so that the gap between samples is reduced, the data quality drops obviously, and the model performance degrades obviously.
In view of the above-mentioned drawbacks of prior-art noise-free and noisy text classification methods, the embodiment of the invention provides an effective method for fusing noise-free features and noise-containing features, which increases model performance and model robustness at the same time; moreover, the fusion parameter can be obtained by using a fusion parameter determination network, so that the text classification model can learn the noise level by itself, which reduces the difficulty of adjusting the noise level.
Fig. 1 is an exemplary flowchart of a text classification method according to an embodiment of the present invention.
As shown in fig. 1, the method includes:
step 101: a first vector characterizing the noisy text sequence is determined.
Here, the noisy text sequence may be obtained in at least one of the following ways (a minimal illustrative sketch follows this list):
(1) Masked Language Modeling (MLM) is used to randomly mask a portion of the words in the text sequence.
(2) Misspellings are added to some randomly selected words in the text sequence.
(3) Common errors simulating QWERTY-layout keyboard input are added to the text sequence.
(4) Some randomly selected words are replaced with placeholder tags.
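As a non-limiting illustration, the masking, misspelling and placeholder strategies above can be sketched as follows; the mask token, placeholder token and probabilities are illustrative assumptions, and strategy (3) (QWERTY-keyboard errors) is approximated here by a simple adjacent-character swap:

```python
import random

def add_noise(tokens, mask_token="[MASK]", placeholder="[UNK]", mask_prob=0.15, typo_prob=0.05):
    """Inject noise into a list of minimum units (characters for Chinese, subwords for English).

    The special tokens and probabilities are illustrative assumptions, not part of the embodiment.
    """
    noisy = []
    for tok in tokens:
        r = random.random()
        if r < mask_prob:
            noisy.append(mask_token)                      # (1) randomly mask a portion of the words
        elif r < mask_prob + typo_prob and len(tok) > 1:
            i = random.randrange(len(tok) - 1)            # (2)/(3) misspelling: swap two adjacent characters
            noisy.append(tok[:i] + tok[i + 1] + tok[i] + tok[i + 2:])
        elif r < mask_prob + 2 * typo_prob:
            noisy.append(placeholder)                     # (4) replace some random words with placeholder tags
        else:
            noisy.append(tok)
    return noisy

print(add_noise("an example text sequence to be perturbed".split()))
```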
In one embodiment, a noisy text sequence is input into a pre-trained model for encoding, so that the pre-trained model outputs the first vector characterizing the noisy text sequence.
The pre-training model is usually a language model: because language model training is unsupervised, large-scale corpora are readily available, and the language model is the basis of many typical NLP tasks, such as machine translation, text generation, and reading comprehension. A model pre-trained on a large corpus can learn generic language representations, which helps downstream NLP tasks and avoids training a new model from scratch.
Pre-trained models that may be employed here include BERT, GPT, RoBERTa, Transformer-XL, ERNIE, and the like. The main tasks of the pre-training module are: (1) serving as a word embedding layer; and (2) obtaining a basic vector representation. Preferably, a drop-out process may further be provided in the pre-training module to further enhance the noise in the text sequence.
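As one possible, non-limiting realization of the pre-training module, an off-the-shelf pre-trained encoder can be used as follows; the model name "bert-base-chinese", the Hugging Face transformers library and the drop-out rate are assumptions chosen for illustration:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Any strong pre-trained encoder (BERT, RoBERTa, ERNIE, ...) may be substituted here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
pretrain_model = AutoModel.from_pretrained("bert-base-chinese")
dropout = torch.nn.Dropout(p=0.1)  # optional drop-out to further enhance noise in the representation

noisy_text = "一 个 带 噪 文 本 序 列"  # the noisy text sequence (minimum units are characters for Chinese)
inputs = tokenizer(noisy_text, return_tensors="pt")
with torch.no_grad():
    outputs = pretrain_model(**inputs)
first_vector = dropout(outputs.last_hidden_state)  # basic vector representation, shape (1, T, hidden_size)
```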
Therefore, by adopting a pre-training model, the embodiment of the invention can quickly determine the basic vector representation of the text sequence, thereby increasing the running speed.
While the above exemplary description describes typical examples of noisy text sequences and pre-trained models, those skilled in the art will recognize that this description is merely exemplary and is not intended to limit the scope of embodiments of the present invention.
Step 102: based on the first vector, a second vector characterizing the noisy text sequence and a third vector characterizing the noiseless text sequence are determined.
Here, a denoising self-encoder may be employed to obtain the second vector characterizing the noisy text sequence and the third vector characterizing the noise-free text sequence. The second vector is typically of higher dimension than the first vector.
A denoising self-encoder is a type of self-encoder that accepts corrupted data as input and is trained to predict the original, uncorrupted data as its output. The function to be realized by the denoising self-encoder is to learn, from data with superimposed noise, features that are almost the same as those learned from data without superimposed noise; however, the features learned by the denoising self-encoder from noise-superimposed input are more robust, which avoids various problems encountered by ordinary self-encoders, such as simply learning identical feature values.
In one embodiment, step 102 specifically includes: the first vector is input to a denoising self-Encoder that includes an Encoder (Encoder) and a Decoder (Decoder) to perform encoding on the first vector by the Encoder to output a second vector, and the Decoder performs decoding on the second vector to output a third vector.
Specifically, the encoder performing encoding on the first vector to output the second vector includes: the encoder extracts the external state of the last minimum unit in the forward direction in the first vector and the external state of the last minimum unit in the backward direction in the first vector; the encoder concatenates the external state of the last minimum unit in the forward direction and the external state of the last minimum unit in the backward direction to obtain the second vector. Preferably, the encoder of the denoising self-encoder may be a Bi-RNN, Bi-LSTM, etc.; the decoder of the denoising self-encoder may be an RNN, LSTM, etc.
The decoder is a unidirectional time-sequence model that restores the text sequence in a generative manner. The inputs to the decoder are the second vector and <BOS>, where the output of the external state of the previous minimum unit serves as the input for the next minimum unit. The decoder extracts the external state of the last minimum unit of the unidirectional time-sequence model to obtain the third vector characterizing the noise-free text sequence, and predicts the sequence end symbol <EOS>.
It can be seen that by employing the denoising self-encoder, the second vector characterizing the noisy text sequence and the third vector characterizing the noise-free text sequence can be quickly determined, further increasing the running speed.
Step 103: the second vector and the third vector are fused based on a fusion parameter to generate a fused vector, wherein the fusion parameter is determined based on the first vector.
For example, assume that the fusion parameter is gate, the second vector is H_noised, and the third vector is H_denoised. Then the non-denoised feature vector H_noised and the denoised feature vector H_denoised are fused to obtain the fused vector: gate * H_noised + (1 - gate) * H_denoised.
Step 104: based on the fused vector, a text classification result of the noisy text sequence is determined.
In one embodiment, determining the text classification result for the noisy text sequence based on the post-fusion vector comprises: the fused vector is input into a classification model to output a text classification result based on the fused vector by the classification model.
Therefore, the embodiment of the invention can integrate noise-free and noise-containing characteristics, thereby simultaneously increasing model performance and model robustness.
In one embodiment, the method further comprises: inputting the first vector into a fusion parameter determination network, the fusion parameter determination network comprising an encoder of a Transformer model and a fully connected network; the encoder of the Transformer model is adapted to encode the first vector to generate a fourth vector characterizing the ratio between the characterization vector of the noisy text sequence and the characterization vector of the noise-free text sequence; the fully connected network is adapted to convert the fourth vector into the fusion parameter.
Therefore, the embodiment of the invention utilizes the fusion parameters to determine the network to obtain the fusion parameters, so that the model can learn the noise level by itself, thereby eliminating the need of manually adjusting the noise level and reducing the adjustment difficulty.
Fig. 2 is an exemplary schematic diagram of a text classification process according to an embodiment of the invention.
In fig. 2, the original input is a noisy, perturbed text sequence Input. First, the text sequence Input is fed into the pre-training model to obtain a basic vector representation Input' (corresponding to the first vector in fig. 1). Input' is then fed into the denoising self-encoder to obtain H_noised characterizing the noisy text sequence (corresponding to the second vector in fig. 1, and of higher dimension than the first vector) and H_denoised characterizing the noise-free text sequence (corresponding to the third vector in fig. 1). In addition, Input' is fed into the fusion parameter determination network to obtain the fusion parameter gate from that network. The fusion parameter gate is used to fuse the noisy feature vector H_noised and the denoised feature vector H_denoised. The fusion result is input into the classification model, and the text category of the text sequence is obtained through prediction.
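Putting the components of fig. 2 together, the overall forward pass can be sketched as follows; the sub-module names (pretrain, dae_encoder, dae_decoder, gate_net, classifier) are hypothetical placeholders for the components described below, not prescribed implementations:

```python
import torch.nn as nn

class NoisyTextClassifier(nn.Module):
    """Minimal sketch of the pipeline of fig. 2; sub-modules are assumed to be defined elsewhere."""

    def __init__(self, pretrain, dae_encoder, dae_decoder, gate_net, classifier):
        super().__init__()
        self.pretrain = pretrain        # pre-training module: Input -> Input'
        self.dae_encoder = dae_encoder  # denoising self-encoder, encoder end: Input' -> H_noised
        self.dae_decoder = dae_decoder  # denoising self-encoder, decoder end: H_noised -> H_denoised
        self.gate_net = gate_net        # fusion parameter determination network: Input' -> gate
        self.classifier = classifier    # classification model: fused vector -> class logits

    def forward(self, noisy_inputs):
        basic = self.pretrain(noisy_inputs)        # first vector (Input')
        h_noised = self.dae_encoder(basic)         # second vector
        h_denoised = self.dae_decoder(h_noised)    # third vector
        gate = self.gate_net(basic)                # fusion parameter in (0, 1)
        fused = gate * h_noised + (1 - gate) * h_denoised
        return self.classifier(fused)              # text classification result
```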
The specific components in fig. 2 are described below.
(1): pre-training module
Preferably, the pre-training module may use any pre-training model with strong performance, such as BERT, RoBERTa, or ERNIE. The pre-training module has two tasks: the first is to serve as a word embedding layer, and the second is to obtain a basic vector representation.
(2): denoising self-encoder
The denoising self-encoder is mainly used to obtain the noisy feature vector H_noised and the denoised feature vector H_denoised. For example, a Bi-RNN or Bi-LSTM may be used at the encoder end of the denoising self-encoder.
Fig. 3 is an exemplary block diagram of an encoder in a denoising self-encoder according to an embodiment of the present invention.
The encoder in the denoising self-encoder is a bidirectional time-sequence model; by learning in both the forward and the backward direction, the model can be ensured to fully learn the semantics of the whole text sequence. Suppose Input' comprises T minimum units, denoted $x_1, x_2, x_3, x_4, \ldots, x_T$.
In the forward model: the first minimum unit is $x_1$, whose cell state for linear recurrent information transfer is $\overrightarrow{c}_1$ and whose external state of nonlinearly output information to the hidden layer is $\overrightarrow{h}_1$; the second minimum unit is $x_2$, with cell state $\overrightarrow{c}_2$ and external state $\overrightarrow{h}_2$; the third minimum unit is $x_3$, with cell state $\overrightarrow{c}_3$ and external state $\overrightarrow{h}_3$; ...; the T-th minimum unit is $x_T$, with cell state $\overrightarrow{c}_T$ and external state $\overrightarrow{h}_T$.
In the backward model: the first minimum unit is $x_T$, with cell state $\overleftarrow{c}_1$ and external state $\overleftarrow{h}_1$; the second minimum unit is $x_{T-1}$, with cell state $\overleftarrow{c}_2$ and external state $\overleftarrow{h}_2$; ...; the (T-2)-th minimum unit is $x_3$, with cell state $\overleftarrow{c}_{T-2}$ and external state $\overleftarrow{h}_{T-2}$; the (T-1)-th minimum unit is $x_2$, with cell state $\overleftarrow{c}_{T-1}$ and external state $\overleftarrow{h}_{T-1}$; the T-th minimum unit is $x_1$, with cell state $\overleftarrow{c}_T$ and external state $\overleftarrow{h}_T$.
The encoder extracts the external state $\overrightarrow{h}_T$ of the last minimum unit of the forward model and the external state $\overleftarrow{h}_T$ of the last minimum unit of the backward model, and concatenates these two vectors along the feature dimension, i.e. $[\overrightarrow{h}_T; \overleftarrow{h}_T]$, whose transpose is taken as the noisy vector representation H_noised.
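A minimal sketch of such an encoder, assuming a single-layer Bi-LSTM and illustrative dimensions (e.g. a 768-dimensional Input' and a 512-dimensional hidden state), could look as follows:

```python
import torch
import torch.nn as nn

class DAEEncoder(nn.Module):
    """Bidirectional LSTM encoder of the denoising self-encoder; dimensions are illustrative assumptions."""

    def __init__(self, input_size=768, hidden_size=512):
        super().__init__()
        self.bilstm = nn.LSTM(input_size, hidden_size, num_layers=1,
                              batch_first=True, bidirectional=True)

    def forward(self, first_vector):                 # first_vector: (batch, T, input_size)
        _, (h_n, _) = self.bilstm(first_vector)      # h_n: (2, batch, hidden_size) for a single layer
        h_forward_last = h_n[0]                      # external state of the last minimum unit, forward direction
        h_backward_last = h_n[1]                     # external state of the last minimum unit, backward direction
        return torch.cat([h_forward_last, h_backward_last], dim=-1)   # H_noised: (batch, 2 * hidden_size)
```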
Fig. 4 is an exemplary block diagram of a decoder in a denoising self-encoder according to an embodiment of the present invention.
The decoder is a unidirectional time-sequence model that restores the text sequence in a generative manner. The initial input of the decoder is H_noised and <BOS>; then the output of the external state of the previous minimum unit is used as the input for the next minimum unit, and the unidirectional time-sequence model finally predicts the sequence end symbol <EOS> and yields the denoised vector representation H_denoised.
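One way to read this decoding procedure is the greedy generation loop sketched below; the vocabulary size, embedding size, special-token ids and maximum length are assumptions, and the decoder hidden size is chosen to match the dimension of H_noised from the encoder sketch above:

```python
import torch
import torch.nn as nn

class DAEDecoder(nn.Module):
    """Unidirectional LSTM decoder of the denoising self-encoder; all sizes and token ids are assumptions."""

    def __init__(self, vocab_size=21128, embed_size=256, hidden_size=1024,
                 bos_id=101, eos_id=102, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.cell = nn.LSTMCell(embed_size, hidden_size)
        self.out = nn.Linear(hidden_size, vocab_size)
        self.bos_id, self.eos_id, self.max_len = bos_id, eos_id, max_len

    def forward(self, h_noised):                      # h_noised: (batch, hidden_size)
        batch = h_noised.size(0)
        h, c = h_noised, torch.zeros_like(h_noised)   # initial external state comes from H_noised
        token = torch.full((batch,), self.bos_id, dtype=torch.long,
                           device=h_noised.device)    # generation starts from <BOS>
        for _ in range(self.max_len):
            h, c = self.cell(self.embed(token), (h, c))  # previous external state feeds the next minimum unit
            token = self.out(h).argmax(dim=-1)           # generated token becomes the next input
            if (token == self.eos_id).all():             # stop once <EOS> has been predicted
                break
        return h                                       # H_denoised: external state of the last minimum unit
```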
(3): fusion parameter determination network
Fig. 5 is an exemplary block diagram of a fusion parameter determination network according to an embodiment of the present invention.
The fusion parameter determination network includes an encoder (Transformer Encoder Block) in a Transformer model and a fully connected network. The encoder in the Transformer model is adapted to encode Input' to generate a fourth vector characterizing the ratio between the characterization vector of the noisy text sequence and the characterization vector of the noise-free text sequence; the fully connected network is adapted to map the fourth vector into one dimension and then activate it with a Sigmoid activation function to obtain the fusion parameter gate. For example, the Transformer Encoder Block may include a Multi-Head Attention mechanism, Add & Norm processing, and Feed Forward processing. Here, Add is a residual connection, usually used to address the difficulty of training multi-layer networks, which lets the network focus only on the current difference; Norm refers to Layer Normalization, usually used for RNN structures, which converts the inputs of each layer of neurons to the same mean and variance so as to speed up convergence; Feed Forward is a two-layer fully connected layer, where the first layer uses a ReLU activation function and the second layer uses no activation function. Then, the non-denoised feature vector H_noised and the denoised feature vector H_denoised are fused using gate to obtain the classification data:
classification_data = gate * H_noised + (1 - gate) * H_denoised.
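A minimal sketch of the fusion parameter determination network and the gated fusion follows, assuming PyTorch's built-in Transformer encoder layer and mean pooling over the sequence (the pooling step and all dimensions are assumptions, since the embodiment only specifies mapping the fourth vector into one dimension followed by a Sigmoid):

```python
import torch
import torch.nn as nn

class GateNetwork(nn.Module):
    """Transformer Encoder Block followed by a fully connected layer and a Sigmoid; sizes are assumptions."""

    def __init__(self, hidden_size=768, num_heads=8):
        super().__init__()
        # Multi-Head Attention + Add & Norm + Feed Forward, as in a standard Transformer encoder block
        self.encoder_block = nn.TransformerEncoderLayer(d_model=hidden_size, nhead=num_heads,
                                                        batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)   # maps the fourth vector into one dimension

    def forward(self, first_vector):                      # Input': (batch, T, hidden_size)
        fourth_vector = self.encoder_block(first_vector)  # (batch, T, hidden_size)
        pooled = fourth_vector.mean(dim=1)                # sequence pooling is an assumption of this sketch
        return torch.sigmoid(self.fc(pooled))             # fusion parameter gate in (0, 1), shape (batch, 1)

def fuse(gate, h_noised, h_denoised):
    """Gated fusion of the noisy and denoised feature vectors into the classification data."""
    return gate * h_noised + (1 - gate) * h_denoised
```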
(4): classification model
The classification model uses the classification data obtained by fusing the non-denoised feature vector H_noised and the denoised feature vector H_denoised as its network input. The classification model is, in particular, a deep neural network with a nonlinear activation function. After the classification data is input into the classification model, the classification model obtains the classification result of the text sequence.
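A minimal sketch of such a classification model, assuming a two-layer feed-forward network whose input size matches the fused vector and whose hidden size and class count are illustrative:

```python
import torch.nn as nn

class ClassificationModel(nn.Module):
    """Deep neural network with a nonlinear activation function; layer sizes and class count are assumptions."""

    def __init__(self, input_size=1024, hidden_size=256, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_size, hidden_size),
            nn.ReLU(),                        # nonlinear activation function
            nn.Linear(hidden_size, num_classes),
        )

    def forward(self, classification_data):    # the fused vector gate * H_noised + (1 - gate) * H_denoised
        return self.net(classification_data)   # classification result (class logits) of the text sequence
```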
The embodiment of the invention also provides a text classification device. Fig. 6 is an exemplary block diagram of a text classification apparatus according to an embodiment of the present invention.
As shown in fig. 6, the text classification apparatus 600 includes:
a first determining module 601 for determining a first vector characterizing a noisy text sequence;
a second determining module 602 for determining, based on the first vector, a second vector characterizing a noisy text sequence and a third vector characterizing a non-noisy text sequence;
a fusion module 603 for fusing the second vector and the third vector to generate a fused vector based on a fusion parameter, wherein the fusion parameter is determined based on the first vector;
a third determining module 604 is configured to determine a text classification result of the noisy text sequence based on the fused vector.
In one embodiment, the first determining module 601 is configured to input a noisy text sequence into a pre-training model for encoding, so as to output the first vector from the pre-training model; the second determining module 602 is configured to input the first vector into a denoising self-encoder, the denoising self-encoder comprising an encoder and a decoder, so that the encoder performs encoding on the first vector to output the second vector and the decoder performs decoding on the second vector to output the third vector; the third determining module 604 is configured to input the fused vector into a classification model, so as to output the text classification result based on the fused vector by the classification model.
In one embodiment, the second determining module 602 is configured to input the first vector into the denoising self-encoder, so that the encoder of the denoising self-encoder: extracts the external state of the last minimum unit in the forward direction in the first vector and the external state of the last minimum unit in the backward direction in the first vector; and concatenates the external state of the last minimum unit in the forward direction and the external state of the last minimum unit in the backward direction to obtain the second vector.
In one embodiment, the apparatus further includes a fusion parameter determination module 605 for inputting the first vector into a fusion parameter determination network, the fusion parameter determination network comprising an encoder of a Transformer model and a fully connected network; the encoder of the Transformer model is adapted to encode the first vector to generate a fourth vector characterizing a ratio between the characterization vector of the noisy text sequence and the characterization vector of the noise-free text sequence; the fully connected network is adapted to convert the fourth vector into the fusion parameter.
In summary, in the embodiment of the present invention, certain words in a text sequence serving as source data may first be randomly masked to obtain a noisy text sequence; the noisy text sequence is then encoded by the pre-training model to obtain a basic vector representation; a noisy vector representation is then obtained at the encoder end of the denoising self-encoder, and a denoised vector representation is obtained at the decoder end; the fusion parameter is extracted from the basic vector representation through the fusion parameter determination network; the noise-free vector representation and the noisy vector representation are fused using the fusion parameter to obtain a fused vector representation; and the fused vector representation is used to predict the classification category, yielding the text classification result. It can be seen that the embodiment of the present invention proposes an effective method for fusing noise-free and noise-containing features while increasing model performance and model robustness.
Moreover, the fusion parameter can be obtained by using the fusion parameter determination network, so that the model can learn the noise level by itself, which reduces the difficulty of adjusting the noise level.
Embodiments of the present invention also provide a computer readable medium storing instructions that, when executed by a processor, perform the steps in the text classification method as described above. The computer readable medium in practical use may be contained in the apparatus/device/system described in the above embodiment or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs, which when executed, implement the text classification method described in the above embodiments. According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: portable computer diskette, hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), portable compact disc read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the foregoing, but are not intended to limit the scope of the invention. In the disclosed embodiments, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The embodiment of the present invention further provides an electronic device, into which an apparatus for implementing the method of the embodiment of the present invention may be integrated. Fig. 7 shows an exemplary structural diagram of an electronic device according to an embodiment of the present invention.
specifically: the electronic device may include a processor 701 of one or more processing cores, a memory 702 of one or more computer-readable storage media, and a computer program stored on the memory and executable on the processor. The above text classification method may be implemented when the program of the memory 702 is executed.
In practical applications, the electronic device may further include a power supply 703, an input unit 704, and an output unit 705. It will be appreciated by those skilled in the art that the structure of the electronic device shown in fig. 7 is not limiting of the electronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. Wherein: the processor 701 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of a server and processes data by running or executing software programs and/or modules stored in the memory 702, and calling data stored in the memory 702, thereby performing overall monitoring of the electronic device. The memory 702 may be used to store software programs and modules, i.e., the computer-readable storage media described above. The processor 701 executes various functional applications and data processing by running software programs and modules stored in the memory 702. The memory 702 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, application programs required for at least one function, and the like; the storage data area may store data created according to the use of the server, etc. In addition, the memory 702 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 702 may also include a memory controller to provide access to the memory 702 by the processor 701.
The electronic device further comprises a power supply 703 for supplying power to the various components, which may be logically connected to the processor 701 by a power management system, so that functions of managing charging, discharging, power consumption management, etc. are implemented by the power management system. The power supply 703 may also include one or more of any component, such as a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, etc. The electronic device may further comprise an input unit 704, which input unit 704 may be used for receiving input digital or character information and generating keyboard, mouse, joystick, optical or trackball signal inputs in connection with user settings and function control. The electronic device may further comprise an output unit 705, which output unit 705 may be used for displaying information entered by a user or provided to a user as well as various graphical user interfaces, which may be constituted by graphics, text, icons, video and any combination thereof.
Embodiments of the present invention also provide a computer program product comprising computer instructions which, when executed by a processor, implement a text classification method as described in any of the above embodiments.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The principles and embodiments of the present invention have been described herein with reference to specific embodiments, but the description of the embodiments is only for aiding in the understanding of the method and core concept of the present invention and is not intended to limit the invention. It will be apparent to those skilled in the art that variations can be made in the present embodiments and applications within the spirit and principles of the invention, and any modifications, equivalents, improvements, etc. are intended to be included within the scope of the present invention.