Disclosure of Invention
Embodiments of the present application provide an intelligent translation method, an intelligent translation apparatus, a terminal device and a computer-readable storage medium, aiming to solve the problem that the one-to-one, redundant translation of a source text in existing machine translation approaches results in a poor reading experience for the user.
In a first aspect, an embodiment of the present application provides an intelligent translation method, including:
acquiring text data to be translated;
translating the text data to be translated into target text data;
and inputting the target text data into a text abstract extraction network model to obtain abstract data output by the text abstract extraction network model, wherein the abstract data is abstract information of the target text data.
With reference to the first aspect, in a possible implementation manner, the text summarization extraction network model is an HSSAS model including a bidirectional gated recurrent unit and an attention unit, the HSSAS model includes a word encoding layer, a sentence encoding layer and a classification layer, the word encoding layer includes a word vector layer, a word encoder and a word attention layer, and the sentence encoding layer includes a sentence vector layer, a sentence encoder and a sentence attention layer;
the inputting the target text data into a text abstract extraction network model to obtain abstract data output by the text abstract extraction network model includes:
calculating, by the word encoding layer, a sentence vector for each sentence in the target text data;
calculating a text vector of the target text data through the sentence coding layer according to the sentence vector;
and extracting abstract data of the target text data through the classification layer according to the text vector.
With reference to the first aspect, in one possible implementation manner, the calculating, by the word encoding layer, a sentence vector of each sentence in the target text data includes:
calculating a forward word vector and a backward word vector for each word in the target text data by the bidirectional gated recurrent unit;
respectively splicing the forward word vectors and the backward word vectors of all words to obtain a hidden layer representation vector of each sentence;
splicing the hidden layer representation vectors of each sentence to obtain a hidden layer representation vector of the target text data;
calculating a weight coefficient matrix of a word level through the attention unit according to the hidden layer representation vector of the target text data;
and calculating a sentence vector of each sentence according to the weight coefficient matrix of the word level.
With reference to the first aspect, in a possible implementation manner, the calculating, by the sentence coding layer and according to the sentence vector, a text vector of the target text data includes:
calculating a forward sentence vector and a backward sentence vector of each sentence in the target text data by the bidirectional gated recurrent unit according to the sentence vectors;
respectively splicing the forward sentence vector and the backward sentence vector of each sentence to obtain a hidden layer representation vector of the target text data;
calculating a sentence-level weight coefficient matrix by the attention unit according to the hidden layer representation vector of the target text data;
and calculating a text vector of the target text data according to the weight coefficient matrix of the sentence level.
With reference to the first aspect, in a possible implementation manner, translating the text data to be translated into target text data includes:
inputting the text data to be translated into a translation model, and obtaining target text data output by the translation model, wherein the target text data is a translation result of the text data to be translated.
With reference to the first aspect, in a possible implementation manner, the translation model is a Transformer model including an adaptive attention unit, the Transformer model includes an embedding layer, an encoding layer, a decoding layer and an output layer, an encoder of the encoding layer includes the adaptive attention unit and a feed-forward neural network, and a decoder of the decoding layer includes the adaptive attention unit, a masked multi-head attention unit and a feed-forward neural network;
inputting the text data to be translated into a translation model to obtain target text data output by the translation model, wherein the method comprises the following steps:
calculating an embedding vector of each sentence in the text data to be translated through the embedding layer;
according to the embedded vector, each sentence is coded through the coding layer to obtain a coding result;
decoding the coding result through the decoding layer to obtain a decoding result;
and outputting the target text data through the output layer according to the decoding result.
With reference to the first aspect, in a possible implementation manner, after the obtaining text data to be translated, the method further includes:
detecting whether the text data to be translated is legal or not;
if the text data to be translated is legal, the step of translating the text data to be translated into target text data is carried out;
and if the text data to be translated is illegal, outputting alarm information.
With reference to the first aspect, in a possible implementation manner, the detecting whether the text data to be translated is legal includes:
counting the number of illegal characters in the text data to be translated and the length of the text data to be translated;
if the number of the illegal characters is larger than a preset number threshold value and/or the length is smaller than or equal to a preset threshold value, judging that the text data to be translated is illegal;
and if the number of the illegal characters is less than or equal to the preset number threshold and the length is greater than the preset threshold, judging that the text data to be translated is legal.
In a second aspect, an embodiment of the present application provides an intelligent translation apparatus, including:
the acquisition module is used for acquiring text data to be translated;
the translation module is used for translating the text data to be translated into target text data;
and the abstract extraction module is used for inputting the target text data into a text abstract extraction network model and obtaining abstract data output by the text abstract extraction network model, wherein the abstract data is abstract information of the target text data.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the method according to any one of the above first aspects.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the method according to any one of the above first aspects.
In a fifth aspect, embodiments of the present application provide a computer program product, which, when run on a terminal device, causes the terminal device to perform the method of any one of the first aspect.
According to the method and the apparatus of the present application, the input text data to be translated is translated, and the text abstract extraction network model is used to extract abstract data from the translated target text data. In other words, the core content is extracted after the source text is translated: the main information of the text is distilled, the text content is shortened, reading efficiency is improved, reading time is reduced, the user can quickly grasp the key information of texts in other languages, and the reading experience of the user is improved.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application.
It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The intelligent translation method provided by the embodiment of the application can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, notebook computers, ultra-mobile personal computers (UMPCs), netbooks and Personal Digital Assistants (PDAs), and the specific type of the terminal device is not limited at all in the embodiment of the application.
The intelligent translation scheme of the embodiments of the present application takes long text in a source language as input and outputs short text in a translated target language; for example, given a paper in English, it outputs a short Chinese text that is the abstract of the English paper. After the scheme performs machine translation on the input long source-language text, abstract extraction is performed on the translated target text data through a text abstract extraction network model to obtain the abstract information. The machine translation model and the text abstract extraction network model in the embodiments of the present application are both deep neural network models.
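For illustration only, the following short Python sketch shows how these two stages could be chained; the TranslationModel and SummaryModel classes are hypothetical stubs standing in for the improved Transformer model and the improved HSSAS model detailed below, not the actual implementation of the embodiments.

    # Illustrative sketch only: the translate-then-summarize flow of the scheme.
    # TranslationModel and SummaryModel are hypothetical stubs standing in for
    # the improved Transformer model and the improved HSSAS model described below.

    class TranslationModel:
        def translate(self, source_text: str) -> str:
            # A real embodiment would run the Transformer with adaptive attention;
            # this stub simply returns the input unchanged.
            return source_text

    class SummaryModel:
        def summarize(self, target_text: str) -> str:
            # A real embodiment would run the HSSAS model with BiGRU encoders;
            # this stub returns the first sentence only.
            return target_text.split(".")[0]

    def intelligent_translate(source_text: str) -> str:
        target_text = TranslationModel().translate(source_text)  # translation step
        return SummaryModel().summarize(target_text)             # abstract extraction step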
The technical solutions provided in the embodiments of the present application will be described below with reference to specific embodiments.
Referring to fig. 1, a schematic flow chart diagram of an intelligent translation method provided in an embodiment of the present application is shown, where the method may include the following steps:
and S101, acquiring text data to be translated.
It is to be understood that the text data to be translated may be any language and any type of text data. For example, the text data to be translated is a japanese news article. The text data to be translated is generally a long text, that is, the content and the number of words of the source language text to be translated are both large.
In some embodiments, the input text data to be translated may be illegal, which degrades the subsequent translation result. To further improve the translation effect, after the input text data to be translated is obtained, its legality may be checked, and only after the check passes is the text data to be translated translated into target text data. That is, after the obtaining of the text data to be translated, the method may further include: detecting whether the text data to be translated is legal; if the text data to be translated is legal, proceeding to the step of translating the text data to be translated into target text data; and if the text data to be translated is illegal, outputting warning information.
Further, the process of detecting whether the text data to be translated is legal may include: counting the number of illegal characters in the text data to be translated and the length of the text data to be translated; if the number of illegal characters is larger than a preset number threshold and/or the length is smaller than or equal to a preset threshold, judging that the text data to be translated is illegal; and if the number of illegal characters is less than or equal to the preset number threshold and the length is greater than the preset threshold, judging that the text data to be translated is legal.
It can be understood that the preset number threshold and the preset threshold can be set according to the actual application requirements. For example, the preset number threshold is one third of the total number of characters in the text, that is, when the number of illegal characters in the text data to be translated exceeds one third of the total number of characters, the text data to be translated is considered illegal input. For another example, the preset threshold is 10 characters, that is, when the length of the input text is less than or equal to 10 characters, the input is considered too short to extract keywords in combination with the context, and the input text data to be translated is determined to be illegal input. Otherwise, if the number of illegal characters is less than or equal to the preset number threshold and the length is greater than the preset threshold, the input is considered legal, and the legal text data to be translated is input into the machine translation model.
Of course, it is also possible to judge whether the input text is legal by counting only the number of illegal characters or only the length of the input text.
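As one possible reading of this check (a sketch under the assumption that an "illegal character" is anything outside letters, digits, CJK characters, whitespace and common punctuation, and that the thresholds are the example values above; the embodiment may define both differently), the counting and threshold logic might look as follows:

    import re

    # Sketch of the legality check. The definition of an "illegal character" and
    # the threshold values are assumptions chosen to match the examples above.
    LEGAL_CHAR = re.compile(r"[0-9A-Za-z\u4e00-\u9fff\s.,;:!?'\"()\-]")

    def is_legal(text: str, min_length: int = 10, max_illegal_ratio: float = 1 / 3) -> bool:
        illegal_count = sum(1 for ch in text if not LEGAL_CHAR.match(ch))
        too_many_illegal = illegal_count > max_illegal_ratio * len(text)
        too_short = len(text) <= min_length
        return not (too_many_illegal or too_short)  # illegal if either condition holds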
And step S102, translating the text data to be translated into target text data.
It should be noted that the text data to be translated may be translated into the target text data in any manner. In some embodiments, the text data to be translated may be input into a translation model, and the target text data output by the translation model may be obtained. The translation model may be any existing model; for example, the translation model is a Transformer model that includes self-attention. However, a Transformer model with standard self-attention cannot autonomously decide the size of its context field, and is limited in computation speed and in the strength of the context correlation it can capture. In order to enable the model to autonomously decide the context length, reduce the required computation, increase the computation speed and lengthen the context it can associate, the embodiment of the present application provides an improved Transformer model, which replaces the self-attention in the original model with an Adaptive Attention unit.
In some embodiments, referring to the schematic block diagram of the structure of the improved Transformer model shown in fig. 2, the above-mentioned translation model is a Transformer model including an adaptive attention unit, and the Transformer model includes an embedding (Embedding) layer 21, an encoding (Encoder) layer 22, a decoding (Decoder) layer 23 and an output layer 24. The encoder 221 of the encoding layer 22 includes an adaptive attention (Adaptive-Attention) unit 2211 and a feedforward neural network 2212, and the decoder 231 of the decoding layer 23 includes an adaptive attention (Adaptive-Attention) unit 2311, a masked multi-head attention (Masked Multi-Head-Attention) unit 2312 and a feedforward neural network 2313. The output layer 24 includes a fully connected layer 241 and a softmax layer 242.
The encoding layer 22 includes 6 encoders 221, each of which includes two sublayers, namely an Adaptive-Attention sublayer and a feedforward neural network. The decoding layer 23 includes 6 decoders 231, each of which includes 3 sublayers, namely an Adaptive-Attention sublayer, a Masked Multi-Head-Attention sublayer and a feedforward neural network.
The embedding (Embedding) layer 21 may perform an embedding operation on the input text data to be translated. Specifically, each sentence is a sequence of words, which is turned into an embedding vector by the embedding operation. The vectors obtained after the embedding operation are input to the encoding layer 22.
After the embedding vector is input into the encoding layer, it is first processed by the Adaptive-Attention sublayer and then passed to the feedforward neural network; the computation of the feedforward neural network can be parallelized, and the resulting output can be input into the next encoder. The Adaptive-Attention sublayer can associate the context information of each word in the text and automatically adapt the span length, so that longer context information can be combined when predicting the current word without increasing the amount of computation. In addition, the parallel feedforward neural network computation enables the model to approximate the objective function more quickly.
The output of the encoding layer is used as the input of the decoding layer; the Adaptive-Attention sublayer and the feedforward neural network in the decoding layer play the same role as in the encoding layer, and the description is not repeated here. The Masked Multi-Head-Attention sublayer in the decoding layer may mask certain values so that they have no effect when the parameters are updated.
The output of the decoding layer is used as the input of the output layer, which includes a fully connected layer and a softmax layer. The role of the fully connected layer is to integrate the features and reduce the influence of feature location. The softmax layer performs classification and takes the item with the highest probability as the prediction result; for example, if the vocabulary contains ten thousand words, the softmax layer outputs a probability for each of the ten thousand words, and the word with the highest probability is selected as the final result.
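By way of illustration, the following PyTorch sketch shows one encoder sub-block in which the attention reach is governed by a learnable span that softly masks distant positions. This is only one possible reading of the adaptive attention unit, not necessarily the formulation used in the embodiment, and the dimensions (d_model, d_ff, max_span) are assumed values.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AdaptiveAttentionEncoderLayer(nn.Module):
        # One encoder 221: an adaptive-attention sublayer followed by a
        # feedforward sublayer, each with a residual connection and layer norm.
        def __init__(self, d_model: int = 512, d_ff: int = 2048, max_span: int = 128):
            super().__init__()
            self.q = nn.Linear(d_model, d_model)
            self.k = nn.Linear(d_model, d_model)
            self.v = nn.Linear(d_model, d_model)
            self.span = nn.Parameter(torch.tensor(0.5))  # learnable span fraction
            self.max_span = max_span
            self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)

        def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d_model)
            seq_len, d = x.size(1), x.size(2)
            scores = self.q(x) @ self.k(x).transpose(1, 2) / d ** 0.5
            # Soft mask: pairs of positions farther apart than the learned span are
            # progressively down-weighted, so the model decides its own context reach.
            pos = torch.arange(seq_len, device=x.device)
            dist = (pos.unsqueeze(0) - pos.unsqueeze(1)).abs().to(x.dtype)
            span = torch.clamp(self.span, 0.0, 1.0) * self.max_span
            mask = torch.clamp(1.0 - (dist - span) / self.max_span, 0.0, 1.0)
            attn = F.softmax(scores, dim=-1) * mask
            attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)  # renormalize weights
            out = self.norm1(x + attn @ self.v(x))   # adaptive-attention sublayer
            return self.norm2(out + self.ffn(out))   # feedforward sublayer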
Based on the model shown in fig. 2, referring to the schematic block diagram of the translation process shown in fig. 3, the process of inputting the text data to be translated into the translation model and obtaining the target text data output by the translation model specifically includes:
step S301, calculating an embedding vector of each sentence in the text data to be translated through an embedding layer.
And step S302, coding each sentence through a coding layer according to the embedded vector to obtain a coding result.
And step S303, decoding the coding result through the decoding layer to obtain a decoding result.
And step S304, outputting the target text data through the output layer according to the decoding result.
It can be seen that performing machine translation using the improved Transformer model provided in the embodiments of the present application can improve the translation effect.
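Continuing the sketch above (same imports and assumed dimensions; the decoding step is deliberately omitted, so this only illustrates the shapes flowing through steps S301, S302 and S304):

    vocab_size, d_model = 10000, 512                     # assumed vocabulary and model size
    embedding = nn.Embedding(vocab_size, d_model)        # S301: embedding layer
    encoders = nn.Sequential(*[AdaptiveAttentionEncoderLayer(d_model) for _ in range(6)])
    output_layer = nn.Linear(d_model, vocab_size)        # S304: fully connected layer

    tokens = torch.randint(0, vocab_size, (1, 20))       # a toy 20-token input sentence
    encoded = encoders(embedding(tokens))                # S302: encoding layer
    # S303 (the decoding layer with masked multi-head attention) is omitted here;
    # a real decoder would attend to `encoded` before the output layer.
    probs = F.softmax(output_layer(encoded), dim=-1)     # S304: softmax over the vocabulary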
It should be appreciated that machine translation translates long text in a source language into long text in a target language, typically as a one-to-one translation. For example, the input text data to be translated is an English paper, and a Chinese translation is obtained after machine translation. Current intelligent translation approaches present the machine translation result to the user directly once it is obtained. The user therefore has to spend a lot of time reading the translated full text to obtain its key information, which is a poor reading experience for users who want to quickly grasp the key information of a foreign-language text. In order to improve the reading experience, the embodiment of the present application uses a deep neural network to extract the abstract information of the translated text after machine translation.
Step S103, inputting the target text data into the text abstract extracting network model, and obtaining abstract data output by the text abstract extracting network model, wherein the abstract data is abstract information of the target text data.
It should be noted that the text summarization network model may be any existing summarization model, for example, the HSSAS model, which includes a BiLSTM (Bidirectional Long Short-Term Memory) network and an attention mechanism.
In some embodiments, an improved HSSAS model is provided, which replaces the original BiLSTM with a bidirectional gated recurrent unit (BiGRU); compared with the BiLSTM, the BiGRU has a simpler network structure and faster computation.
Referring to fig. 4, a schematic diagram of the improved HSSAS model is shown; the text abstraction network model is an HSSAS model including a bidirectional gated recurrent unit and an attention unit, and the HSSAS model includes a word encoding layer, a sentence encoding layer and a classification layer.
Based on the model shown in fig. 4, referring to the flow schematic block diagram of the text summarization process shown in fig. 5, the process of inputting the target text data into the text summarization extraction model to obtain the summarized data output by the text summarization extraction model may include:
step S501, a sentence vector of each sentence in the target text data is calculated by the word encoding layer.
It should be noted that the word encoding layer includes a word vector layer, a word encoder and a word attention layer, and processes the input target text data. The word encoder comprises a forward GRU and a backward GRU, and each word in a sentence is encoded through both of them so as to extract semantic information in two directions. The obtained forward word vector and backward word vector are then spliced to obtain a hidden layer representation vector of the whole sentence, and the hidden layer representation vectors of each sentence are spliced to obtain the word hidden layer representation vector of the whole document. Finally, the word-level weight coefficient matrix obtained from the word hidden layer vector of the whole document is multiplied with the word hidden layer representation vector of the whole document to obtain the sentence vector of each sentence.
Specifically, referring to the flow schematic block diagram of the word encoding layer shown in fig. 6, the above process of calculating a sentence vector of each sentence in the target text data by the word encoding layer may include:
step S601, calculating a forward word vector and a backward word vector of each word in the target text data through a bidirectional gating circulation unit.
Step S602, the forward word vectors and the backward word vectors of all the words are spliced respectively to obtain the hidden layer representation vector of each sentence.
Specifically, for a certain word $v$, the forward word vector is $h_{fwd} = \mathrm{GRU}_{fwd}(v, h_{t-1})$ and the backward word vector is $h_{bwd} = \mathrm{GRU}_{bwd}(v, h_{t+1})$. The forward word vector and the backward word vector are spliced to obtain $h_i = [h_{fwd}, h_{bwd}]$.
And step S603, splicing the hidden layer representation vectors of each sentence to obtain the hidden layer representation vectors of the target text data.
Specifically, the hidden layer representation vectors of each sentence are spliced, giving the hidden layer representation of the target text data $H_s = (h_1, h_2, h_3, \dots, h_n)$.
And step S604, calculating a weight coefficient matrix of a word level through an attention unit according to the hidden layer representation vector of the target text data.
Step S605, calculating a sentence vector of each sentence according to the weight coefficient matrix of the word hierarchy.
It will be appreciated that the attention mechanism allows each word to contribute more or less to the encoded representation according to its semantic contribution to the sentence. Specifically, $H_s$ is substituted into a softmax function to obtain the weight coefficient matrix $a_s$. By means of the weight matrix $a_s$, words with high scores can be retained and words with low scores discarded. The sentence vector $S_i$ of each sentence is then extracted as $S_i = H_s \cdot a_s$. In this way, a long sentence can be converted into a vector of fixed length by the word encoder.
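A minimal PyTorch sketch of this word encoding layer is given below for illustration; the embedding and hidden sizes are assumed values, and PyTorch's bidirectional GRU already concatenates the forward and backward outputs, so steps S601 and S602 are covered in one call.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class WordEncoder(nn.Module):
        # Word vector layer + bidirectional GRU word encoder + word attention layer.
        def __init__(self, vocab_size: int = 10000, emb_dim: int = 128, hidden: int = 128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.bigru = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
            self.attn = nn.Linear(2 * hidden, 1)

        def forward(self, word_ids: torch.Tensor) -> torch.Tensor:  # (batch, words)
            h, _ = self.bigru(self.embed(word_ids))   # H_s: (batch, words, 2*hidden)
            a = F.softmax(self.attn(h), dim=1)        # word-level weight coefficients a_s
            return (a * h).sum(dim=1)                 # sentence vector S_i = H_s * a_s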
Step S502, calculating a text vector of the target text data through a sentence coding layer according to the sentence vector.
It should be noted that the sentence encoding layer includes a sentence vector layer, a sentence encoder and a sentence attention layer.
After each sentence vector $S_i$ is extracted, the hidden layer representation vector of each sentence is calculated through the bidirectional GRU unit, a sentence-level weight coefficient matrix is obtained according to the overall importance of each sentence in the document and its semantic contribution to the document, and the vector of the whole document is calculated according to this weight coefficient matrix.
In some embodiments, the above process of calculating a text vector of the target text data through the sentence coding layer according to the sentence vector may include: calculating a forward sentence vector and a backward sentence vector of each sentence in the target text data through a bidirectional gating circulation unit according to the sentence vectors; respectively splicing the forward sentence vector and the backward sentence vector of each sentence to obtain a hidden layer expression vector of the target text data; calculating a sentence-level weight coefficient matrix through an attention unit according to a hidden layer representation vector of target text data; and calculating a text vector of the target text data according to the weight coefficient matrix of the sentence level.
Specifically, the forward sentence vector is $h_{fwd} = \mathrm{GRU}_{fwd}(S_i, h_{t-1})$ and the backward sentence vector is $h_{bwd} = \mathrm{GRU}_{bwd}(S_i, h_{t+1})$; the hidden representation $h_j$ of each sentence is the splice of the forward and backward vectors. The hidden layer representation vector of the whole text is $H_d = (h_1, h_2, h_3, \dots, h_n)$. According to $H_d$ and $a_d$, the text vector $S_d$ of the whole text is calculated as $S_d = H_d \cdot a_d$, where $a_d$ is the sentence-level weight coefficient matrix.
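Continuing the sketch above (same imports), the sentence encoding layer can be written analogously; the input dimension matches the 2*hidden output of the word encoder and is an assumed value.

    class SentenceEncoder(nn.Module):
        # Sentence vectors in + bidirectional GRU sentence encoder + sentence attention layer.
        def __init__(self, sent_dim: int = 256, hidden: int = 128):
            super().__init__()
            self.bigru = nn.GRU(sent_dim, hidden, bidirectional=True, batch_first=True)
            self.attn = nn.Linear(2 * hidden, 1)

        def forward(self, sent_vecs: torch.Tensor):   # (batch, sentences, sent_dim)
            h, _ = self.bigru(sent_vecs)              # H_d: (batch, sentences, 2*hidden)
            a = F.softmax(self.attn(h), dim=1)        # sentence-level weights a_d
            return (a * h).sum(dim=1), h              # document vector S_d and per-sentence states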
And S503, extracting abstract data of the target text data through the classification layer according to the text vector.
Specifically, a logistic classification layer is adopted to generate a binary label for each sentence to judge whether the sentence belongs to the final abstract text. The classification criterion depends on a series of abstract features, including the content richness of the sentence, its importance to the document, its novelty with respect to the accumulated abstract text, its position vector, and the like.
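To round out the sketch, the classification layer can be illustrated as a logistic layer that scores each sentence from its hidden state and the document vector; the full feature set mentioned above (content richness, novelty, position, etc.) is not reproduced here, so the inputs used are an assumption.

    class SentenceClassifier(nn.Module):
        def __init__(self, hidden: int = 256):
            super().__init__()
            self.score = nn.Linear(2 * hidden, 1)  # sentence state concatenated with document vector

        def forward(self, sent_hidden: torch.Tensor, doc_vec: torch.Tensor) -> torch.Tensor:
            # sent_hidden: (batch, sentences, hidden); doc_vec: (batch, hidden)
            doc = doc_vec.unsqueeze(1).expand_as(sent_hidden)
            logits = self.score(torch.cat([sent_hidden, doc], dim=-1)).squeeze(-1)
            return torch.sigmoid(logits)  # probability that each sentence belongs to the abstract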
It can be seen that the improved HSSAS model provided by the embodiment of the present application is used for abstract extraction, so that the calculation speed and the abstract extraction effect can be improved.
After the translation result and the abstract result are obtained, the obtained results may be output. The output may be both the target text data and the abstract data, that is, the target text data obtained by machine translation and the abstract data obtained by abstract extraction may be output simultaneously; alternatively, only the abstract data may be output.
Therefore, the core content is extracted after the source text is translated: the main information of the text is extracted, reading efficiency is improved, reading time is reduced, the user can quickly grasp the key information of texts in other languages, and the reading experience of the user is improved. In addition, the translation effect and the abstract effect are improved through the improved machine translation model and the improved text abstract extraction network model.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 7 shows a structural block diagram of the intelligent translation apparatus provided in the embodiment of the present application, corresponding to the intelligent translation method described in the above embodiment, and only the parts related to the embodiment of the present application are shown for convenience of description.
Referring to fig. 7, the apparatus includes:
an obtaining module 71, configured to obtain text data to be translated;
a translation module 72, configured to translate the text data to be translated into target text data; and
an abstract extraction module 73, configured to input the target text data into the text abstract extraction network model and obtain abstract data output by the text abstract extraction network model, wherein the abstract data is abstract information of the target text data.
In one possible implementation, the text abstract extraction network model is an HSSAS model including a bidirectional gated recurrent unit and an attention unit, the HSSAS model includes a word encoding layer, a sentence encoding layer and a classification layer, the word encoding layer includes a word vector layer, a word encoder and a word attention layer, and the sentence encoding layer includes a sentence vector layer, a sentence encoder and a sentence attention layer;
the abstract extraction module is specifically configured to:
calculating a sentence vector of each sentence in the target text data through the word encoding layer;
calculating a text vector of the target text data through a sentence coding layer according to the sentence vector;
and extracting abstract data of the target text data through the classification layer according to the text vector.
In a possible implementation manner, the abstract extraction module is specifically configured to:
calculating a forward word vector and a backward word vector of each word in the target text data through the bidirectional gated recurrent unit;
respectively splicing the forward word vectors and the backward word vectors of all words to obtain a hidden layer representation vector of each sentence;
splicing the hidden layer representation vectors of each sentence to obtain the hidden layer representation vectors of the target text data;
calculating a weight coefficient matrix of a word level through an attention unit according to a hidden layer representation vector of target text data;
and calculating a sentence vector of each sentence according to the weight coefficient matrix of the word level.
In a possible implementation manner, the abstract extraction module is specifically configured to:
calculating a forward sentence vector and a backward sentence vector of each sentence in the target text data through the bidirectional gated recurrent unit according to the sentence vectors;
respectively splicing the forward sentence vector and the backward sentence vector of each sentence to obtain a hidden layer expression vector of the target text data;
calculating a sentence-level weight coefficient matrix through an attention unit according to a hidden layer representation vector of target text data;
and calculating a text vector of the target text data according to the weight coefficient matrix of the sentence level.
In a possible implementation manner, the translation module is specifically configured to:
and inputting the text data to be translated into the translation model to obtain target text data output by the translation model, wherein the target text data is a translation result of the text data to be translated.
In one possible implementation manner, the translation model is a Transformer model comprising an adaptive attention unit, the Transformer model comprises an embedding layer, an encoding layer, a decoding layer and an output layer, an encoder of the encoding layer comprises the adaptive attention unit and a feedforward neural network, and a decoder of the decoding layer comprises the adaptive attention unit, a Masked multi-head attention unit and a feedforward neural network;
the translation module is specifically configured to:
calculating an embedding vector of each sentence in the text data to be translated through an embedding layer;
coding each sentence through a coding layer according to the embedded vector to obtain a coding result;
decoding the coding result through a decoding layer to obtain a decoding result;
and outputting the target text data through the output layer according to the decoding result.
In a possible implementation manner, the apparatus may further include a validity detecting module, which is specifically configured to:
detecting whether the text data to be translated is legal or not;
if the text data to be translated is legal, the step of translating the text data to be translated into target text data is carried out;
and if the text data to be translated is illegal, outputting alarm information.
Further, the validity detecting module is specifically configured to:
counting the number of illegal characters in the text data to be translated and the length of the text data to be translated;
if the number of illegal characters is larger than a preset number threshold value and/or the length is smaller than or equal to a preset threshold value, judging that the text data to be translated is illegal;
and if the number of the illegal characters is less than or equal to the preset number threshold and the length is greater than the preset threshold, judging that the text data to be translated is legal.
It should be noted that the above-mentioned intelligent translation apparatus corresponds to the above-mentioned intelligent translation method one to one; for the related introduction, reference may be made to the corresponding content above, which is not repeated here.
The intelligent translation apparatus has the function of implementing the above intelligent translation method. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function, and the modules may be software and/or hardware.
It should be noted that, for the information interaction, execution process and other contents between the above-mentioned devices and modules, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and reference may be made to the part of the embodiment of the method specifically, and details are not described here.
Fig. 8 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 8, the terminal device 8 of this embodiment includes: at least one processor 80, a memory 81, and a computer program 82 stored in the memory 81 and executable on the at least one processor 80, the processor 80 implementing the steps in any of the various method embodiments described above when executing the computer program 82.
The terminal device 8 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of the terminal device 8 and does not constitute a limitation of the terminal device 8, which may include more or fewer components than those shown, or combine some components, or include different components, such as an input-output device, a network access device, and the like.
The processor 80 may be a Central Processing Unit (CPU), or may be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 81 may in some embodiments be an internal storage unit of the terminal device 8, such as a hard disk or a memory of the terminal device 8. In other embodiments, the memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the terminal device 8. Further, the memory 81 may also include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used for storing an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as the program code of the computer program. The memory 81 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps that can be implemented in the above method embodiments.
The embodiments of the present application provide a computer program product, which, when running on a terminal device, enables the terminal device to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to a photographing apparatus/terminal apparatus, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk or an optical disk. In certain jurisdictions, computer-readable media may not include an electrical carrier signal or a telecommunications signal, in accordance with legislation and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus, device and method may be implemented in other ways. For example, the above-described apparatus and device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.