Detailed Description
The terms "first," "second," and the like in the presently disclosed embodiments are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature.
Some technical terms related to the embodiments of the present disclosure will be first described.
Speech recognition refers to converting speech into its corresponding text. Generally, a speech recognition model can be trained based on a sample corpus, and the speech to be recognized is then recognized by the speech recognition model to obtain the text corresponding to that speech.
However, in some scenarios the speech to be recognized may contain terms of art, and a conventional speech recognition model often produces erroneous results when recognizing such speech. Taking a software-development conference scenario as an example, the technical terms may be "debug", "admin", etc., and the speech to be recognized may be "we debug down this system". When the conventional speech recognition model recognizes the speech containing "debug", it may decode the speech of "debug" into "eighth", yielding an erroneous recognition result and thereby reducing the accuracy of speech recognition.
In view of the above, an embodiment of the present disclosure provides a speech recognition method, which includes: obtaining speech to be recognized; determining a first transcription text of the speech to be recognized according to the speech to be recognized; and performing semantic repair on the first transcription text when a keyword in the first transcription text hits an error keyword set, where the error keyword set includes a plurality of keywords corresponding to the same speech.
In this method, after the first transcription text of the speech to be recognized is obtained, semantic repair is performed on the first transcription text using the error keyword set, which improves the accuracy of the repaired transcription text. Further, during decoding, the output probabilities of the candidate keywords are intervened on using a preset keyword set, so that the output probability of the target keyword among the candidate keywords is increased, and the transcription text of the speech to be recognized is then obtained according to the output probabilities of the candidate keywords. Thus, when the speech to be recognized contains a term of art, the accuracy of the transcribed text obtained by recognizing that speech can be improved.
The speech recognition method provided by the embodiments of the present disclosure can be applied to different applications (APPs). For example, it can be applied to video applications, where it can recognize the speech included in a video and then add subtitles to the video; it can also be applied to conference applications, where it can recognize speech during a conference and then automatically generate conference records (texts corresponding to the speech during the conference). The embodiments of the present disclosure do not particularly limit the application scenario of the speech recognition method; the above is merely an exemplary description.
It should be noted that, when the speech recognition method is applied to the above application, the speech recognition method is specifically implemented in a form of a computer program. In some embodiments, the computer program may be stand-alone, for example, a stand-alone application with corresponding functionality. In other embodiments, the computer program may be a functional module or plug-in, etc., that is attached to and runs in an existing application.
The speech recognition method provided by the embodiments of the present disclosure can be executed independently by a terminal, independently by a server, or cooperatively by a terminal and a server. When the method is performed solely by a terminal, for example a terminal running a speech recognition application, this indicates that the application can run offline. For ease of understanding, the following description takes the case where the method is performed cooperatively by a terminal and a server as an example.
In order to make the technical solution of the present disclosure clearer and easier to understand, the architecture of the speech recognition system provided by the embodiments of the present disclosure is described below with reference to the accompanying drawings.
Referring to the system architecture diagram of the speech recognition system 100 shown in fig. 1, the speech recognition system 100 includes a terminal 11 and a server 12, connected through a network. The terminal 11 includes, but is not limited to, a smartphone, a tablet computer, a notebook computer, a personal digital assistant (PDA), a smart wearable device, and the like. The server 12 may be a cloud server, such as a central server in a central cloud computing cluster or an edge server in an edge cloud computing cluster. Of course, the server 12 may also be a server in a local data center, a local data center being a data center directly controlled by the user.
In some examples, the terminal 11 is configured to send the voice to be recognized to the server 12, and after obtaining the voice to be recognized, the server 12 is configured to recognize the voice to be recognized and determine a first transcription text of the voice to be recognized. In the process of recognizing the voice to be recognized, the server 12 intervenes on a plurality of candidate keywords corresponding to the voice to be recognized by using a preset keyword set, so as to improve the output probability of the target keyword in the plurality of candidate keywords, wherein the target keyword can be a keyword of a professional term, such as "debug", and thus, the accuracy of recognizing the voice containing the professional term is improved.
Further, after the server 12 obtains the first transcribed text, when a keyword in the first transcribed text hits the error keyword set, the server may further determine how reasonable the first transcribed text is, where the error keyword set may include a plurality of keywords corresponding to the same speech. For example, the plurality of keywords corresponding to the same speech may be "eighth" and "debug", or "break" and "gap". Taking the first transcribed text "we eighth down this system" as an example, it includes the keyword "eighth" from the error keyword set; the server 12 may replace "eighth" with "debug" to obtain the second transcribed text "we debug down this system", and then determine the reasonableness of the first and second transcribed texts. The second transcribed text is more reasonable than the first, so the server 12 replaces the first transcribed text with the second, thereby further improving the accuracy of recognizing speech containing technical terms.
In order to make the technical solution of the present disclosure clearer and easier to understand, the voice recognition method provided by the embodiments of the present disclosure is described below in terms of a terminal and a server. Referring to fig. 2, the flowchart of a voice recognition method according to an embodiment of the disclosure includes:
S201, the terminal collects the voice to be recognized.
In some examples, the terminal may collect the speech to be recognized based on a microphone. To facilitate understanding, taking a conference scenario as an example, a user may participate in a conference (e.g., a video conference) through a terminal, and during the conference, speak speech containing terms of art, and the terminal may collect speech of the user while speaking. In other examples, the terminal may further record the conference based on the operation triggered by the user, so as to obtain the video file. The video file includes sound information such as speech during a conference.
It should be noted that, the voice to be recognized collected by the terminal is not limited to the above example, and those skilled in the art may determine the voice to be recognized according to actual needs.
S202, the terminal sends the voice to be recognized to the server.
The server can receive the voice to be recognized sent by the terminal so as to recognize the voice to be recognized.
S203, the server determines a plurality of candidate keywords corresponding to the voice to be recognized according to the voice to be recognized.
The speech to be recognized may correspond to a sentence or to a single word. Taking speech corresponding to a sentence as an example, the speech to be recognized contains a technical term, such as the speech "we debug down this system", and the candidate keywords for the speech to be recognized may be "we", "eighth", "debug", "down", "this system". During decoding, the server may determine the text corresponding to the speech to be recognized based on the output probability of each candidate keyword.
In some examples, the server trains in advance a speech recognition model capable of recognizing speech containing technical terms. Because different service scenarios correspond to different technical terms, a corpus containing the technical terms of a preset service scenario may be determined according to actual needs, and model training is then performed using that corpus to obtain the speech recognition model. The corpus containing the technical terms includes preset texts and the speech of the preset texts, and the preset texts include keywords corresponding to the preset service scenario.
Taking a software-development conference scenario as the preset service scenario, the server can acquire the preset text based on user feedback, or automatically generate it from preset keywords; these two manners are described below respectively.
In some examples, the server may generate a recognition result (e.g., a transcribed text) of the speech to be recognized and send it to the terminal; the terminal presents the recognition result to the user and may receive the user's verification result for it. For example, the verification result may include whether the recognition result is accurate and a modified recognition result. Taking the speech containing the technical term "we debug down this system" as an example, when the text of the recognition result is "we eighth down this system", the user may mark the recognition result as erroneous and modify its text to "we debug down this system"; the modified text is then used as the preset text. Further, the terminal may record the confused keywords "eighth" and "debug" to obtain the error keyword set.
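As a sketch of how such an error keyword set might be accumulated from user corrections, the function below aligns the recognized text with the user's modified text and records one-for-one word substitutions. The function name and the use of `difflib` are illustrative assumptions, not part of the disclosed method.

```python
import difflib

def extract_error_pairs(recognized: str, corrected: str) -> set:
    """Pair up words that differ between a recognized transcript and the
    user's corrected transcript; each pair is a candidate entry for the
    error keyword set."""
    rec_words = recognized.split()
    cor_words = corrected.split()
    pairs = set()
    matcher = difflib.SequenceMatcher(a=rec_words, b=cor_words)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        # One-for-one substitutions are likely homophone confusions.
        if tag == "replace" and (i2 - i1) == (j2 - j1):
            for wrong, right in zip(rec_words[i1:i2], cor_words[j1:j2]):
                pairs.add((wrong, right))
    return pairs

error_pairs = extract_error_pairs(
    "we eighth down this system",
    "we debug down this system",
)
# error_pairs contains the confusion pair ("eighth", "debug")
```

A real system would accumulate such pairs across many user corrections before treating them as stable entries of the error keyword set.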
In other examples, the server may also send a historical recognition result (e.g., a historical transcription text) of the speech to be recognized to the terminal, which may present it to the user, and the terminal may receive the user's verification result for the historical transcription text. For example, the user may modify the historical transcription text or mark whether it is accurate, and the modified historical transcription text is then used as the preset text.
In other examples, the server may also automatically generate the preset text based on preset keywords; for example, the preset keywords may be technical terms in the preset service scenario. This further increases the amount of preset text and, in turn, the size of the corpus.
The server may receive a voice of a preset text transmitted from the terminal. In some examples, the server may send the preset text to the terminal, the terminal presents the preset text to the user and prompts the user to speak the voice of the preset text, thereby implementing recording of the voice of the preset text, and then sends the recorded voice of the preset text to the server.
Fig. 3 is a schematic diagram of a terminal recording interface according to an embodiment of the present disclosure. The recording interface includes a keyword list 310, a text presentation area 320, a recording control 330, and an editing control 340. The user may click on the keywords in the keyword list 310, the terminal displays a text (for example, a preset text) including the keywords clicked by the user in the text display area 320 based on the click operation of the user on the keywords, then the user may speak the voice of the text displayed in the text display area 320 after clicking the recording control 330, and the terminal starts recording based on the click operation of the user on the recording control 330 and then sends the voice of the preset text after the recording is completed to the server.
Further, the user can edit the text displayed in the text display area 320 by clicking the edit control 340, then the terminal performs subsequent recording processing based on the edited text, and sends the edited text and the voice of the text to the server, so that the corpus under the preset service scene is more abundant.
Then, the server can use the corpus under the preset service scene to carry out additional training on the traditional voice recognition model so as to obtain a new voice recognition model. Based on the above, the server may input the voice to be recognized into the new voice recognition model, so as to obtain a plurality of candidate keywords corresponding to the voice to be recognized.
S204, when a target keyword among the plurality of candidate keywords hits the preset keyword set, the server increases the output probability of the target keyword.
The preset keyword set includes keywords corresponding to the preset service scenario. Continuing the above example, these keywords may be "debug", "admin", "gap", etc. The target keyword "debug" among the plurality of candidate keywords hits the preset keyword set, so the server may increase its output probability, as shown in Table 1 below:
Table 1:
| Candidate keyword | eighth | debug |
| Original output probability | 0.6 | 0.4 |
| Output probability after intervention | 0.6 | 0.7 |
As can be seen from Table 1, intervening on the plurality of candidate keywords using the preset keyword set, for example intervening on the target keyword, increases the output probability of the target keyword, so that during decoding the server can decode speech containing a technical term into text that includes the term. For the keywords shown in Table 1, the server can decode the speech to be recognized as "we debug down this system" rather than "we eighth down this system", thereby improving the accuracy of recognizing speech that contains technical terms.
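The intervention described above can be sketched as a simple re-scoring step over the candidates. The additive boost and its value of 0.3 are assumptions chosen to reproduce the figures in Table 1; the disclosure does not specify how the probabilities are raised.

```python
def intervene(candidates: dict, preset_keywords: set, boost: float = 0.3) -> dict:
    """Raise the output probability of any candidate that hits the preset
    keyword set; scores are clipped to 1.0. The boost value is an assumed
    tuning parameter, not specified by the method itself."""
    return {
        word: min(score + boost, 1.0) if word in preset_keywords else score
        for word, score in candidates.items()
    }

candidates = {"eighth": 0.6, "debug": 0.4}           # original decoder scores (Table 1)
adjusted = intervene(candidates, {"debug", "admin", "gap"})
best = max(adjusted, key=adjusted.get)               # "debug" is now selected
```

With the boosted score (0.4 + 0.3 = 0.7) exceeding the score of "eighth" (0.6), the decoder picks "debug", matching the outcome described for Table 1.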
S205, the server obtains a first transcription text of the voice to be recognized based on the output probabilities of the candidate keywords.
Among the candidate keywords shown in Table 1, the server outputs the one with the larger output probability during decoding, obtaining the keyword "debug"; the other candidate keywords of the speech to be recognized are processed similarly, so that the first transcription text of the speech to be recognized is obtained.
In the embodiment of the present disclosure, the server intervenes on the output probabilities of the plurality of candidate keywords using the preset keyword set, and then obtains the transcription text of the speech to be recognized based on those output probabilities. This improves the accuracy of recognizing speech containing technical terms and meets the service requirements.
S206, when a keyword in the first transcription text hits the error keyword set, the server performs semantic repair on the first transcription text.
The error keyword set includes a plurality of keywords corresponding to the same speech. As described above, the keyword "eighth" and the keyword "debug" correspond to the same speech, so the error keyword set may include "eighth" and "debug"; as another example, the keyword "gap" and the keyword "ramp" correspond to the same speech, so the error keyword set may include "gap" and "ramp".
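One possible in-memory representation of the error keyword set groups keywords by the speech they share. The dictionary layout, the pronunciation-style keys, and the lookup helper below are all illustrative assumptions.

```python
# Each entry groups keywords that decode from the same (or nearly the same)
# speech; the pronunciation keys here are illustrative placeholders.
ERROR_KEYWORD_SET = {
    "pron_debug": ["eighth", "debug"],
    "pron_gap":   ["gap", "ramp"],
}

def confusable_with(keyword: str) -> list:
    """Return every keyword sharing a pronunciation group with `keyword`,
    i.e. the alternatives to try during semantic repair."""
    for group in ERROR_KEYWORD_SET.values():
        if keyword in group:
            return [w for w in group if w != keyword]
    return []
```

Grouping by shared speech (rather than storing flat pairs) lets one mis-recognized keyword map to several possible corrections.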
It should be noted that, the present disclosure is not particularly limited to the manner of obtaining the error keyword set, and in some examples, the error keyword set may be obtained through a preset manner.
To further improve the accuracy of recognizing speech containing technical terms, the server may perform semantic repair on the first transcription text after obtaining it. Specifically, semantic repair is performed when a keyword in the first transcription text hits the error keyword set. For example, the first transcription text may be "we eighth down this system" and the error keyword set may include "eighth" and "debug"; the first transcription text includes the keyword "eighth" from the error keyword set, so the server performs semantic repair on it.
In some examples, the server may replace a keyword in the first transcribed text with a keyword from the error keyword set to obtain a second transcribed text, and then score the two texts to obtain a first accuracy score for the first transcribed text and a second accuracy score for the second. The scoring is based on the accuracy of the transcribed text: the higher the accuracy, the more reasonable the text; the lower the accuracy, the less reasonable. For example, the first transcribed text is "we eighth down this system"; after replacing "eighth" with the keyword "debug" from the error keyword set, the second transcribed text "we debug down this system" is obtained. The server may then score the first and second transcribed texts with a language model to obtain the first accuracy score and the second accuracy score.
The server then performs semantic repair on the first transcription text based on the first accuracy score and the second accuracy score. For example, when the second accuracy score is greater than the first accuracy score, indicating that the second transcribed text is more reasonable than the first, the first transcribed text is repaired to, that is replaced with, the second transcribed text; when the first accuracy score is greater than or equal to the second accuracy score, indicating that the first transcribed text is more reasonable, the first transcribed text is still output.
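The repair-and-score procedure of S206 can be sketched as follows. The `score_fn` parameter stands in for the language model, and the toy scorer is an assumption used only to make the example self-contained.

```python
def semantic_repair(first_text: str, error_groups: list, score_fn) -> str:
    """Try replacing each keyword that hits an error group with its
    alternatives; keep the candidate text the scorer rates highest."""
    best_text, best_score = first_text, score_fn(first_text)
    words = first_text.split()
    for i, word in enumerate(words):
        for group in error_groups:
            if word not in group:
                continue
            for alt in group:
                if alt == word:
                    continue
                candidate = " ".join(words[:i] + [alt] + words[i + 1:])
                cand_score = score_fn(candidate)
                # Replace only when the alternative scores strictly higher,
                # mirroring "second accuracy score greater than the first".
                if cand_score > best_score:
                    best_text, best_score = candidate, cand_score
    return best_text

# Toy scorer standing in for the language model; a real system would use
# e.g. a language-model probability rather than a keyword check.
def toy_score(text: str) -> float:
    return 1.0 if "debug" in text else 0.5

repaired = semantic_repair("we eighth down this system",
                           [["eighth", "debug"]], toy_score)
```

Because the replacement happens only when the candidate strictly outscores the original, a first transcription that the language model already prefers is left untouched, as required by S206.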
In the embodiment of the present disclosure, after obtaining the first transcription text, the server further performs semantic repair on it. This further ensures the accuracy of recognizing speech containing technical terms and avoids the case where a correct recognition result is turned into an erroneous one solely because of the intervention by the preset keyword set.
S207, the server sends the semantically repaired first transcription text to the terminal.
The semantically repaired first transcription text is whichever of the first and second transcribed texts has the higher score; that is, either of the two may ultimately be output.
In some examples, after determining the semantically repaired first transcription text, the server may send it to the terminal, and the terminal may then present it. In other examples, the server may also generate a control instruction based on the semantically repaired first transcription text, where the control instruction is used to control a device corresponding to the preset service scenario. For example, the server may issue the control instruction to a controlled device (e.g., a projector or a smart lamp) to control that device. In some examples, the operation corresponding to the first transcription text may be turning on the projector; the server may generate a control instruction for turning on the projector based on the first transcription text and send it to the projector, so that the projector starts based on the instruction.
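A minimal sketch of mapping the repaired transcription to a control instruction follows; the phrase table, device identifiers, and action names are all hypothetical, since the disclosure does not specify the instruction format.

```python
# Hypothetical mapping from recognized phrases to (device, action) commands.
COMMANDS = {
    "turn on the projector": ("projector", "power_on"),
    "dim the lights":        ("smart_lamp", "dim"),
}

def to_control_instruction(transcribed_text: str):
    """Match the repaired transcription against known phrases and emit a
    (device, action) instruction, or None when no phrase matches."""
    text = transcribed_text.lower()
    for phrase, instruction in COMMANDS.items():
        if phrase in text:
            return instruction
    return None

instr = to_control_instruction("Please turn on the projector for the demo")
```

A production system would likely use intent classification rather than substring matching, but the flow of transcription in, device instruction out, is the same.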
In other embodiments, the server may also send the first transcribed text directly to the terminal, or generate a control instruction based on the first transcribed text, i.e. the server does not semantically repair the first transcribed text.
Based on the above description, the embodiments of the present disclosure provide a speech recognition method that obtains a corpus containing the technical terms of a preset service scenario through directed audio collection, and then performs additional training on a conventional speech recognition model with that corpus to obtain a new speech recognition model. The method also performs semantic repair on the recognition result using the error keyword set, realizing a post-recognition error-correction fallback. This further improves the accuracy of recognizing speech containing technical terms and meets the service requirements.
Fig. 4 is a schematic diagram of a voice recognition apparatus according to an exemplary disclosed embodiment, and as shown in fig. 4, the voice recognition apparatus 400 includes:
An acquisition module 401, configured to acquire a voice to be recognized;
A text transcription module 402, configured to determine a first transcription text of the speech to be recognized according to the speech to be recognized;
The semantic restoration module 403 is configured to perform semantic restoration on the first transcription text when the keywords in the first transcription text hit in an error keyword set, where the error keyword set includes a plurality of keywords corresponding to the same speech.
Optionally, the semantic restoration module 403 is specifically configured to replace a keyword in the first transcribed text with a keyword in the error keyword set to obtain a second transcribed text, evaluate a first accuracy score of the first transcribed text and a second accuracy score of the second transcribed text, and replace the first transcribed text with the second transcribed text when the second accuracy score is greater than the first accuracy score.
Optionally, the text transcription module 402 is specifically configured to determine a plurality of candidate keywords corresponding to the speech to be recognized according to the speech to be recognized, increase an output probability of a target keyword of the plurality of candidate keywords when the target keyword hits in a preset keyword set, where the preset keyword set includes keywords corresponding to a preset service scene, and obtain a first transcription text of the speech to be recognized based on the output probabilities of the plurality of candidate keywords.
Optionally, the text transcription module 402 is specifically configured to input the speech to be recognized into a speech recognition model to obtain a plurality of candidate keywords corresponding to the speech to be recognized, where the speech recognition model is obtained based on corpus training corresponding to a preset service scene.
Optionally, the corpus corresponding to the preset service scene includes a preset text and a voice of the preset text, and the preset text includes a keyword corresponding to the preset service scene.
Optionally, the preset text is obtained through user feedback or is automatically generated through the preset keywords.
Optionally, the preset text is specifically obtained through verification operation of the user on the history transcription text.
Optionally, the device further includes an instruction generating module, configured to generate a control instruction according to the first transcription text, where the control instruction is configured to control a device corresponding to the preset service scenario to execute an operation corresponding to the first transcription text.
The functions of the above modules are described in detail in the method steps in the above embodiment, and are not described herein.
Referring now to fig. 5, a schematic diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown, which may be a server 12, the server 12 being configured to implement the functionality corresponding to the speech recognition device 400 shown in fig. 4. The electronic device shown in fig. 5 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a liquid crystal display (LCD), speakers, vibrators, etc.; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and communication devices 509. The communication devices 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 shows an electronic device 500 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided; more or fewer means may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 501.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, fiber optic cable, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some embodiments, the terminals, servers, etc. may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be included in the electronic device or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire speech to be recognized; determine a first transcription text of the speech to be recognized according to the speech to be recognized; and perform semantic repair on the first transcription text when a keyword in the first transcription text hits an error keyword set, where the error keyword set includes a plurality of keywords corresponding to the same speech.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. In some cases, the name of a module does not constitute a limitation of the module itself; for example, the first acquisition module may also be described as "a module that acquires at least two internet protocol addresses".
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems-on-a-Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, example 1 provides a speech recognition method, including: obtaining speech to be recognized; determining a first transcription text of the speech to be recognized according to the speech to be recognized; and performing semantic restoration on the first transcription text when a keyword in the first transcription text hits in an error keyword set, wherein the error keyword set includes a plurality of keywords corresponding to the same speech.
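As a minimal illustrative sketch (not the claimed implementation), the flow of example 1 might be expressed as follows in Python. The helper functions `transcribe` and `restore` and the contents of `ERROR_KEYWORD_SET` are hypothetical; the set groups keywords that correspond to the same speech, such as "eighth" and "debug" from the scenario described above.

```python
# Hypothetical error keyword set: each entry maps a keyword to other
# keywords corresponding to the same speech (e.g., near-homophones).
ERROR_KEYWORD_SET = {
    "eighth": ["debug"],
    "debug": ["eighth"],
}

def recognize(transcribe, restore, audio):
    """Sketch of the example-1 flow.

    `transcribe` and `restore` are injected hypothetical helpers:
    `transcribe` yields the first transcription text, and `restore`
    performs semantic restoration on it.
    """
    first_text = transcribe(audio)
    words = first_text.split()
    # Perform semantic restoration only when a keyword in the first
    # transcription text hits in the error keyword set.
    if any(w in ERROR_KEYWORD_SET for w in words):
        return restore(first_text)
    return first_text
```

The hit test gates the (presumably more expensive) restoration step, so transcriptions containing no error-prone keywords pass through unchanged.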
According to one or more embodiments of the present disclosure, example 2 provides the method of example 1, wherein performing semantic restoration on the first transcription text includes:
replacing the keyword in the first transcription text with a keyword in the error keyword set to obtain a second transcription text;
evaluating a first accuracy score of the first transcription text and a second accuracy score of the second transcription text; and
replacing the first transcription text with the second transcription text when the second accuracy score is greater than the first accuracy score.
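The restoration steps of example 2 can be sketched as follows; this is an illustrative assumption, not the claimed implementation. The `score` argument stands in for whatever accuracy-scoring mechanism is used (for instance, a language-model plausibility score, higher being better), and the error keyword set maps each keyword to its same-speech alternatives.

```python
def restore_semantics(first_text, error_keyword_set, score):
    """Sketch of example 2: substitute same-speech alternatives for
    keywords that appear in the error keyword set, and keep a candidate
    only when its accuracy score beats the current best.

    `score` is a hypothetical scoring function (higher = more accurate).
    """
    best_text, best_score = first_text, score(first_text)
    for word in first_text.split():
        for alt in error_keyword_set.get(word, []):
            # Second transcription text: the keyword replaced by an
            # alternative keyword from the error keyword set.
            second_text = first_text.replace(word, alt)
            s = score(second_text)
            if s > best_score:  # example-2 comparison of the two scores
                best_text, best_score = second_text, s
    return best_text
```

Because replacement happens only when the second score strictly exceeds the first, the original transcription is retained whenever no alternative improves the score.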
According to one or more embodiments of the present disclosure, example 3 provides the method of example 1, wherein determining the first transcription text of the speech to be recognized according to the speech to be recognized includes:
determining a plurality of candidate keywords corresponding to the speech to be recognized according to the speech to be recognized;
increasing an output probability of a target keyword in the plurality of candidate keywords when the target keyword hits in a preset keyword set, wherein the preset keyword set includes keywords corresponding to a preset business scenario; and
obtaining the first transcription text of the speech to be recognized based on the output probabilities of the plurality of candidate keywords.
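The probability-boosting idea of example 3 resembles hot-word boosting in conventional decoders and might be sketched as below. The representation of candidates (one probability map per decoding step) and the multiplicative boost factor are illustrative assumptions, not details from the disclosure.

```python
def pick_keywords(candidates, preset_keywords, boost=2.0):
    """Sketch of example 3.

    `candidates` is a list of per-step maps {keyword: output_probability}.
    Probabilities of keywords found in the preset (business-scenario)
    keyword set are increased before the best candidate at each step is
    chosen; the boost factor is a hypothetical choice.
    """
    text = []
    for step in candidates:
        # Increase the output probability of target keywords that hit
        # in the preset keyword set.
        adjusted = {
            kw: p * (boost if kw in preset_keywords else 1.0)
            for kw, p in step.items()
        }
        # Obtain the transcription from the adjusted output probabilities.
        text.append(max(adjusted, key=adjusted.get))
    return " ".join(text)
```

With the boost, a scenario term such as "debug" can outscore an acoustically likelier but contextually wrong candidate such as "eighth".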
According to one or more embodiments of the present disclosure, example 4 provides the method of example 3, wherein determining the plurality of candidate keywords corresponding to the speech to be recognized according to the speech to be recognized includes:
inputting the speech to be recognized into a speech recognition model to obtain the plurality of candidate keywords corresponding to the speech to be recognized, wherein the speech recognition model is obtained by training on a corpus corresponding to the preset business scenario.
According to one or more embodiments of the present disclosure, example 5 provides the method of example 4, wherein the corpus corresponding to the preset business scenario includes preset text and speech corresponding to the preset text, and the preset text includes keywords corresponding to the preset business scenario.
According to one or more embodiments of the present disclosure, example 6 provides the method of example 5, wherein the preset text is obtained through user feedback or automatically generated from the preset keywords.
According to one or more embodiments of the present disclosure, example 7 provides the method of example 6, wherein the preset text is specifically obtained by a user reviewing and correcting historical transcription texts.
Example 8 provides the method of example 7, according to one or more embodiments of the present disclosure, the method further comprising:
generating a control instruction according to the first transcription text, wherein the control instruction is used for controlling a device corresponding to the preset business scenario to perform an operation corresponding to the first transcription text.
The foregoing description is merely of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims. The specific manner in which the various modules of the apparatus in the above embodiments perform operations has been described in detail in connection with the method embodiments and will not be elaborated here.