METHODS, SYSTEMS, AND PROGRAMMING FOR PERFORMING SPEECH RECOGNITION
FIELD OF THE INVENTION
The present invention relates to methods, systems, and programming for performing speech recognition.
BACKGROUND OF THE INVENTION
Discrete large-vocabulary speech recognition systems have been available for use on desktop personal computers for approximately ten years by the time of the writing of this patent application. Continuous large-vocabulary speech recognition systems have been available for use on such computers for approximately five years by this time. Such speech recognition systems have proven to be of considerable worth. In fact, much of the text of the present patent application is being prepared by the use of a large-vocabulary continuous speech recognition system.
As used in this specification and the claims that follow, when we refer to a large-vocabulary speech recognition system, we mean one that has the ability to recognize a given utterance as being any one of at least two thousand different vocabulary words, depending upon which of those words has corresponding phonetic models that most closely match the given spoken word.
As indicated by FIG. 1, large-vocabulary speech recognition typically functions by having a user 100 speak into a microphone 102, which in the example of FIG. 1 is a microphone of a cellular telephone 104. The microphone transduces the variation in air pressure over time caused by the utterance of words into a corresponding electronically represented waveform, represented by an electronic signal 106. In many speech recognition systems this waveform signal is converted, by digital signal processing performed either by a computer processor or by a special digital signal processor 108, into a time domain representation. Often the time domain representation 110 is comprised of a plurality of parameter frames 112, each of which represents properties of the sound represented by the waveform 106 at each of a plurality of successive time periods, such as every one-hundredth of a second.
As indicated in FIG. 2, the time domain, or frame, representation of an utterance to be recognized is then matched against a plurality of possible sequences of phonetic models 200 corresponding to different words in a recognition system's vocabulary. Individual words 202 are each represented by a corresponding phonetic spelling 204, similar to the phonetic spellings found in most dictionaries. Each phoneme in a phonetic spelling has one or more phonetic models 200 associated with it. In many systems the models 200 are phoneme-in-context models, which model the sound of their associated phoneme when it occurs in the context of the preceding and following phoneme in a given word's phonetic spelling. The phonetic models are commonly composed of a sequence of one or more probability models, each of which represents the probability of different parameter values for each of the parameters used in the frames of the time domain representation 110 of an utterance to be recognized.

One of the major trends in personal computing in recent years has been the trend toward the increased use of smaller and often more portable computing devices.
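The frame-matching process described above can be sketched in code. This is a minimal illustration only, assuming Gaussian per-parameter probability models given as (mean, variance) pairs and a naive uniform time alignment; all function and variable names are hypothetical and are not taken from the patent.

```python
import math

# Illustrative sketch: each parameter frame is a list of parameter
# values, and each phonetic model is a sequence of per-parameter
# Gaussian probability models. The uniform alignment is an assumption.

def gaussian_log_prob(x, mean, var):
    """Log-likelihood of one parameter value under a Gaussian model."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def score_word(frames, phonetic_models):
    """Score a word by mapping frames uniformly onto its sequence of
    phonetic models and summing per-parameter log-likelihoods."""
    total = 0.0
    for i, frame in enumerate(frames):
        model = phonetic_models[i * len(phonetic_models) // len(frames)]
        for value, (mean, var) in zip(frame, model):
            total += gaussian_log_prob(value, mean, var)
    return total

# Two one-parameter "words": one modeled around 0.0, one around 5.0.
word_a = [[(0.0, 1.0)]]
word_b = [[(5.0, 1.0)]]
frames = [[0.1], [-0.2], [0.0]]  # three frames of one parameter each

# The best-scoring word is the recognizer's top choice.
best = max([("a", word_a), ("b", word_b)],
           key=lambda entry: score_word(frames, entry[1]))
```

In a real recognizer the alignment between frames and models is itself searched (for example by dynamic programming), rather than fixed as here.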
Originally most personal computing was performed upon desktop computers of the general type represented by FIG. 3. Then there was an increase in the use of even smaller personal computers in the form of laptop computers, which are not shown in the drawings because laptop computers have roughly the same type of computational interface as desktop computers. Large-vocabulary speech recognition systems have been designed for use on such systems.
Recently there has been an increase in the use of new types of computers, such as the tablet computer shown in FIG. 4, the personal digital assistant computer shown in FIG. 5, cell phones which have increased computing power, shown in FIG. 6, wrist phone computers, represented in FIG. 7, and a wearable computer which provides a user interface with a screen and eye tracking and/or audio output provided from a head-wearable device, as indicated in FIG. 8.
Because of recent increases in computing power, such new types of devices can have computational power equal to that of the first desktops on which discrete large-vocabulary recognition systems were provided and, in some cases, as much computational power as was provided on desktop computers that first ran large-vocabulary continuous speech recognition. The computational capacities of such smaller and/or more portable personal computers will only grow as time goes by.
One of the more important challenges involved in using such ever more portable computers is that of providing a user interface that makes it easier and faster to create, edit, and use speech recognition on such devices.
SUMMARY OF THE INVENTION
One aspect of the present invention relates to speech recognition using selectable recognition modes. This includes innovations such as: allowing a user to select between recognition modes with and without language context; allowing a user to select between continuous and discrete large-vocabulary speech recognition modes; allowing a user to select between at least two different alphabetic entry speech recognition modes; and allowing a user to select among the following recognition modes when creating text: a large-vocabulary mode, a letters recognizing mode, a numbers recognizing mode, and a punctuation recognizing mode.
Another aspect of the invention relates to using choice lists. This includes innovations such as providing vertically scrollable choice lists; providing horizontally scrollable choice lists; and providing choice lists on characters in an alphabetic filter used to limit recognition candidates.
Another aspect of the invention relates to enabling users to select word transformations. This includes innovations such as enabling a user to choose one from a plurality of transformations to be performed upon a recognized word so as to change it in a desired way, such as to change it from singular to plural, to give the word a gerund form, etc. It also includes innovations such as enabling a user to select to transform a selected word between an alphabetic and non-alphabetic form. It also includes innovations such as providing a user with a choice list of transformed words corresponding to a recognized word and allowing the user to select one of the transformed words as output.
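The word-transformation choice list described above can be illustrated with a minimal sketch. The simplistic English morphology rules below are assumptions for demonstration only and do not reflect the patent's actual transformation logic; the function name is hypothetical.

```python
# Sketch: produce a choice list of transformed forms for a recognized
# word (plural, gerund, capitalized). Rules are deliberately naive.

def transformations(word):
    """Return a choice list of transformed forms for a recognized word."""
    forms = []
    # Singular -> plural (very simplified English rule).
    forms.append(word + "es" if word.endswith(("s", "x", "ch", "sh"))
                 else word + "s")
    # Gerund form: drop a trailing silent "e" before adding "ing".
    base = word[:-1] if word.endswith("e") else word
    forms.append(base + "ing")
    # Capitalized form.
    forms.append(word.capitalize())
    return forms
```

A user interface of the kind described would display such a list and let the user pick one form as the output.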
Another aspect of the invention relates to speech recognition that automatically turns recognition off in various ways. This includes innovations such as a speech recognition command that turns on recognition and then automatically turns such recognition off until receiving another command to turn recognition back on. It also includes the innovation of speech recognition in which pressing a button causes recognition for a duration determined by the length of time of such a press, and in which clicking the same button causes recognition for a length of time independent of the length of such a click.
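The press-versus-click duration logic described above can be sketched as follows. The threshold and the fixed click duration are illustrative assumptions, not values taken from the patent.

```python
# Sketch of recognition duration logic: a hold longer than a threshold
# keeps recognition on only while the key is down, while a short click
# turns recognition on for a fixed period. Values are assumptions.

CLICK_THRESHOLD = 0.3   # seconds separating a click from a hold
CLICK_DURATION = 5.0    # recognition time granted by a single click

def recognition_duration(press_seconds):
    """Return how long recognition stays on after a key press."""
    if press_seconds >= CLICK_THRESHOLD:
        # Press-and-hold: recognize only for the length of the press.
        return press_seconds
    # Click: recognize for a fixed time independent of click length.
    return CLICK_DURATION
```

This lets one physical button serve both "push-to-talk" and "toggle on for a while" styles of interaction.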
Another aspect of the invention relates to the use of phone keys. It includes the innovations of using phone keys to select a word from a choice list; of using them to select a help mode that provides explanation about a subsequently pressed key; and of using them to select a list of functions currently associated with phone keys. It also includes the innovation of speech recognition having a text navigation mode in which multiple numbered phone keys concurrently have multiple different key mappings associated with them, and the pressing of such a key mapping key causes the functions associated with the numbered phone keys to change to the mapping associated with the pressed key.
Another aspect of the invention relates to speech recognition using phone key alphabetic filtering and spelling. By alphabetic filtering we mean favoring the speech recognition of words including a sequence of letters, normally an initial sequence of letters, corresponding to a sequence of letters indicated by user input. This aspect of the invention includes the innovation of using as filtering input the pressing of phone keys, where each key press is ambiguous in that it indicates that a corresponding character location in a desired word corresponds to one of a plurality of letters identified with that phone key. This aspect of the invention also includes the innovation of using as filtering input a sequence of phone key presses interpreted as an ambiguous filter. This aspect of the invention also includes the innovation of using such ambiguous and non-ambiguous phone key input for spelling text that can be used in addition to text produced by speech recognition.
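The ambiguous phone-key filtering described above can be sketched as follows: each pressed key narrows the recognizer's candidates to words whose letter at the corresponding position is any of the letters printed on that key. The key layout is the standard telephone keypad; the small word list stands in for a recognition vocabulary, and all names are illustrative.

```python
# Sketch of ambiguous (multi-letter) phone-key alphabetic filtering.

KEY_LETTERS = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def filter_candidates(words, key_presses):
    """Keep words whose initial letters match the ambiguous key presses."""
    result = []
    for word in words:
        if len(word) < len(key_presses):
            continue
        if all(word[i] in KEY_LETTERS[k]
               for i, k in enumerate(key_presses)):
            result.append(word)
    return result

vocab = ["call", "ball", "cane", "dog"]
# Keys 2-2 match words whose first two letters are each in {a, b, c}.
matches = filter_candidates(vocab, ["2", "2"])
```

In the invention such a filter would not select a word outright; it would instead bias or restrict the speech recognizer's choice list.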
Another aspect of the invention relates to speech recognition that enables a user to perform re-utterance recognition, in which speech recognition is performed upon both a second saying of a sequence of one or more words and upon an earlier saying of the same sequence, to help the speech recognition better select one or more best-scoring text sequences for the utterances.
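One simple way to realize the re-utterance idea above is to sum the per-candidate recognition scores (log probabilities) from the two sayings and rank candidates by the combined score. The candidate phrases, scores, and default penalty below are illustrative assumptions only.

```python
# Sketch: combine per-candidate log scores from two sayings of the
# same word sequence and return candidates ranked best-first.

def combine_utterance_scores(first_scores, second_scores):
    """Sum log scores per candidate across both utterances."""
    combined = {}
    for candidate in set(first_scores) | set(second_scores):
        # A candidate missing from one pass gets a large penalty.
        combined[candidate] = (first_scores.get(candidate, -100.0)
                               + second_scores.get(candidate, -100.0))
    return sorted(combined, key=combined.get, reverse=True)

# "recognize speech" narrowly loses one pass but wins combined.
first = {"recognize speech": -10.0, "wreck a nice beach": -9.5}
second = {"recognize speech": -8.0, "wreck a nice beach": -12.0}
ranked = combine_utterance_scores(first, second)
```

Because independent utterances tend to have uncorrelated errors, the combined ranking is often more reliable than either pass alone.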
Another aspect of the invention relates to the combination of speech recognition and text-to-speech (TTS) generation. This includes the innovation of having a speech recognition system that has at least one mode which automatically uses TTS to say recognized text after its recognition and uses TTS or recorded audio to say the names of recognized commands after their recognition. This aspect of the invention also includes the innovation of a large-vocabulary system that automatically repeats recognized text using TTS after each utterance. This aspect also includes the innovation of a large-vocabulary system that enables a user to move back or forward in recognized text, with one or more words at the current location after each such move being said by TTS. This aspect also includes the innovation of a large-vocabulary system that uses speech recognition to produce a choice list and provides TTS output of one or more of that list's choices.
Another aspect of the invention relates to the combination of speech recognition with handwriting and/or character recognition. This includes the innovation of selecting an output as a function of recognition of both handwritten and spoken representations of a sequence of one or more words to be recognized. It also includes the innovation of using character or handwriting recognition of one or more letters to alphabetically filter speech recognition of one or more words. It also includes the innovations of using speech recognition of one or more letter-identifying words to alphabetically filter handwriting recognition, and of using speech recognition to correct handwriting recognition of one or more words.
Another aspect of the invention relates to the combination of large-vocabulary speech recognition with audio recording and playback. It includes the innovation of a handheld device with both large-vocabulary speech recognition and audio recording in which users can switch between at least two of the following modes of recording sound input: one which records audio without corresponding speech recognition output; one that records audio with corresponding speech recognition output; and one that records the audio's speech recognition output without corresponding audio. This aspect of the invention also includes the innovation of a handheld device that has both large-vocabulary speech recognition and audio recording capability and that enables a user to select a portion of previously recorded sound and to have speech recognition performed upon it. It also includes the innovation of a large-vocabulary speech recognition system that enables a user to use large-vocabulary speech recognition to provide a text label for a portion of sound that is recorded without corresponding speech recognition output, and the innovation of a system that enables a user to search for a text label associated with portions of unrecognized recorded sound by uttering the label's words, recognizing the utterance, and searching for text containing those words. This aspect of the invention also includes the innovation of a large-vocabulary system that allows users to switch between playing back previously recorded audio and performing speech recognition with a single input, with successive audio playbacks automatically starting slightly before the end of the prior playback. This aspect of the invention also includes the innovation of a cell phone that has both large-vocabulary speech recognition and audio recording and playback capabilities.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the present invention will become more evident upon reading the following description of the preferred embodiment in conjunction with the accompanying drawings.
Figure 1 is a schematic illustration of how spoken sound can be converted into acoustic parameter frames for use by speech recognition software.
Figure 2 is a schematic illustration of how speech recognition, using phonetic spellings, can be used to recognize words represented by a sequence of parameter frames such as those shown in figure 1, and how the time alignment between phonetic models of the words can be used to time align those words against the original acoustic signal from which the parameter frames have been derived.

Figures 3 through 8 show a progression of different types of computing platforms upon which many aspects of the present invention can be used, illustrating the trend toward smaller and/or more portable computing devices.
Figure 9 illustrates a personal digital assistant, or PDA, device having a touch screen displaying a software input panel, or SIP, embodying many aspects of the present invention, that allows entry by speech recognition of text into application programs running on such a device.

Figure 10 is a highly schematic illustration of many of the hardware and software components that can be found in a PDA of the type shown in figure 9.
Figure 11 is a blowup of the screen image shown in figure 9, used to point out many of the specific elements of the speech recognition SIP shown in figure 9.

Figure 12 is similar to figure 11 except that it also illustrates a correction window produced by the speech recognition SIP and many of its graphical user interface elements.
Figures 13 through 17 provide a highly simplified pseudocode description of the responses that the speech recognition SIP makes to various inputs, particularly inputs received from its graphical user interface.

Figure 18 is a highly simplified pseudocode description of the recognition duration logic used to determine the length of time for which speech recognition is turned on in response to the pressing of one or more user interface buttons, either in the speech recognition SIP shown in figure 9 or in the cellphone embodiment shown starting at figure 59.

Figure 19 is a highly simplified pseudocode description of a help mode that enables a user to see a description of the function associated with each element of the speech recognition SIP of figure 9 merely by touching it.
Figures 20 and 21 are screen images produced by the help mode described in figure 19.
Figure 22 is a highly simplified pseudocode description of a displayChoiceList routine used in various forms by both the speech recognition SIP of figure 9 and the cellphone embodiment of figure 59 to display correction windows.

Figure 23 is a highly simplified pseudocode description of the getChoices routine used in various forms by both the speech recognition SIP and the cellphone embodiment to generate one or more choice lists for use by the displayChoiceList routine of figure 22.

Figures 24 and 25 illustrate the utterance list data structure used by the getChoices routine of figure 23.

Figure 26 is a highly simplified pseudocode description of a filterMatch routine used by the getChoices routine to limit correction window choices to match filtering input, if any, entered by a user.

Figure 27 is a highly simplified pseudocode description of a wordFormList routine used in various forms by both the speech recognition SIP and the cellphone embodiment to generate a word form correction list that displays alternate forms of a given word or selection.

Figures 28 and 29 provide a highly simplified pseudocode description of a filterEdit routine used in various forms by both the speech recognition SIP and cellphone embodiment to edit a filter string used by the filterMatch routine of figure 26 in response to alphabetic filtering information input from a user.

Figure 30 provides a highly simplified pseudocode description of a filterCharacterChoice routine used in various forms by both the speech recognition SIP and cellphone embodiment to display choice lists for individual characters of a filter string.
Figures 31 through 35 illustrate a sequence of interactions between a user and the speech recognition SIP, in which the user enters and corrects the recognition of words using a one-at-a-time discrete speech recognition method.

Figure 36 shows how a user of the SIP can correct a mis-recognition shown at the end of figure 35 by scrolling through the choice list provided in the correction window until finding a desired word and then using a capitalization button to capitalize it before entering it into text.

Figure 37 shows how a user of the SIP can correct such a mis-recognition by selecting part of an alternate choice in the correction window and using it as a filter for selecting the desired speech recognition output.

Figure 38 shows how a user of the SIP can select two successive alphabetically ordered alternate choices in the correction window to cause the speech recognizer's output to be limited to output starting with a sequence of characters located between the two selected choices in the alphabet.

Figure 39 illustrates how a user of the SIP can use the speech recognition of letter names to input filtering characters and how a filter character choice list can be used to correct errors in the recognition of such filtering characters.

Figure 40 illustrates how a user of the SIP recognizer can enter one or more characters of a filter string using the international communication alphabet and how the SIP interface can show the user the words of that alphabet.
Figure 41 shows how a user can select an initial sequence of characters from an alternate choice in the correction window and then use the international communication alphabet to add characters to that sequence so as to complete the spelling of a desired output.

Figures 42 through 44 illustrate a sequence of user interactions in which the user enters and edits text in the SIP using continuous speech recognition.

Figure 45 illustrates how the user can correct a mis-recognition by spelling all or part of the desired output using continuous letter name recognition as an ambiguous (or multivalued) filter, and how the user can use filter character choice lists to rapidly correct errors produced in such continuous letter name recognition.

Figure 46 illustrates how the speech recognition SIP also enables a user to input characters by drawn character recognition.

Figure 47 is a highly simplified pseudocode description of a character recognition mode used by the SIP when performing drawn character recognition of the type shown in figure 46.

Figure 48 illustrates how the speech recognition SIP lets a user input text using handwriting recognition.

Figure 49 is a highly simplified pseudocode description of the handwriting recognition mode used by the SIP when performing handwriting recognition of the type shown in figure 48.
Figure 50 illustrates how the speech recognition system enables a user to input text with a software keyboard.

Figure 51 illustrates a filter entry mode menu that can be selected to choose from different methods of entering filtering information, including speech recognition, character recognition, handwriting recognition, and software keyboard input.

Figures 52 through 54 illustrate how either character recognition, handwriting recognition, or software keyboard input can be used to filter speech recognition choices produced in the SIP's correction window.

Figures 55 and 56 illustrate how the SIP allows speech recognition of words or filtering characters to be used to correct handwriting recognition input.

Figure 58 is a highly simplified description of an alternate embodiment of the displayChoiceList routine of figure 22 in which the choice list produced orders choices only by recognition score, rather than by alphabetical ordering as in figure 22.

Figure 59 illustrates a cellphone that embodies many aspects of the present invention.

Figure 60 provides a highly simplified block diagram of the major components of a typical cellphone such as that shown in figure 59.
Figure 61 is a highly simplified block diagram of various programming and data structures contained in one or more mass storage devices on the cellphone of figure 59.

Figure 62 illustrates that the cellphone of figure 59 allows traditional phone dialing by the pressing of numbered phone keys.

Figure 63 is a highly simplified pseudocode description of the command structure of the cellphone of figure 59 when in its top-level phone mode, as illustrated by the screen shown at the top of figure 62.

Figure 64 illustrates how a user of the cellphone of figure 59 can access and quickly view the commands of a main menu by pressing the menu key on the cellphone.

Figures 65 and 66 provide a highly simplified pseudocode description of the operation of the main menu illustrated in figure 64.

Figures 67 through 74 illustrate command mappings of the cellphone's numbered keys in each of various important modes and menus associated with a speech recognition text editor that operates on the cellphone of figure 59.

Figure 75 illustrates how a user of the cellphone's text editing software can rapidly see the function associated with one or more keys in a non-menu mode by pressing the menu button and scrolling through a command list that can be used substantially in the same manner as a menu of the type shown in figure 64.

Figures 76 through 78 provide a highly simplified pseudocode description of the responses of the cellphone's speech recognition program when in its text window, or editor, mode.
Figures 79 and 80 provide a highly simplified pseudocode description of an entry mode menu, which can be accessed from various speech recognition modes, to select among various ways to enter text.

Figures 81 through 83 provide a highly simplified pseudocode description of the correctionWindow routine used by the cellphone to display a correction window and to respond to user input when such a correction window is shown.

Figure 84 is a highly simplified pseudocode description of an edit navigation menu that allows a user to select various ways of navigating with the cellphone's navigation keys when the edit mode's text window is displayed.

Figure 85 is a highly simplified pseudocode description of a correction window navigation menu that allows the user to select various ways of navigating with the cellphone's navigation keys when in a correction window, and also to select from among different ways the correction window can respond to the selection of an alternate choice in a correction window.
Figures 86 through 88 provide highly simplified pseudocode descriptions of three slightly different embodiments of the key Alpha mode, which enables a user to enter a letter by saying a word starting with that letter and which responds to the pressing of a phone key by substantially limiting such recognition to words starting with one of the three or four letters associated with the pressed key.

Figures 89 and 90 provide a highly simplified pseudocode description of some of the options available under the edit options menu that is accessible from many of the modes of the cellphone's speech recognition programming.

Figures 91 and 92 provide a highly simplified description of a word type menu that can be used to limit recognition choices to a particular type of word, such as a particular grammatical type of word.

Figure 93 provides a highly simplified pseudocode description of an entry preference menu that can be used to set default recognition settings for various speech recognition functions, or to set recognition duration settings.

Figure 94 provides a highly simplified pseudocode description of the text-to-speech playback operation available on the cellphone.

Figure 95 provides a highly simplified pseudocode description of how the cellphone's text-to-speech generation uses programming and data structures also used by the cellphone's speech recognition.

Figure 96 is a highly simplified pseudocode description of the cellphone's transcription mode, which makes it easier for a user to transcribe audio recorded on the cellphone using the device's speech recognition capabilities.
Figure 97 is a highly simplified pseudocode description of programming that enables the cellphone's speech recognition editor to be used to enter and edit text in dialog boxes presented on the cellphone, as well as to change the state of controls such as list boxes, check boxes, and radio buttons in such dialog boxes.

Figure 98 is a highly simplified pseudocode description of a help routine available on the cellphone to enable a user to rapidly find descriptions of various locations in the cellphone's command structure.

Figures 99 and 100 illustrate examples of help menus of the type displayed by the programming of figure 98.

Figures 101 and 102 illustrate how a user can use the help programming of figure 98 to rapidly search for, and receive descriptions of, the functions associated with various portions of the cellphone's command structure.
Figures 103 and 104 illustrate a sequence of interactions between a user and the cellphone's speech recognition editor's user interface in which the user enters and corrects text using continuous speech recognition.

Figure 105 illustrates how a user can scroll horizontally in a correction window displayed on the cellphone.

Figure 107 illustrates operation of the key Alpha mode shown in figure 86.

Figures 108 and 109 illustrate how the cellphone's speech recognition editor allows the user to address, enter, and edit text in an e-mail message that can be sent by the cellphone's wireless communication capabilities.

Figure 110 illustrates how the cellphone's speech recognition can combine scores from the discrete recognition of one or more words with scores from a prior continuous recognition of those words to help produce the desired output.

Figure 111 illustrates how the cellphone's speech recognition software can be used to enter a URL for the purposes of accessing a World Wide Web site using the wireless communication capabilities of the cellphone.

Figures 112 and 113 illustrate how elements of the cellphone's speech recognition user interface can be used to navigate World Wide Web pages and to select items and enter and edit text in the fields of such web pages.

Figure 114 illustrates how elements of the cellphone's speech recognition user interface can be used to enable a user to more easily read text strings too large to be seen at one time in a text field displayed on the cellphone's screen, such as a text field of a web page or dialog box.

Figure 115 illustrates the cellphone's find dialog box, how a user can enter a search string into that dialog box by speech recognition, how the find function then performs a search for the entered string, and how the found text can be used to label audio recorded on the cellphone.

Figure 116 illustrates how the dialog box editor programming shown in figure 97 enables speech recognition to be used to select from among the possible values associated with a list box.
Figure 117 illustrates how speech recognition can be used to dial people by name, and how the audio playback and recording capabilities of the cellphone can be used during such a cellphone call.

Figure 118 illustrates how speech recognition can be turned on and off when the cellphone is recording audio, to insert text labels or text comments into recorded audio.

Figure 119 illustrates how the cellphone enables a user to have speech recognition performed on portions of previously recorded audio.

Figure 120 illustrates how the cellphone enables a user to strip the text recognized for a given segment of sound from the audio recording of that sound.

Figure 121 illustrates how the cellphone enables the user to turn on or off an indication of which portions of a selected segment of text have associated audio recordings.

Figures 122 through 125 illustrate how the cellphone's speech recognition software allows the user to enter telephone numbers by speech recognition and to correct the recognition of such numbers when wrong.

Figure 126 is provided to illustrate how many aspects of the cellphone embodiment shown in figures 59 through 125 can be used in an automotive environment, including the TTS and duration logic aspects of the cellphone embodiment.

Figures 127 and 128 illustrate that most of the aspects of the cellphone embodiment shown in figures 59 through 125 can be used either on cordless phones or landline phones.

Figure 129 provides a highly simplified pseudocode description of the name dialing programming of the cellphone embodiment, which is partially illustrated in FIG. 117.

Figure 130 provides a highly simplified pseudocode description of the cellphone's digit dialing programming illustrated in figures 122 through 125.
DETAILED DESCRIPTION OF SOME PREFERRED EMBODIMENTS

FIG. 9 illustrates the personal digital assistant, or PDA, 900 on which many aspects of the present invention can be used. The PDA shown is similar to those currently being sold as the Compaq iPAQ H3650 Pocket PC, the Casio Cassiopeia, and the Hewlett-Packard Jornada 525.
The PDA 900 includes a relatively high resolution touch screen 902, which enables the user to select software buttons as well as portions of text by means of touching the touch screen, such as with a stylus 904 or a finger. The PDA also includes a set of input buttons 906 and a two-dimensional navigational control 908.
In this specification and the claims that follow, a navigational input device that allows a user to select discrete units of motion in one or more dimensions will often be considered to be included in the definition of a button. This is particularly true with regard to telephone interfaces, in which the up, down, left, and right inputs of a navigational device will be considered phone keys or phone buttons.
FIG. 10 provides a schematic system diagram of the PDA 900. It shows the touch screen 902 and input buttons 906 (which include the navigational input 908). It also shows that the device has a central processing unit such as a microprocessor 1002. The CPU 1002 is connected over one or more electronic communication buses 1004 with read-only memory 1006 (often flash ROM); random access memory 1008; one or more I/O devices 1010; a video controller 1012 for controlling displays on the touch screen 902; and an audio device 1014 for receiving input from a microphone 1015 and supplying audio output to a speaker 1016. The PDA also includes a battery 1018 for providing it with portable power; a headphone-in and headphone-out jack 1020, which is connected to the audio circuitry 1014; a docking connector 1022 for providing a connection between the PDA and another computer, such as a desktop; and an add-on connector 1024 for enabling a user to add circuitry to the PDA, such as additional flash ROM, a modem, a wireless transceiver 1025, or a mass storage device.
FIG. 10 also shows a mass storage device 1017. In actuality, this mass storage device could be any type of mass storage device, including all or part of the flash ROM 1006 or a miniature hard disk. In such a mass storage device the PDA would normally store an operating system 1026 for providing much of the basic functionality of the device. Commonly it would include one or more application programs 1028, such as a word processor, a spreadsheet, a Web browser, or a personal information management system, in addition to the operating system. When the PDA 900 is used with the present invention, it will normally also include speech recognition programming 1030.
It includes programming for performing word matching of the general type described above with regard to FIGS. 1 and 2.
The speech recognition programming will also normally include one or more vocabularies or vocabulary groupings 1032, including a large vocabulary that includes at least two thousand words. Many large vocabulary systems have a vocabulary of fifty thousand to several hundred thousand words. For each vocabulary word, the vocabulary will normally have a text spelling 1034 and one or more vocabulary groupings 1036 to which the word belongs (for example, the text output "." might actually be in a large-vocabulary recognition vocabulary, a spelling vocabulary, and a punctuation vocabulary grouping in some systems). Each vocabulary word will also normally have a phonetic spelling 1038 and an indication of the parts of speech in which the word can occur.
The speech recognition programming commonly includes a pronunciation guesser 1042 for guessing the pronunciation of new words that are added to the system and, thus, do not have a predefined phonetic spelling. The speech recognition programming also commonly includes one or more phonetic lexical trees 1044. A phonetic lexical tree is a tree-shaped data structure that groups together in a common path from the tree's root all phonetic spellings that start with the same sequence of phonemes. Using such lexical trees improves recognition performance because it enables all portions of different words that share the same initial phonetic spelling to be scored together.
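The prefix-sharing just described can be sketched as a simple trie. This is an illustrative sketch only, not the patent's actual data structure; the class names, phoneme symbols, and example words are all assumptions made for illustration.

```python
class LexicalTreeNode:
    def __init__(self):
        self.children = {}   # phoneme -> child node
        self.words = []      # words whose phonetic spelling ends at this node

def add_word(root, word, phonetic_spelling):
    """Insert a word along the path defined by its phoneme sequence."""
    node = root
    for phoneme in phonetic_spelling:
        node = node.children.setdefault(phoneme, LexicalTreeNode())
    node.words.append(word)

root = LexicalTreeNode()
add_word(root, "cat", ["k", "ae", "t"])
add_word(root, "cap", ["k", "ae", "p"])
add_word(root, "dog", ["d", "ao", "g"])

# "cat" and "cap" share the path k -> ae, so that shared prefix need
# only be scored once against the acoustic input.
print(sorted(root.children["k"].children["ae"].children))  # ['p', 't']
```

Because the shared path exists only once in the tree, acoustic scoring of the common word beginning is performed once for all words that start with it.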
Preferably the speech recognition programming will also include a PolyGram language model 1045 that indicates the probability of the occurrence of different words in text, including the probability of words occurring in text given one or more preceding and/or following words.
Commonly the speech recognition programming will store language model update data 1046, which includes information that can be used to update the PolyGram language model 1045 just described. Commonly this language model update data will either include or contain statistical information derived from text that the user has created or that the user has indicated is similar to the text that he or she is interested in generating. In FIG. 10 the speech recognition programming is shown storing contact information 1048, which includes names, addresses, phone numbers, e-mail addresses, and phonetic spellings for some or all of such information. This data is used to help the speech recognition programming recognize the speaking of such contact information. In many embodiments such contact information will be included in an external program, such as one of the application programs 1028 or accessories to the operating system 1026, but, even in such cases, the speech recognition programming would normally need access to such names, addresses, phone numbers, and e-mail addresses.
FIG. 11 shows the PDA using a software input panel, or SIP, 1100 embodying many aspects of the present invention.
FIG. 12 is similar to FIG. 11 except it shows the touch screen 902 when the speech recognition SIP is displaying a correction window 1200.
FIGS. 13 through 17 represent successive pages of a pseudocode description of how the speech recognition SIP responds to various inputs on its graphical user interface. For purposes of simplicity this pseudocode is represented as one main event loop 1300 in the SIP program which responds to user input. In FIGS. 13 through 17 this event loop is described as having two major switch statements: a switch statement 1301 in FIG. 13 that responds to inputs on the user interface that can be generated whether or not the correction window 1200 is displayed, and a switch statement 1542 in FIG. 15 that responds to user inputs that can only be generated when the correction window 1200 is displayed.
If the user presses the Talk button 1102 shown in FIG. 11, function 1302 of FIG. 13 causes functions 1304 through 1308 to be performed. Function 1304 tests to see if there is any text in the SIP buffer shown by the window 1104 in FIG. 11. In the SIP embodiment shown in the FIGS., the SIP buffer is designed to hold a relatively small number of lines of text, for which the SIP's software will keep track of the acoustic input and best choices associated with the recognition of each word, and the linguistic context created by such text. Such a text buffer is used because the speech recognition SIP often will not have knowledge about the text in the remote application shown in the window 1106 in FIG. 11 into which the SIP outputs text at the location of the current cursor 1108 in the application. In other embodiments of the invention a much larger SIP buffer could be used. In other embodiments many of the aspects of the present invention will be used as part of an independent speech recognition text creation application that will not require the use of a SIP for the inputting of text. The major advantage of using a speech recognizer that functions as a SIP is that it can be used to provide input for almost any application designed to run on a PDA.
Returning to FIG. 13, function 1304 clears any text from the SIP buffer 1104 because the Talk button 1102 is provided as a way for the user to indicate to the SIP that he is dictating text in a new context. Thus, if the user of the
SIP has moved the cursor 1108 in the application window 1106 of FIG. 11, he should start the next dictation by pressing the Talk button 1102.
Function 1306 in FIG. 13 responds to the pressing of the Talk button by testing to see if the speech recognition system is currently in correction mode and, if so, exiting that mode, removing any correction window 1200, of the type shown in FIG. 12, that might be shown.
The SIP shown in the FIGS. is not in correction mode when a correction window is displayed but has not been selected to receive inputs from most buttons of the main SIP interface, and is in correction mode when the correction window is displayed and has been selected to receive inputs from many of such buttons. This distinction is desirable because the particular SIP shown can be selected to operate in a one-at-a-time mode in which words are spoken and recognized discretely, and in which a correction window is displayed for each word as it is recognized, to enable a user to more quickly see the choice list or provide correction input. In one-at-a-time mode most forms of user input not specifically related to making corrections are used to perform the additional function of confirming the first choice displayed in the current choice list as the desired word. When the system is not in one-at-a-time mode, the correction window is usually displayed only when the user has provided input indicating a desire to correct previous input. In such cases the correction window is displayed in correction mode, because it is assumed that, since the user has chosen to make a correction, most forms of input should be directed to the correction window. It should be appreciated that in systems that only use one-at-a-time recognition, or those that do not use it at all, there would be no need to have the added complication of switching into and out of correction mode.
Returning to function 1306, it removes any current correction window because the pressing of the Talk button 1102 indicates a desire to start new dictation, rather than an interest in correcting old dictation.
Function 1308 of FIG. 13 responds to the pressing of the Talk button by causing SIP buffer recognition to start according to a previously selected current recognition duration mode. This recognition takes place without any prior language context for the first word. Preferably language model context will be derived from words recognized in response to one pressing of the Talk button and used to provide a language context for the recognition of the second and subsequent words in such recognition.
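The idea of deriving a language context from words already recognized after one press of the Talk button might be sketched as follows. The bigram table and all of its probabilities are invented purely for illustration; real systems would use the PolyGram language model 1045 described earlier.

```python
# Hypothetical bigram probabilities: (previous word, candidate) -> P.
# None as the previous word represents the no-context start of dictation.
bigram = {
    (None, "hello"): 0.4, (None, "yellow"): 0.1,
    ("hello", "world"): 0.5, ("hello", "word"): 0.2,
}

def language_score(prev_word, candidate):
    """Probability of a candidate word given the previously recognized word."""
    return bigram.get((prev_word, candidate), 0.01)  # small floor for unseen pairs

# First word after the Talk button: no prior context.
print(language_score(None, "hello"))      # 0.4
# Later words: context comes from the word just recognized.
print(language_score("hello", "world"))   # 0.5
```

The first word is scored without context, while each subsequent word is biased by the words recognized before it within the same dictation.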
FIG. 18 is a schematic representation of the recognition duration programming 1800 that enables a user to select different modes of activating speech recognition in response to the pressing or clicking of any button in the SIP interface that can be used to start speech recognition. In the embodiment shown there is a plurality of buttons, including the Talk button, each of which can be used to start speech recognition. This enables a user both to select a given mode of recognition and to start recognition in that mode with a single pressing of a button.
Function 1802 helps determine which functions of FIG. 18 are performed, depending on the current recognition duration mode. The mode can have been set in multiple different ways, including by default and by selection under the Entry Preference option in the function menu shown in
FIG. 46.
If the Press Only recognition duration type has been selected, function 1804 will cause functions 1806 and 1808 to recognize speech sounds that are uttered during the pressing of a speech button. This recognition duration type is both simple and flexible because it enables a user to control the length of recognition by one simple rule: recognition occurs during, and only during, the pressing of a speech button. Preferably utterance and/or end-of-utterance detection is used during any recognition mode, to decrease the likelihood that background noises will be recognized as utterances.
If the current recognition duration type is the Press And Click To Utterance End type, function 1810 will cause functions 1812 and 1814 to respond to the pressing of a speech button by recognizing speech during that press. In this case the "pressing" of a speech button is defined as a press lasting longer than a given duration, and a "click" as a shorter press; if the speech button is clicked, recognition lasts from the time of that click until the next end of utterance detection.
The Press And Click To Utterance End recognition duration type has the benefit of enabling the use of one button to rapidly and easily select between a mode that allows a user to select a variable length extended recognition, and a mode that recognizes only a single utterance. If the current recognition duration type is the Press Continuous, Click Discrete To Utterance End type, function 1820 causes functions 1822 through 1828 to be performed. If the speech button is clicked, as just defined, functions 1822 and 1824 perform discrete recognition until the next end of utterance. If, on the other hand, the speech button is pressed, as previously defined, functions 1826 and 1828 perform continuous recognition as long as the speech button remains pressed.
This recognition duration type has the benefit of making it easy for users to quickly switch between continuous and discrete recognition merely by using different types of presses on a given speech button. In the SIP embodiment shown, the other recognition duration types do not switch between continuous and discrete recognition.
If the current recognition duration type is the Click To Timeout type, function 1830 causes functions 1832 to 1840 to be performed. If the speech button is clicked, functions 1833 through 1836 normally toggle recognition between off and on. Function 1834 responds to a click by testing to see whether or not speech recognition is currently on. If so, and if the speech button being clicked is other than one that changes vocabulary, it responds to the click by turning off speech recognition. On the other hand, if speech recognition is off when the speech button is clicked, function 1836 turns speech recognition on until a timeout duration has elapsed. The length of this timeout duration can be set by the user under the Entry Preferences option in the function menu 4602 shown in FIG. 46. If the speech button is pressed for longer than a given duration, as described above, functions 1838 and 1840 will cause recognition to be on during the press but to be turned off at its end. This recognition duration type provides a quick and easy way for users to select with one button between toggling speech recognition on and off and causing speech recognition to be turned on only during an extended press of a speech button.
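The four recognition duration types described above can be summarized in a small dispatch function. This is a hedged simplification: the mode names, the click/press event model, and the returned descriptions are assumptions made for illustration, not the pseudocode of FIG. 18.

```python
def recognition_action(mode, is_click, recognition_on=False):
    """Return a description of what recognition does for this button input."""
    if mode == "press_only":
        return "recognize while button is pressed"
    if mode == "press_and_click_to_utterance_end":
        return ("recognize until end of utterance" if is_click
                else "recognize while button is pressed")
    if mode == "press_continuous_click_discrete":
        return ("discrete recognition until end of utterance" if is_click
                else "continuous recognition while pressed")
    if mode == "click_to_timeout":
        if is_click:
            # A click toggles recognition: off if on, on-until-timeout if off.
            return ("turn recognition off" if recognition_on
                    else "recognize until timeout")
        return "recognize during press, off at its end"
    raise ValueError("unknown recognition duration mode: " + mode)

print(recognition_action("press_continuous_click_discrete", is_click=True))
```

A single button press thus both selects a behavior and starts recognition, which is the convenience the text attributes to combining duration type selection with speech buttons.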
Returning to function 1308 of FIG. 13, it can be seen that the selection of different recognition duration types can allow the user to select how the Talk button and other speech buttons initiate recognition.
If the user selects the Clear button 1112 shown in FIG. 11, functions 1309 through 1314 remove any correction window that might be displayed and clear the contents of the SIP buffer without sending any deletions to the operating system text input. As stated above, in the speech SIP shown, the SIP text window 1104, shown in FIG. 11, is designed to hold a relatively small body of text. As text is entered or edited in the SIP buffer, characters are supplied to the operating system of the PDA, causing corresponding changes to be made to text in the application window 1106 shown in FIG. 11. The Clear button enables a user to clear text from the SIP buffer, to prevent it from being overloaded, without causing corresponding deletions to be made to text in the application window.
The Continue button 1114 shown in FIG. 11 is intended to be used when the user wants to dictate a continuation of the last dictated text, or text that is to be inserted at the current location in the SIP buffer window 1104, shown in FIG. 11. When this button is pressed, function 1316 causes functions 1318 through 1330 to be performed. Function 1318 removes any correction window, because the pressing of the Continue button indicates that the user has no interest in using the correction window. Next, function 1320 tests if the current cursor in the SIP buffer window has a prior language context that can be used to help in predicting the probability of the first word or words of any utterance recognized as a result of the pressing of the Continue button. If so, it causes that language context to be used. If not, and if there is currently no text in the SIP buffer, function 1326 uses the last one or more words previously entered in the SIP buffer as the language context at the start of recognition initiated by the Continue button. Next function 1330 starts SIP buffer recognition, that is, recognition of text to be output to the cursor in the SIP buffer, using the current recognition duration mode.
If the user selects the Backspace button 1116 shown in FIG. 11, functions 1332 through 1336 will be performed. Function 1334 tests if the SIP is currently in the correction mode. If so, it enters the backspace into the filter editor of the correction window. The correction window 1200 shown in FIG. 12 includes a first choice window 1202. As will be described below in greater detail, the correction window interface allows the user to select and edit one or more characters in the first choice window as being part of a filter string, which identifies a sequence of initial characters belonging to the desired recognition word or words. If the SIP is in the correction mode, pressing backspace will delete from the filter string any characters currently selected in the first choice window, and if no characters are so selected, will delete the character to the left of the filter cursor 1204.
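The correction-mode behavior of the Backspace button just described, deleting any selected filter characters and otherwise the character to the left of the filter cursor, might be sketched as follows. The function and parameter names are hypothetical, not taken from the patent's pseudocode.

```python
def backspace_filter(filter_chars, cursor, selection=None):
    """Apply one backspace to the filter string; returns (new_string, new_cursor).

    selection, if given, is a (start, end) span of selected characters.
    """
    if selection:                       # delete the selected span
        start, end = selection
        return filter_chars[:start] + filter_chars[end:], start
    if cursor > 0:                      # delete the character left of the cursor
        return filter_chars[:cursor - 1] + filter_chars[cursor:], cursor - 1
    return filter_chars, cursor         # nothing to delete

print(backspace_filter("abc", 2))  # ('ac', 1)
```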
If the SIP is not currently in the correction mode, function 1336 will respond to the pressing of the Backspace button by entering a backspace character into the SIP buffer and outputting that same character to the operating system so that the same change can be made to the corresponding text in the application window 1106 shown in FIG. 11.
If the user selects the New Paragraph button 1118 shown in FIG. 11, functions 1338 through 1342 of FIG. 13 will exit correction mode, if the SIP is currently in it, and they will enter a new paragraph character into the SIP buffer and provide corresponding output to the operating system.
As indicated by functions 1344 through 1348, the SIP responds to user selection of a Space button 1120 in substantially the same manner that it responds to a backspace, that is, by entering it into the filter editor if the SIP is in correction mode, and otherwise outputting it to the SIP buffer and the operating system.
If the user selects one of the Vocabulary Selection buttons 1122 through 1132 shown in FIG. 11, functions 1350 through 1370 of FIG. 13, and functions 1402 through 1416 of FIG. 14, will set the appropriate recognition mode's vocabulary to the vocabulary corresponding to the selected button and start speech recognition in that mode according to the current recognition duration mode and other settings for the recognition mode.
If the user selects the Name Recognition button 1122, functions 1350 and 1356 set the current mode's recognition vocabulary to the name recognition vocabulary and start recognition according to the current recognition duration settings and other appropriate speech settings. With all of the vocabulary buttons besides the Name and Large Vocabulary buttons, these functions will treat the current recognition mode as either filter or SIP buffer recognition, depending on whether the SIP is in correction mode. This is because these other vocabulary buttons are associated with vocabularies used for inputting sequences of characters that are appropriate for defining a filter string or for direct entry into the SIP buffer. The large vocabulary and the name vocabulary, however, are considered inappropriate for filter string editing and, thus, in the disclosed embodiment the current recognition mode is considered to be either re-utterance or SIP buffer recognition, depending on whether the SIP is in correction mode. In other embodiments, name and large vocabulary recognition could be used for editing a multiword filter.
In addition to the standard response associated with the pressing of a vocabulary button, if the AlphaBravo Vocabulary button is pressed, functions 1404 through 1406 cause a list of all the words used by the International Communication Alphabet (or ICA) to be displayed, as is illustrated at numeral 4002 in FIG. 40.
If the user selects the Continuous/Discrete Recognition button 1134 shown in FIG. 11, functions 1418 through 1422 of FIG. 14 are performed. These toggle between a continuous recognition mode, which uses continuous speech acoustic models and allows multiword recognition candidates to match a given single utterance, and a discrete recognition mode, which uses discrete recognition acoustic models and allows only single word recognition candidates to be recognized for a single utterance. The function also starts speech recognition using either discrete or continuous recognition, as has just been selected by the pressing of the Continuous/Discrete button.
If the user selects the function key 1110 by pressing it, functions 1424 and 1426 call the function menu 4602 shown in FIG. 46. This function menu allows the user to select from other options besides those available directly from the buttons shown in FIGS. 11 and 12.
If the user selects the Help button 1136 shown in FIG. 11, functions 1432 and 1434 of FIG. 14 call help mode.
As shown in FIG. 19, when the help mode is entered in response to an initial pressing of the Help button, a function 1902 displays a help window 2000 providing information about using the help mode, as illustrated in
FIG. 20. During subsequent operation of the help mode, if the user touches a portion of the SIP interface, functions 1904 and 1906 display a help window with information about the touched portion of the interface that continues to be displayed as long as the user continues that touch. This is illustrated in FIG. 21, in which the user has used the stylus 904 to press the Filter button 1218 of the correction window. In response, a help window 2100 is shown that explains the function of the Filter button. If during the help mode a user double-clicks on a portion of the display, functions 1908 and 1910 display such a help window that stays up until the user presses another portion of the interface. This enables the user to use the scroll bar 2102 shown in the help window of FIG. 21 to scroll through and read help information too large to fit in the help window 2100 at one time.
Although not shown in FIG. 19, in preferred embodiments the help windows can also have a Keep Up button 2100, to which a user can drag from an initial down press on a portion of the SIP user interface of interest, so as to select to keep the help window up until the touching of another portion of the SIP user interface.
When, after the initial entry of the help mode, the user again touches the Help button 1136 shown in FIGS. 11, 20, and 21, functions 1912 and 1914 remove any help windows and exit the help mode, turning off the highlighting of the Help button.
recognition of the tapped word, if any, the first entry in an utterance list, which holds acoustic data associated with the current selection.
As shown in FIG. 22, the displayChoiceList routine is called with the following parameters: a selection parameter; a filter string parameter; a filter range parameter; a word type parameter; and a NotChoiceList flag. The selection parameter indicates the text in the SIP buffer for which the routine has been called. The filter string indicates a sequence of one or more characters or character indicating elements that define the set of one or more possible spellings with which the desired recognition output begins. The filter range parameter defines two character sequences that bound a section of the alphabet in which the desired recognition output falls. The word type parameter indicates that the desired recognition output is of a certain type, such as a desired grammatical type. The NotChoiceList flag indicates a list of one or more words that the user's actions indicate are not a desired word.
Function 2202 of the displayChoiceList routine calls a getChoices routine, shown in FIG. 23, with the filter string and filter range parameters with which the displayChoiceList routine has been called and with an utterance list associated with the selection parameter.
As shown in FIGS. 24 and 25, the utterance list 2404 stores sound representations of one or more utterances that have been spoken as part of the desired sequence of one or more words associated with the current selection. As previously stated, when function 2202 of FIG. 22 calls the getChoices routine, it places in the utterance list a representation 2400, shown in FIG. 24, of that portion of the sound 2402 from which the words of the current selection have been recognized. As was indicated in FIG. 2, the process of speech recognition time-aligns acoustic models against representations of an audio signal, and the recognition system preferably stores these time alignments so that when corrections or playback of selected text are desired it can find the corresponding audio representations from such time alignments.
In FIG. 24 the first entry 2400 in the utterance list is part of a continuous utterance 2402. The present invention enables a user to add additional utterances of a desired sequence of one or more words to a selection's utterance list, and recognition can be performed on all these utterances together to increase the chance of correctly recognizing a desired output. As shown in FIG. 24, such additional utterances can include both discrete utterances, such as entry 2400A, and continuous utterances, such as entry 2400B. Each additional utterance contains information, as indicated by the numerals 2406 and 2408, that indicates whether it is a continuous or discrete utterance and the vocabulary mode in which it was dictated.
In FIGS. 24 and 25, the acoustic representations of utterances in the utterance list are shown as waveforms. It should be appreciated that in many embodiments other forms of acoustic representation will be used, including parameter frame representations such as the representation 110 shown in FIGS. 1 and 2.
FIG. 25 is similar to FIG. 24, except that in it the original utterance list entry is a sequence of discrete utterances. It shows that additional utterance entries used to help correct the recognition of an initial sequence of one or more discrete utterances can also include either discrete or continuous utterances, 2500A and 2500B, respectively.
As shown in FIG. 23, the getChoices routine 2300 includes a function 2302 which tests to see if there has been a prior recognition for the selection for which this routine has been called that has been performed with the current utterance list and filter values (that is, filter string and filter range values). If so, it causes function 2304 to return with the choices from that prior recognition, since there have been no changes in the recognition parameters since the time the prior recognition was made.
If the test of function 2302 is not met, function 2306 tests to see if the filter range parameter is null. If it is not null, function 2308 tests to see if the filter range is more specific than the current filter string, and, if so, it changes the filter string to the common letters of the filter range. If not, function 2312 nulls the filter range, since the filter string contains more detailed information than it does.
As will be explained below, a filter range is selected when a user selects two choices on a choice list as an indication that the desired recognition output falls between them in the alphabet. When the user selects two choices that share initial letters, function 2310 causes the filter string to correspond to those shared letters. This is done so that when the choice list is displayed the shared letters will be indicated to the user as ones that have been confirmed as corresponding to the initial characters of the desired output.
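The reduction of a filter range to the letters its two bounding choices share, as performed by functions 2308 and 2310, can be illustrated with a common-prefix computation. The example words are invented; `os.path.commonprefix` is used here simply because it compares strings character by character.

```python
import os

def shared_prefix(range_start, range_end):
    """Letters shared by both bounds of a filter range; these are the
    characters confirmed as beginning the desired recognition output."""
    return os.path.commonprefix([range_start, range_end])

# If the user brackets the desired word between these two choices,
# "con" is confirmed as the start of the desired output.
print(shared_prefix("conference", "contact"))  # con
```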
It should be appreciated that when the user performs a command that selects either a new filter range or filter string, if the newly selected one of these two parameters has values that contradict values in the other, the value of the older of these two parameters will be nulled.
If there are any candidates from a prior recognition of the current utterance list, function 2316 causes functions 2318 and 2320 to be performed. Function 2318 calls a filterMatch routine shown in FIG. 26 for each such prior recognition candidate with the candidate's prior recognition score and the current filter definitions, and function 2320 deletes those candidates returned as a result of such calls that have scores below a certain threshold.
As indicated in FIG. 26, the filterMatch routine 2600 performs filtering upon word candidates. In the embodiment of the invention shown, this filtering process is extremely flexible, since it allows filters to be defined by filter string, filter range, or word type. It is also flexible because it allows a combination of word type and either filter string or filter range specifications, and because it allows ambiguous filtering, including ambiguous filters where elements in a filter string are not only ambiguous as to the value of their associated characters but also ambiguous as to the number of characters in their associated character sequences. When we say a filter string, or a portion of a filter string, is ambiguous, we mean that a plurality of possible character sequences can be considered to match it.
Ambiguous filtering is valuable when used with a filter string input which, although reliably recognized, does not uniquely define a single character, such as is the case with ambiguous phone key filtering of the type described below with regard to a cell phone embodiment of many aspects of the present invention.
Ambiguous filtering is also valuable with filter string input that cannot be recognized with a high degree of certainty, such as recognition of letter names, particularly if the recognition is performed continuously. In such cases, not only is there a high degree of likelihood that the best choice for the recognition of the sequence of characters will include one or more errors, but there is also a reasonable probability that the number of characters recognized in a best-scoring recognition candidate might differ from the number spoken. But spelling all or the initial characters of a desired output is a very rapid and intuitive way of inputting filtering information, even though the best choice from such recognition will often be incorrect, particularly when dictating under adverse conditions.
The filterMatch routine is called for each individual word candidate. It is called with that word candidate's prior recognition score, if any, or else with a score of 1. It returns a recognition score equal to the score with which it has been called multiplied by an indication of the probability that the candidate matches the current filter values.
Functions 2602 through 2606 of the filterMatch routine test to see if the word type parameter has been defined, and, if so and if the word candidate is not of the defined word type, it returns from the filterMatch function with a score of 0, indicating that the word candidate is clearly not compatible with the current filter values.
Functions 2608 through 2614 test to see if a current value is defined for the filter range. If so, and if the current word candidate is alphabetically between the starting and ending words of that filter range, they return with an unchanged score value. Otherwise they return with a score value of 0.
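A minimal sketch of the filter range test performed by functions 2608 through 2614 follows, assuming candidates and range bounds compare alphabetically as plain strings; the function name is illustrative.

```python
def range_score(candidate, score, filter_range):
    """Pass the score through unchanged if the candidate lies inside the
    filter range; return 0 (reject) otherwise."""
    if filter_range is None:
        return score
    start, end = filter_range
    return score if start <= candidate <= end else 0

print(range_score("middle", 0.9, ("apple", "zebra")))  # 0.9
```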
Function 2616 determines if there is a defined filter string. If so, it causes functions 2618 through 2653 to be performed. Function 2618 sets the current candidate character, a variable that will be used in the following loop, to the first character in the word candidate for which filterMatch has been called. Next, a loop 2620 is performed until the end of the filter string is reached by its iterations. This loop includes functions 2622 through 2651.
The first function in each iteration of this loop is the test by step 2622 to determine the nature of the next element in the filter string. In the embodiment shown, three types of filter string elements are allowed: an unambiguous character, an ambiguous character, and an ambiguous length element representing a set of ambiguous character sequences, which can be of different lengths.
An unambiguous character unambiguously identifies a letter of the alphabet or other character, such as a space. It can be produced by unambiguous recognition of any form of alphabetic input, but it is most commonly associated with letter or ICA word recognition, keyboard input, or non-ambiguous phone key input in phone implementations. Any recognition of alphabetic input can be treated as unambiguous merely by accepting a single best-scoring spelling output by the recognition as an unambiguous character sequence.
An ambiguous character is one that can have multiple letter values, but which has a definite length of one character. As stated above, this can be produced by the ambiguous pressing of keys in a telephone embodiment, or by speech or character recognition of letters. It can also be produced by continuous recognition of letter names in which all the best-scoring character sequences have the same character length.
An ambiguous length element is commonly associated with the output of continuous letter name recognition or handwriting recognition. It represents multiple best-scoring letter sequences against handwriting or spoken input, some of which sequences can have different lengths.
If the next element in the filter string is an unambiguous character, function 2624 causes functions 2626 through 2630 to be performed. Function 2626 tests to see if the current candidate character matches the current unambiguous character. If not, the call to filterMatch returns with a score of 0 for the current word candidate. If so, function 2630 increments the position of the current candidate character.
If the next element in the filter string is an ambiguous character, function 2632 causes functions 2634 through 2642 to be performed. Function 2634 tests to see if the current candidate character fails to match one of the recognized values of the ambiguous character. If so, function 2636 returns from the call to filterMatch with a score of 0. Otherwise, functions 2638 through 2642 alter the current word candidate's score as a function of the probability of the ambiguous character matching the current candidate character's value, and then increment the current candidate character's position.
If the next element in the filter string is an ambiguous length element, function 2644 causes a loop 2646 to be performed for each character sequence represented by the ambiguous length element. This loop comprises functions 2648 through 2652. Function 2648 tests to see if there is a sequence of characters starting at the current candidate's character position that matches the current character sequence of the loop 2646. If so, function 2649 alters the word candidate's score as a function of the probability of the matching sequence represented by the ambiguous length element, and then function 2650 increments the current position of the current candidate character by the number of characters in the matching ambiguous length element sequence. If there is no sequence of characters starting at the current word candidate's character position that matches any of the sequences of characters associated with the ambiguous length element, functions 2651 and 2652 return from the call to filterMatch with a score of 0.
If the loop 2620 is completed, the current word candidate will have matched against the entire filter string. In this case, function 2653 returns from filterMatch with the current word's score produced by the loop 2620.
If the test of step 2616 finds that there is no filter string defined, step 2654 merely returns from filterMatch with the current word candidate's score unchanged.
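The overall matching loop of functions 2616 through 2654 can be sketched as follows. The element encoding (a string for an unambiguous character, a dict of single letters for an ambiguous character, a dict of sequences for an ambiguous length element) and the multiplicative scoring are assumptions for illustration, not the patent's own code.

```python
# Illustrative sketch of the filterMatch loop (functions 2616-2654).
def filter_match(candidate, score, filter_string):
    """Return the candidate's score against the filter, or 0 on a mismatch."""
    if not filter_string:                      # step 2654: no filter defined
        return score
    pos = 0                                    # function 2618: candidate position
    for element in filter_string:              # loop 2620
        if isinstance(element, str):           # unambiguous character (2626-2630)
            if pos >= len(candidate) or candidate[pos] != element:
                return 0
            pos += 1
        elif all(len(k) == 1 for k in element):
            # ambiguous character of fixed length one (functions 2634-2642)
            if pos >= len(candidate) or candidate[pos] not in element:
                return 0
            score *= element[candidate[pos]]   # weight by letter probability
            pos += 1
        else:                                  # ambiguous length element (2646-2652)
            for seq, prob in element.items():
                if candidate.startswith(seq, pos):
                    score *= prob              # function 2649
                    pos += len(seq)            # function 2650
                    break
            else:
                return 0                       # functions 2651-2652
    return score                               # function 2653
```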
Returning now to function 2318 of FIG. 23, it can be seen that the call to filterMatch for each word candidate will return a score for the candidate. These are the scores that are used to determine which word candidates to delete in function 2320.
Once these deletions have taken place, function 2322 tests to see if the number of prior recognition candidates left after the deletions, if any, of function 2320 is below a desired number of candidates. Normally this desired number would represent a desired number of choices for use in a choice list that is to be created. If the number of prior recognition candidates is below such a desired number, functions 2324 through 2336 are performed. Function 2324
I performs speech recognition upon every one £>f he one or more entries in the utterance list 2400 p§§r*2øE_ shown in FIGS. 24 and 25. As indicated by functions 2326 and 2328^_ is—this recognition process includes a tests- to determine if there are both continuous and discrete entries in the utterance list, and^ if so± limitsieg the number of possible word candidates in recognition of the continuous entries to a number corresponding to the number of individual utterances—s- detected in one or more of the discrete entries. The recognition of function 2324 also includes recognizing each entry in the utterance list with either continuous arts or discrete recognition' depending upon the respective mode that was in fact effect when each was received, as indicated by the continuous or discrete recognition indication 2406 shown in FIGS. 24 and 25. As indicated by 2332 <^,, the recognition of each utterance list entry also includes using the filterMatch routine previously described and using a language model in selecting a list of best—scoring acceptable candidates for the recognition of each such utterance. In the filterMatch routine, the vocabulary indicator 2408 shown in FIGS . 24 and 25 for the most recent utterance in the utterance list is used as a word type filter to reflect any indication by the user that the desired word sequence is limited to one or more words
After the recognition of the one or more entries in the utterance list has been performed, if there is more than one entry in the utterance list, functions 2334 and 2336 pick a list of best-scoring recognition candidates for the utterance list based on a combination of scores from the different recognitions. It should be appreciated that in some embodiments of this aspect of the invention, a combination of scoring from the recognition of the different utterances could be used so as to improve the effectiveness of the recognition using more than one utterance.
If the number of recognition candidates produced by functions 2314 through 2336 is less than the desired number, and if there is a non-null filter string or filter range definition, functions 2338 and 2340 use filterMatch to select a desired number of additional choices from the vocabulary associated with the most recent entry in the utterance list, or from the current recognition vocabulary if there are no entries in the utterance list.
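The filling-in performed by functions 2338 and 2340 can be sketched as drawing additional vocabulary words that match the filter. This is a hedged sketch assuming an unambiguous filter prefix and alphabetical tie-breaking; the function name and parameters are illustrative.

```python
# Hedged sketch of functions 2338-2340: when recognition yields too few
# candidates, fill the choice list from the vocabulary with words that
# match the filter prefix. Names and ordering are assumptions.
def pick_additional_choices(candidates, vocabulary, filter_prefix, desired):
    """Top up `candidates` to `desired` entries with matching vocabulary words."""
    extra = [w for w in sorted(vocabulary)
             if w.startswith(filter_prefix) and w not in candidates]
    return candidates + extra[:desired - len(candidates)]

choices = pick_additional_choices(["their"],
                                  {"the", "theme", "thaw", "then"},
                                  "the", 3)
```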
If there are no candidates from either recognition or the current vocabulary by the time the getChoices routine of FIG. 23 reaches function 2342, function 2344 uses the best-scoring character sequences that match the current filter string as choices, up to the desired number.
Where the filter string contains ambiguous elements representing sequences of one or more characters, the choices produced by function 2344 will be scored correspondingly by a scoring mechanism corresponding to that shown in functions 2616 through 2653 of FIG. 26.
By the time the call to getChoices returns, a list of choices produced by recognition, by selection from a vocabulary according to a filter, or as a list of possible filters will normally be returned.
Returning now to FIG. 22, when the call to getChoices in function 2202 returns to the displayChoiceList routine, function 2204 tests to see if any filter has been defined for the current selection, if there has been any utterance added to the current selection's utterance list, and if the selection for which displayChoiceList has been called is not in the notChoiceList, which includes a list of one or more words that the user's inputs indicate are not desired as recognition candidates. If these conditions are met, function 2206 makes that selection the first choice for display in the correction window which the routine is to create. Next, function 2210 removes any other candidates from the list of candidates produced by the call to the getChoices routine that are contained in the notChoiceList. Next, if the first choice has not already
been selected by function 2206, function 2212 makes the best-scoring candidate returned by the call to getChoices the first choice for the subsequent correction window display. If there is no single best-scoring recognition candidate, alphabetical order can be used to select that one of the candidates which is to be the first choice.
Next, function 2218 selects those characters of the first choice which correspond to the filter string, if any, for special display. As will be described below, in the preferred embodiments, characters in the first choice which correspond to an unambiguous filter are indicated in one way, and characters in the first choice which correspond to an ambiguous filter are indicated in a different way, so that the user can appreciate which portions of the filter string correspond to which type of filter elements. Next, function 2220 places a filter cursor before the first character of the first choice that does not correspond to the filter string. When there is no filter string defined, this cursor will be placed before the first character of the first choice.
Next, function 2222 causes steps 2224 through 2228 to be performed if the getChoices routine returned any candidates other than the current first choice. In this case, function 2224 creates a first-character-ordered
choice list, and functions 2226 and 2228 create a second-character-ordered choice list of up to a preset number of screens for all such choices from the remaining best-scoring candidates.
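One plausible reading of functions 2224 through 2228 is that the remaining candidates are alphabetized and split into a limited number of display screens. The sketch below makes that assumption explicit; the screen size and screen limit are invented illustrative values.

```python
# Illustrative sketch of functions 2224-2228: alphabetize the candidates
# other than the first choice and split them into display screens, up to a
# preset number of screens. per_screen and max_screens are assumed values.
def build_choice_lists(candidates, first_choice, per_screen=6, max_screens=3):
    others = sorted(c for c in candidates if c != first_choice)
    screens = [others[i:i + per_screen]
               for i in range(0, len(others), per_screen)]
    return screens[:max_screens]
```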
When all this has been done, function 2230 displays a correction window showing the current first choice, an indication of which characters of the first choice correspond to any filter defined, and the choice list.
It should be appreciated that the displayChoiceList routine can be called with a null value for the current selection as well as for a text selection which has no associated utterances. In this case, it will respond to alphabetic input by performing word completion based on the operation of functions 2338 and 2340. It allows a user to select choices for the recognition of an utterance without the use of filtering or re-utterances, to use filtering and/or re-entering of a subsequent utterance, to spell a word which is not in the current vocabulary with alphabetic input, and to mix and match different forms of alphabetic input, including forms which are unambiguous, ambiguous with regard to character, and ambiguous with regard to length.
Returning now to FIG. 14, we have now explained how functions 1436 and 1438 respond to a tap on a word in the SIP buffer by calling the displayChoiceList routine, which in turn causes a correction window such as the correction window 1200 shown in FIG. 12 to be displayed. The ability to display a correction window with its associated choice list merely by tapping on a word provides a fast and convenient way of enabling a user to correct single-word errors. If the user double-taps on a selection in the SIP buffer, functions 1440 through 1444 escape from any current correction window that might be displayed and start SIP buffer recognition according to the current recognition duration modes and settings, using the current language context of the current selection. The recognition duration logic responds to the duration associated with such a double-click in determining whether to respond as if there has been either a press or a click for the purposes described above with regard to FIG. 18. The output of any such recognition will replace the current selection.
If the user taps in any portion of the SIP buffer which does not include text, such as between words or before or after the text in the buffer, function 1446 causes functions 1448 through 1452 to be performed. Function 1448 places a cursor at the location of the tap. If the tap is located at any point in the SIP buffer window which is after the end of the text in the SIP buffer, the cursor will be placed after the last word in that buffer. If the tap is a double tap, functions 1450 and 1452 start SIP buffer recognition at the new cursor location according to the current recognition duration modes and other settings, using the duration of the second touch of the double tap to determine whether it is to be responded to as a press or a click.
FIG. 15 is a continuation of the pseudocode described above with regard to FIGS. 13 and 14.
If the user drags across part of one or more words in the SIP buffer, functions 1502 and 1504 call the displayChoiceList routine described above with regard to
FIG. 22, with all of the words that are wholly or partially dragged across as the current selection and with the acoustic data associated with the recognition of those words, if any, as the first entry in the utterance list.
If the user drags across an initial part of an individual word in the SIP buffer, functions 1506 and 1508 call the displayChoiceList function with that word as the selection, with that word added to the notChoiceList, with the dragged initial portion of the word as the filter string, and with the acoustic data associated with that word as the first entry in the utterance list. This programming interprets the fact that a user has dragged across only the initial part of a word as an indication that the entire word is not the desired choice, as indicated by the fact that the word is added to the notChoiceList.
If a user drags across the ending of an individual word in the SIP buffer, functions 1510 and 1512 call the displayChoiceList routine with the word as the selection, with the selection added to the notChoiceList, with the undragged initial portion of the word as the filter string, and with the acoustic data associated with the selected word as the first entry in the utterance list.
If an indication is received that the SIP buffer has more than a certain amount of text, functions 1514 and 1516 display a warning to the user that the buffer is close to full. In the disclosed embodiment this warning informs the user that the buffer will be automatically cleared if more than an additional number of characters are added to the buffer, and requests that the user verify that the text currently in the buffer is correct and then press Talk or Continue, which will clear the buffer.
If an indication is received that the SIP buffer has received text input, function 1518 causes steps 1520 through 1528 to be performed. Function 1520 tests to see if the cursor is currently at the end of the SIP buffer. If not, function 1522 outputs to the operating system a number of backspaces equal to the distance from the last letter of the SIP buffer to the current cursor position within that buffer. Next, function 1526 causes the text input, which can be composed of one or more characters, to be output into the SIP buffer at its current cursor location. Steps 1527 and 1528 output the same text sequence and any following text in the SIP buffer to the text input of the operating
system. Outputting the text following the received text to the operating system causes any change made to the text of the SIP buffer that corresponds to text that has already been supplied to the operating system to be reflected in that previously supplied text.
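The synchronization performed by functions 1520 through 1528 can be sketched as backspacing over everything after the cursor and re-sending it along with the new text. The buffer model, function name, and use of "\b" for a backspace keystroke are all illustrative assumptions.

```python
# Sketch of functions 1520-1528: keep the operating system's copy of the
# text in sync with the SIP buffer. A "\b" stands in for a backspace
# keystroke sent to the operating system; this model is an assumption.
def sync_text_input(buffer, cursor, new_text):
    """Return (keys_to_send, new_buffer, new_cursor)."""
    tail = buffer[cursor:]            # text after the cursor, already sent
    keys = "\b" * len(tail)           # function 1522: back up over the tail
    keys += new_text + tail           # functions 1526-1528: resend the text
    new_buffer = buffer[:cursor] + new_text + tail
    return keys, new_buffer, cursor + len(new_text)
```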
text input has been generated in response to speech recognition. If so, function 1537 calls the displayChoiceList routine for the recognized text, and function 1538 turns off correction mode. Normally, the calling of the displayChoiceList routine switches the system to correction mode, but function 1538 prevents this from being the case when one-at-a-time mode is being used. As has been described above, this is because in one-at-a-time mode a correction window is displayed automatically each time speech recognition is performed upon an utterance of a word, and thus there is a relatively high likelihood that a user intends input supplied to the non-correction window aspects of the SIP interface to be used for purposes other than input into the correction window. On the other hand, when the correction window is being displayed as a result of specific user input indicating a desire to correct one or more words, correction mode is entered so that certain non-correction window inputs will be directed to the correction window.
Function 1539 tests to see if the following set of conditions is met: the SIP is in one-at-a-time mode, a correction window is displayed, but the system is not in correction mode. This is the state of affairs which normally exists after each utterance of a word in one-at-a-time mode. If these conditions exist, function 1540 responds to any of the inputs described above with regard to FIGS. 13, 14, and 15 by confirming recognition of the first choice in the correction window for purposes of causing that choice to be introduced as text output into the SIP buffer and to the operating system, for purposes of updating the current language context for the recognition of one or more subsequent words, for the purpose of providing data for use in updating the language model, and for the purpose of providing data for updating acoustic models. This enables a user to confirm the prior recognition of a word in one-at-a-time mode by any one of a large number of inputs which can also be used to advance the recognition process.
It should be appreciated that if the user is in one-at-a-time mode and generates inputs indicating a desire to correct the word shown in a choice list, the SIP will be set to the correction mode, and subsequent input during the continuation of that mode will not cause operation of function 1540.
If the Escape button 1210 of a correction window shown in FIG. 12 is pressed, functions 1544 and 1546 cause the SIP program to exit the correction window without changing the current selection.
If the Delete button 1212 of the correction window shown in FIG. 12 is pressed, functions 1548 and 1550 delete the current selection in the SIP buffer and send an output to the operating system which causes a corresponding change to be made to any text in the application window which corresponds to that in the SIP buffer.
If the New button 1214 shown in FIG. 12 is pressed, function 1552 causes functions 1553 through 1556 to be performed. Function 1553 deletes the current selection in the SIP buffer corresponding to the correction window and sends an output to the operating system so as to cause a corresponding change to text in the application window. Function 1554 sets the recognition mode to the new utterance default, which will normally be the large-vocabulary recognition mode and can be set by the user to be either a continuous or a discrete recognition mode. Function 1556 starts SIP buffer recognition using the current recognition duration mode and other recognition settings. SIP buffer recognition is recognition that provides an input to the SIP buffer, according to the operation of functions 1518 through 1538 described above.
FIG. 16 continues the illustration of the response of the main loop of the SIP program to input received during the display of a correction window.
If the Re-utterance button 1216 of FIG. 12 is pressed, function 1602 causes functions 1603 through 1610 to be performed. Function 1603 sets the SIP program to the correction mode if it is not currently in it. This will happen if the correction window has been displayed as a result of a discrete word recognition in one-at-a-time mode and the user responds by pressing a button in the correction window, in this case the Re-utterance button, indicating an intention to use the correction window for correction purposes. Next, function
1604 sets the recognition mode to the current recognition mode associated with re-utterance recognition. Then function 1606 receives one or more utterances according to the current re-utterance recognition duration mode and other recognition settings, including vocabulary. Next, function 1608 adds the one or more utterances received by function 1606 to the utterance list for the correction window selection, along with an indication of the vocabulary mode at the time of those utterances and whether continuous or discrete recognition was in effect. This causes the utterance list 2400 shown in FIGS. 24 and 25 to have an additional utterance.
Then function 1610 calls the displayChoiceList routine of FIG. 22, described above. This in turn will call the getChoices function described above with regard to FIG. 23, and will cause functions 2306 through 2336 to perform re-utterance recognition using the new utterance list entry.
If the current filter entry mode is a speech recognition mode, function 1616 starts filter recognition according to the current filter recognition duration mode and settings. This causes any input generated by such recognition to be directed to the cursor of the current filter string. If, on the other hand, the current filter entry mode is an entry window mode, functions 1618 and 1620 call the appropriate entry window. As described below, in the embodiment of the invention shown, these entry window modes correspond to a character recognition entry mode, a handwriting recognition entry mode, and a keyboard entry mode.
If the user presses the Word Form button 1220 shown in FIG. 12, functions 1622 through 1624 cause the correction mode to be entered if the SIP program is not currently in it, and cause the word form list routine of FIG. 27 to be called for the current first choice word.
Until a user provides input to the correction window that causes a redisplay of the correction window, the current first choice will normally be the selection for which the correction window has been called. This means that by selecting one or more words in the SIP buffer and by pressing the Word Form button in the correction window, a user can rapidly select a list of alternate forms for any such selection.
FIG. 27 illustrates the function of the word form list routine. If a correction window is already displayed when it is called, functions 2702 and 2704 treat the current best choice as the selection for which the word form list will be displayed. If the current selection is one word, function 2706 causes functions 2708 through 2714 to be performed. If the current selection has any homonyms, function 2708 places them at the start of the word form choice list. Next, step 2710 finds the root form of the selected word, and function 2712 creates a list of alternate grammatical forms for the word. Then function 2714 alphabetically orders all these grammatical forms in the choice list after any homonyms which may have been added to the list by function 2708.
If, on the other hand, the selection is comprised of multiple words, function 2716 causes functions 2718 through 2728 to be performed. Function 2718 tests to see if the selection has any spaces between its words. If so, function 2720 adds to the choice list a copy of the selection which has no such spaces between its words, and function 2722 adds a copy of the selection with the spaces replaced by hyphens. Although not shown in FIG. 27, additional functions can be performed to replace hyphens with spaces or with the absence of spaces. If the selection has multiple elements subject to the same spelled/non-spelled transformation, function
2726 adds a copy of the selection and all prior choices with those transformations to the choice list. For example, this will transform a series of number names into a numerical equivalent, or occurrences of the word "period" into corresponding punctuation marks. Next, function 2728 alphabetically orders the choice list.
Once the choice list has been created, either for a single word or a multiword selection, function 2730 displays a correction window showing the selection as the first choice, the filter cursor at the start of the first choice, and a scrollable choice list. In some embodiments, where the selection is a single word which has a single sequence of characters that occurs in all its grammatical forms, the filter cursor could be placed after that common sequence, with the common sequence indicated as an unambiguous filter string.
In some embodiments of the invention, the word form list provides one single alphabetically ordered list of optional word forms. In other embodiments, options can be ordered in terms of frequency of use, or there could be a first and a second alphabetically ordered choice list, with the first choice list containing a set of the most commonly selected optional forms which will fit in the correction window at one time, and the second list containing less commonly used word forms.
As will be demonstrated below, the word form list provides a very rapid way of correcting a very common type of speech recognition error, that is, an error in which the first choice is a homonym of the desired word or is an alternate grammatical form of it.
If the user presses the Capitalization button 1222 shown in FIG. 12, functions 1626 through 1628 will enter the correction mode if the system is not currently in it and will call the capitalization cycle function for the correction window's current first choice. The capitalization cycle will cause a sequence of one or more words which do not all have initial capitalization to have initial capitalization of each word, will cause a sequence of one or more words which all have initial capitalization to be changed to an all-capitalized form, and will cause a sequence of one or more words which have an all-capitalized form to be changed to an all-lowercase form. By repeatedly pressing the Capitalization button, a user can rapidly select between these forms.
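The three-state capitalization cycle described above can be sketched directly. The function name is illustrative, and single-letter words are not treated specially in this sketch.

```python
# Sketch of the capitalization cycle: initial caps -> all caps -> all
# lower case -> initial caps, advancing one step per button press.
def capitalization_cycle(words):
    if all(w.isupper() for w in words):        # all caps -> all lower case
        return [w.lower() for w in words]
    if all(w[:1].isupper() and not w.isupper() for w in words):
        return [w.upper() for w in words]      # initial caps -> all caps
    return [w.capitalize() for w in words]     # otherwise -> initial caps
```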
If the user selects the Play button 1224 shown in FIG. 12, functions 1630 and 1632 cause an audio playback of the first entry in the utterance list associated with the correction window's associated selection, if any such entry exists. This enables a user to hear exactly what was spoken with regard to a mis-recognized sequence of one or more words. Although not shown, the preferred embodiments enable a user to select a setting which causes such audio to be played automatically when a correction window is first displayed. If the Add Word button 1226 shown in
FIG. 12 is pressed when it is not displayed in a grayed state, functions 1634 and 1636 call a dialog box that allows a user to enter the current first choice word into either the active or backup vocabulary. In this particular embodiment of the SIP recognizer, the system uses a subset of its total vocabulary as the active vocabulary that is available for recognition during normal recognition using the large vocabulary mode. Function 1636 allows a user to make a word that is normally in the backup vocabulary part of the active vocabulary. It also allows the user to add a word that is in neither vocabulary, but which has been spelled in the first choice window by use of alphabetic input, to either the active or backup vocabulary.
The Add Word button 1226 will only be in a non-grayed state when the first choice word is not currently in the active vocabulary. This provides an indication to the user that he or she may want to add the first choice to either the active or backup vocabulary.
If the user selects the Check button 1228 shown in FIG. 12, functions 1638 through 1648 remove the current correction window, output its first choice to the SIP buffer, and feed to the operating system a sequence of keystrokes necessary to make a corresponding change to text in the application window.
If the user taps one of the choices 1230 shown in the correction window of FIG. 12, functions 1650 through 1653 remove the current correction window, output the selected choice to the SIP buffer, and feed the operating system a sequence of keystrokes necessary to make the corresponding change in the application window.
If the user taps on one of the Choice Edit buttons 1232 shown in FIG. 12, function 1654 causes functions 1656 through 1658 to be performed. Function 1656 changes to correction mode if the system is not already in it, and makes the choice associated with the tapped Choice Edit button the first choice and the current filter string; then function 1658 calls displayChoiceList with the new filter string. As will be described below, this enables a user to select a choice word or sequence of words as the current filter string and then to edit that filter string, normally by deleting any characters from its end which disagree with the desired word.
If the user drags across one or more initial characters of any choice, including the first choice, functions 1664 through 1666 change the system to correction mode if it is not in it, and call displayChoiceList with the dragged choice added to the choice list and with the dragged initial portion of the choice as the filter string. These functions allow a user to indicate that a current choice is not the desired first choice, but that the dragged initial portion of it should be used as a filter to help find the desired choice.
FIG. 17 provides the final continuation of the list of responses which the SIP recognizer makes to correction window input.
If the user drags across the ending of a choice, including the first choice, functions 1702 and 1704 enter the correction mode if the system is not already in it, and call displayChoiceList with the partially dragged choice added to the notChoiceList and with the undragged initial portion of the choice as the filter string.
If the user drags across two choices in the choice list, functions 1706 through 1708 enter the correction mode if the system is not currently in it, and call displayChoiceList with the two choices added to the notChoiceList and with the two choices as the beginning and ending words in the definition of the current filter range.
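A filter range defined by two choices can plausibly be interpreted as admitting candidates that fall alphabetically between the two dragged words. The sketch below makes that assumption explicit; inclusive bounds and the function name are illustrative choices, not taken from the specification.

```python
# Illustrative sketch of a filter range check: a candidate passes if it
# falls alphabetically between the two dragged choices. Inclusive bounds
# are an assumption.
def in_filter_range(candidate, word_a, word_b):
    low, high = sorted([word_a, word_b])   # drag order should not matter
    return low <= candidate <= high
```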
If the user taps between characters on the first choice, functions 1710 through 1712 enter the correction mode if the SIP is not already in it, and move the filter cursor to the tapped location. No call is made to displayChoiceList at this time because the user has not yet made any change to the filter.
If the user enters a backspace by pressing the Backspace button 1116 when in correction mode, as described above with regard to function 1334 of FIG. 13, function 1714 causes functions 1718 through 1720 to be performed. Function 1718 calls the filter edit routine of FIGS. 28 and 29 with a backspace as input.
As will be illustrated with regard to FIG. 28, the filter edit routine 2800 is designed to aid the user in the editing of a filter with a combination of unambiguous, ambiguous, and/or ambiguous length filter elements.
This routine includes function 2802, a test to see if there are any characters in the choice with which it has been called before the current location of the filter cursor. If so, function 2804 defines the filter string with which the routine has been called as the old filter string, and function 2806 makes the characters in the choice with which the routine has been called before the location of the filter cursor the new filter string, with all the characters in that string unambiguously defined. This enables any part of a first choice before the location of an edit to be automatically confirmed as correct filter characters.
Next, the function 2807 tests to see if the input with which the filter edits has have been called is a backspace. If so, it causes functions 2808 through 2812 to be performed. Functions 2808 and 2810 delete the last character of the new filter string if the filter cursor is a non-selection cursor. If the filter cursor corresponds to a selection of one or more characters in the current first choice, these characters were already not to" be included in the new filter by the operation of function 2806 just described. Then a—function 2812 clears the old filter
string because when the input to the filter edit routine is a backspace it is assumed that no portions of the prior filter to the right of the location of the backspace are intended for future inclusion in the filter. This deletes any ambiguous as well as unambiguous elements in the filter string which might previously have been to the right of the location of the filter cursor.
If the input with which the filter edit routine is called is one or more unambiguous characters, functions 2814 and 2816 add the one or more unambiguous characters to the end of the new filter string.
If the input to the filter edit routine is a sequence of one or more ambiguous characters of fixed length, functions 2818 and 2820 place an element representing each ambiguous character in the sequence at the end of the new filter.
If the input to the filter edit routine is an ambiguous length element, function 2822 causes functions 2824 through 2832 to be performed. Function 2824 selects the best-scoring sequences of letters associated with the ambiguous input which, if added to the prior unambiguous part of the filter, would correspond to all or an initial part of a vocabulary word. It should be remembered that when this function is performed, all of the prior portions of the new filter string will have been confirmed by the operation of function 2806, described above. Next, function 2826 tests to see if there are any sequences selected by function 2824 above a certain minimum score. If not, it causes function 2828 to select the best-scoring letter sequences independent of vocabulary. This is done because failure of the test in function 2826 indicates that the ambiguous filter is being used to spell an out-of-vocabulary word. Next, functions 2830 and 2832 associate the character sequences selected by the operation of functions 2824 through 2828 with a new ambiguous filter element and add that new ambiguous filter element to the end of the new filter string.
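The selection performed by functions 2824 through 2832 can be sketched in Python as follows. This is an illustrative reconstruction only: the function name, the score values, the vocabulary, and the cut-off scheme are all assumptions, not the implementation described in the figures.

```python
# Hypothetical sketch of functions 2824-2832: building the character
# sequences for a new ambiguous-length filter element. Scores, names,
# and the vocabulary are invented for illustration.
def build_ambiguous_element(confirmed_prefix, scored_sequences, vocabulary,
                            min_score=0.5, keep=3):
    """scored_sequences: (letter_sequence, score) pairs produced by the
    ambiguous recognition of the spelling input. Returns the sequences
    to be represented by the new ambiguous element."""
    # Function 2824: prefer sequences that, appended to the confirmed
    # part of the filter, begin some vocabulary word.
    in_vocab = [(seq, score) for seq, score in scored_sequences
                if any(word.startswith(confirmed_prefix + seq)
                       for word in vocabulary)]
    # Functions 2826/2828: if nothing vocabulary-consistent scores well
    # enough, assume the user is spelling an out-of-vocabulary word and
    # fall back to the best sequences independent of the vocabulary.
    if any(score >= min_score for _, score in in_vocab):
        candidates = in_vocab
    else:
        candidates = scored_sequences
    candidates = sorted(candidates, key=lambda p: p[1], reverse=True)[:keep]
    return [seq for seq, _ in candidates]

seqs = [("em", 0.9), ("en", 0.6), ("imb", 0.2)]
vocab = ["embedded", "end", "interface"]
print(build_ambiguous_element("", seqs, vocab))
```

The fallback in the middle of the sketch mirrors the two-stage selection of functions 2824 and 2828: vocabulary-constrained first, unconstrained only when the constrained sequences score poorly.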
Next, a loop 2834 is performed for each filter element in the old filter string. This loop comprises the functions 2836 through 2850 shown in the remainder of FIG. 28 and the functions 2900 through 2922 shown in FIG. 29.
If the current old filter string element of the loop 2834 is an ambiguous, fixed length element that extends beyond a new fixed length element which has been added to the new filter string by functions 2814 through 2820, functions 2836 and 2838 add the old element to the end of the new filter string if it extends beyond those new elements. This is done because editing of a filter string other than by use of the Backspace button does not delete previously entered filter information that corresponds to part of the prior filter to the right of the new edit.
If the current old element of the loop 2834 is an ambiguous, fixed length element that extends beyond some sequences in a new ambiguous length element that has been added to the end of the new filter string by operation of functions 2822 through 2832, function 2840 causes functions 2842 through 2850 to be performed. Function 2842 performs a loop for each character sequence represented by the new ambiguous length element that has been added to the filter string. The loop performed for each such character sequence of the new ambiguous length element includes a loop 2844 performed for each character sequence which agrees with
the current old ambiguous fixed length element of the loop 2834. This inner loop 2844 includes a function 2846, which tests to see if the old element matches and extends beyond the current sequence in the new element. If so, function 2848 adds to the list of character sequences represented by the new ambiguous length element a new sequence of characters corresponding to the current sequence from the new element plus the portion of the sequence from the old element that extends beyond that current sequence from the new element.
If the current old element is an ambiguous length element that contains any character sequences that extend beyond a new fixed length element that has been added to the new filter, function 2900 of FIG. 29 causes functions 2902 through 2910 to be performed. Function 2902 is a loop which is performed for each sequence represented by the old ambiguous length element. It is composed of a test 2904 that checks to see if the current sequence from the old element matches and extends beyond the new fixed length element. If so, function 2906 creates a new character sequence corresponding to the extension from the old element that extends beyond the new one. After this loop has been completed, function 2908 tests to see if any new sequences have been created by function 2906, and if so, it causes function 2910 to add a new ambiguous length element to the end of the new filter, after the new element. This new ambiguous length element represents the possibility of each of the sequences created by function 2906. Preferably a probability score is associated with each such new sequence based on the relative probability scores of each of the character sequences which were found by the loop 2902 to match the current new fixed length element.
If the current old element is an ambiguous length element that has some character sequences that extend beyond some character sequences in a new ambiguous length element, function 2912 causes functions 2914 through 2920 to be performed. Function 2914 is a loop that is performed for each character sequence in the new ambiguous length element. It is composed of an inner loop 2916 which is performed for each character sequence in the old ambiguous length element. This inner loop is composed of functions 2918 and 2920, which test to see if the character sequence from the old element matches and extends beyond the current character sequence from the new element. If so, they associate with the new ambiguous length element a new character sequence corresponding to the current sequence from the new element plus the extension from the current old element character sequence. Once all the functions in the loop 2834 are completed, function 2924 returns from the call to filter edit with the new filter string which has been created by that call.
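The carry-over of old filter information past a new edit, as performed by functions 2902 through 2910, can be sketched as follows. This is a simplified reconstruction under the assumption that "matches and extends beyond" means the old sequence begins with the new fixed element and is longer; the function name and data shapes are invented.

```python
# Hypothetical sketch of functions 2902-2910: keeping the parts of an old
# ambiguous-length element that extend beyond a newly entered fixed
# filter element. An ambiguous element is modeled as a list of strings.
def carry_over_old_element(new_fixed, old_sequences):
    """Return the extensions to place in a new ambiguous-length element
    appended after the fixed element, or None if nothing survives."""
    extensions = [seq[len(new_fixed):] for seq in old_sequences
                  if seq.startswith(new_fixed) and len(seq) > len(new_fixed)]
    # Function 2908/2910: only append a new ambiguous element if at least
    # one old sequence matched and extended beyond the new fixed element.
    return extensions or None

print(carry_over_old_element("em", ["embed", "ember", "imp"]))
```

In the routine described above, each surviving extension would also carry a probability score derived from its old sequence; the sketch omits scoring for brevity.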
It should be appreciated that in many embodiments of various aspects of the invention a different and often simpler filter-editing scheme can be used. But it should be appreciated that one of the major advantages of the filter edit scheme shown in FIGS. 28 and 29 is that it enables one to enter an ambiguous filter quickly, such as by continuous letter recognition, and then to subsequently edit it by more reliable alphabetic entry modes, or even by subsequent continuous letter recognition. For example, this scheme would allow a filter entered by continuous letter recognition to be all or partially replaced by input from discrete letter recognition, ICA word recognition, or even handwriting recognition. Under this scheme, when a user edits an earlier part of the filter string, the information contained in the latter part of the filter string is not destroyed unless the user indicates such an intent, which in the embodiment shown is by use of the backspace character.
Returning now to FIG. 17, when the call to filter edit in function 1718 returns, function 1724 calls displayChoiceList for the selection with the new filter string that has been returned by the call to filter edit.
Whenever filtering input is received, either as the result of recognition performed in response to the pressing of the filter key described above with regard to function 1612 of FIG. 16, or by any other means, functions 1722 through 1738 are performed.
Function 1724 tests to see if the system is in one-at-a-time recognition mode and if the filter input has been produced by speech recognition. If so, it causes functions 1726 to 1730 to be performed. Function 1726 tests to see if a filter character choice window, such as window
3906 shown in FIG. 39, is currently displayed. If so, function 1728 closes that filter choice window and function 1730 calls filter edit with the first choice filter character as input. This causes all previous characters in the filter string to be treated as an unambiguously defined filter sequence. Regardless of the outcome of the test of function 1726, a function 1732 calls filter edit for the new filter input which is causing operation of function 1722 and the functions listed below it. Then, function 1734 calls displayChoiceList for the current selection and the new filter string. Then, if the system is in the one-at-a-time mode, functions 1736 and 1738 call the filter character choice routine with the filter string returned by filter edit and with the newly recognized filter input character as the selected filter character.
FIG. 30 illustrates the operation of the filter character choice subroutine 3000. It includes a function 3002 which tests to see if the selected filter character with which the routine has been called corresponds to either an ambiguous character or an unambiguous character in the current filter string having multiple best choice characters associated with it. If this is the case, function 3004 sets a filter character choice list equal to all characters associated with that character. If the number of characters is more than will fit on the filter character choice list at one time, the choice list can have scrolling buttons to enable the user to see such additional characters. Preferably the choices are displayed in alphabetical order to make it easier for the user to more rapidly scan for a desired character. The filter character choice routine of FIG. 30 also includes a function 3006 which tests to see if the selected filter character corresponds to a character of an ambiguous length filter string element in the current filter string. If so, it causes functions 3008 through 3014 to be performed. Function 3008 tests to see if the selected filter character is the first character of the ambiguous length element. If so, function 3010 sets the filter character choice list equal to all the first characters in any of the ambiguous element's associated character sequences. If the selected filter character does not correspond to the first character of the ambiguous length element, functions 3012 and 3014 set the filter character choice list equal to all characters in any character sequences represented by the ambiguous element that are preceded by the same characters as is the selected filter character in the current first choice. Once either functions 3002 and 3004 or functions 3006 through 3014 have created a filter character choice list, function 3016 displays that choice list in a window, such as the window 3906 shown in FIG. 39.
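The choice-list construction of functions 3006 through 3014 can be sketched as follows. This is an illustrative reconstruction in Python; the function name and the modeling of an ambiguous-length element as a list of character sequences are assumptions made for the sketch.

```python
# Hypothetical sketch of functions 3006-3014 of the filter character
# choice subroutine: collect the alternative characters at a given
# position within an ambiguous-length filter element.
def filter_char_choices(sequences, first_choice, position):
    """Return the alphabetized choice list for the filter character at
    `position` within an ambiguous-length element."""
    if position == 0:
        # Function 3010: all first characters of the represented sequences.
        chars = {seq[0] for seq in sequences if seq}
    else:
        # Functions 3012-3014: characters preceded by the same characters
        # as precede the selected character in the current first choice.
        prefix = first_choice[:position]
        chars = {seq[position] for seq in sequences
                 if seq.startswith(prefix) and len(seq) > position}
    # Function 3016 displays the list; alphabetical order, as the text
    # notes, makes it easier to scan for the desired character.
    return sorted(chars)

print(filter_char_choices(["em", "en", "im"], "em", 1))
```

Scrolling buttons for overflow, mentioned in the description, are a display concern and are omitted here.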
If the SIP program receives a selection by a user of a filter character choice in a filter character choice window, function 1740 causes functions 1742 through 1746 to be performed. Function 1742 closes the filter choice window in which such a selection has been made. Function 1744 calls the filter edit function for the current filter string with the character that has been selected in the filter choice window as the new input. Then function 1746 calls the displayChoiceList routine with the new filter string returned by filter edit.
If a drag upward from a character in a filter string is received, of the type shown in the correction windows 4526 and 4538 of FIG. 45, function 1747 causes functions 1748 through 1750 to be performed. Function 1748 calls the filter character choice routine for the character which has been dragged upon, which causes a filter character choice window to be generated for it if there are any other character choices associated with that character. If the drag is released over a filter choice character in this window, function 1749 generates a selection of the filter character choice over which the release takes place. Thus it causes the operation of the functions 1740 through 1746 which have just been described. If the drag is released other than on a choice in the filter character choice window, function 1750 closes the filter choice window.
If a re-utterance is received other than by pressing of
selection so as to perform re-recognition using the new utterance.
just been described can be used to dictate a sequence of text. In this particular sequence, the interface is illustrated as being in the one-at-a-time mode, which is a discrete recognition mode that causes a correction window with a choice list to be displayed every time a discrete utterance is recognized. In FIG. 31, numeral 3100 points to the screenshot of the PDA screen showing the user tapping the Talk button 1102 to commence dictation starting in a new linguistic context. As indicated by the highlighting of the Large Vocabulary button 1132, the SIP recognizer is in the large vocabulary mode. The sequence of separated dots on the Continuous/Discrete button 1134 indicates that the recognizer is in a discrete recognition mode. It is assumed the SIP is in the Press And Click To End Of Utterance Recognition duration mode described with regard to numerals 1810 to 1816 of FIG. 18. As a result, the click of the Talk button causes recognition to take place until the end of the next utterance. Numeral 3102 represents an utterance by the user of the word "this". Numeral 3104 points to an image of the screen of the PDA after it responds to this utterance by placing the recognized text 3106 in the SIP text window 1104, outputting this text to the application window 1106, and displaying a correction window 1200 which includes
the recognized word in the first choice window 1202 and a first choice list 1208.
In the example of FIG. 31, the user taps the Capitalization button 1222 as pointed to by the numeral 3108. This causes the PDA screen to have the appearance pointed to by numeral 3110, in which the current first choice and the text output in the SIP buffer and the application window are changed to have initial capitalization.
In the example, the user clicks the Continue button 1104 as pointed to by numeral 3102 and then utters the word "is" as pointed to by the numeral 3114. In the example, it is assumed this utterance is misrecognized as the word "its", causing the PDA screen to have the appearance pointed to by numeral 3116, in which a new correction window 1200 is displayed having the misrecognized word as its first choice 3118 and a new choice list for that recognition
1208.
FIG. 32 represents a continuation of this example, in which the user clicks the choice word "is" 3200 in the image pointed to by numeral 3202. This causes the PDA screen to have the appearance indicated by the numeral 3204, in which the correction window has been removed and corrected text appears in both the SIP buffer window and the application window.
In the screenshot pointed to by numeral 3206 the user is shown tapping the letter name vocabulary button 1130, which changes the current recognition mode to the letter name vocabulary, as is indicated by the highlighting of the button 1130. As indicated above with regard to functions 1410 and 1412, the tapping of this button commences speech recognition according to the current recognition duration mode. This causes the system to recognize the subsequent utterance of the letter name "e", as pointed to by numeral 3208.
In order to emphasize the ability of the present interface to quickly correct recognition mistakes, the example assumes that the system misrecognizes this letter as the letter "p" 3211, as indicated by the correction window that is displayed in one-at-a-time mode in response to the utterance 3208. As can be seen in the correction window pointed to by 3210, the correct letter "e" is, however, one of the choices shown in the correction window. In the view of the correction window pointed to by numeral 3214, the user taps on the choice 3212, which causes the PDA screen to have the appearance pointed to by numeral 3216 in which the correct letter is entered both in the SIP buffer and the application window. FIG. 33 illustrates a continuation of this example, in which the user taps on the Punctuation Vocabulary button 1124, as indicated in the accompanying screenshot. This starts utterance recognition and changes the recognition vocabulary to the punctuation vocabulary, so that the utterance of the word "period", pointed to by the numeral 3300, is recognized as the punctuation mark ".", which is shown in the first choice window followed by that punctuation mark's name to make it easier for the user to recognize.
Since, in the example, this is the correct recognition, the user confirms it and starts recognition of a new utterance using the letter name vocabulary by pressing the button 1130, as shown in the screenshot 3306, and saying the utterance 3308 of the next letter. This process of entering letters followed by periods is repeated until the PDA screen has the appearance shown by numeral 3312. At this point it is assumed the user drags across the text "e. l. v. i. s." as shown in the screenshot 3314, which causes that text to be selected and causes the correction window 1200 in the screenshot 3400 near the upper left-hand corner of FIG. 34 to be displayed. Since it is assumed that the selected text string is not in the current vocabulary, there are no alternate choices displayed in this choice list. In the view of the correction window pointed to by 3402, the user taps the Word Form button 1220, which calls the word form list routine described above with regard to FIG. 27. Since the selected text string includes spaces, it is treated as a multiple-word selection, causing the portion of the routine shown in FIG. 27 illustrated by functions 2716 through 2728 to be performed. This produces a choice list such as that pointed to by 3404, including a choice 3406 in which the spaces have been removed from the correction window's selection. In the example, the user taps the Edit button 1232 next to the choice 3406. As indicated in the view of the correction window pointed to by numeral 3410, this causes the choice 3406 to be selected as the first choice. As indicated in the view of the correction window pointed to by 3412, the user taps on the Capitalization button 1222 until the first choice becomes all capitalized, at which point the correction window has the appearance indicated in the screenshot 3414.
At this point the user clicks on the Punctuation Vocabulary button 1124 as pointed to by 3416 and says the utterance "comma" pointed to by 3418. In the example it is assumed that this utterance is correctly recognized, causing a correction window 1200, pointed to by the numeral 3420, to be displayed and the former first choice "E.L.V.I.S." to be outputted as text.
FIG. 35 is a continuation of this example. In it, it is assumed that the user clicks the Large Vocabulary button as indicated by numeral 3500 and then says the utterance "the" 3502. This causes the correction window 3504 to be displayed. The user responds by confirming this recognition by again pressing the Large Vocabulary button as indicated by 3506 and saying the utterance "embedded" pointed to by 3508. In the example, this causes the correction window 3510 to be displayed, in which the utterance has been misrecognized as the word "imbedded" and in which the desired word is not shown on the first choice list. Starting at this point, as is indicated by the comment 3512, a plurality of different correction options will be illustrated.
FIG. 36 illustrates the correction option of scrolling through the first and second choice lists associated with the misrecognition. In the view of the correction window pointed to by 3604, the user is shown tapping the page-down scroll button 3600 in the scroll bar 3602 of the correction window, which causes the first choice list 3603 to be replaced by the first screenful of the second choice list 3605, as indicated in the view of the correction window
3606. As can be seen in this view, the slide bar 3608 of the correction window has moved down below a horizontal bar
3609, which defines the position in the scroll bar associated with the end of the first choice list. In the example, the desired word is not in the portion of the alphabetically ordered second choice list shown in view
3606, and thus the user presses the page-down button of the scroll bar as indicated by 3610. This causes the correction window to have the appearance shown in view 3612, in which a new screenful of alphabetically listed choices is shown. In the example, the desired word "embedded" is shown on this choice list, as is indicated by the numeral 3616. In the example, the user clicks on the choice button 3619 associated with this desired choice, as shown in the view of
the correction window pointed to by 3618. This causes the correction window to have the view pointed to by 3620, in which this choice is displayed in the first choice window. In the example, the user taps the Capitalization button, as pointed to by numeral 3622, which causes this first choice to have initial capitalization, as shown in the screenshot 3624.
Thus it can be seen that the SIP user interface provides a rapid way to allow a user to select from among a relatively large number of recognition choices. In the embodiment shown, the first choice list is composed of up to six choices, and the second choice list can include up to three additional screens of up to 18 additional choices. Since the choices are arranged alphabetically and since all four screens can be viewed in less than a second, this enables the user to select from among up to 24 choices very quickly.
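The two-tier choice list just described can be sketched as follows. This is an illustrative reconstruction; the function name and parameters are assumptions, but the structure (a best-scoring first list of up to six, followed by an alphabetized second list shown as up to three screens) follows the description above.

```python
# Sketch of the two-tier choice list: up to six best-scoring candidates
# in the first list, then up to 18 more, alphabetized, in screens of six.
def paginate_choices(candidates, first_n=6, per_screen=6, max_screens=3):
    """candidates are assumed to arrive in best-score-first order."""
    first_list = candidates[:first_n]
    # Second choice list: alphabetical, capped at three screens of six.
    rest = sorted(candidates[first_n:])[:per_screen * max_screens]
    screens = [rest[i:i + per_screen] for i in range(0, len(rest), per_screen)]
    return first_list, screens

first, screens = paginate_choices([f"word{i:02d}" for i in range(24)])
print(len(first), len(screens))
```

With 24 candidates this yields the six-choice first list and three alphabetized screens, matching the "up to 24 choices" figure in the text.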
FIG. 37 illustrates the method of filtering choices by dragging across an initial part of a choice, as has been described above with regard to functions 1664 through 1666 of FIG. 16. In the example of this figure, it is assumed that the first choice list includes a choice 3702, shown in the view of the correction window pointed to by 3700, which includes the first six characters of the desired word "embedded". As is illustrated in the correction window 3704, the user drags across these initial six letters and the system responds by displaying a new correction window limited to recognition candidates that start with an unambiguous filter corresponding to the six characters, as is displayed in the screenshot 3706. In this screenshot the desired word is the first choice, the first six unambiguously confirmed letters of the first choice are shown highlighted as indicated by the box 3708, and the filter cursor 3710 is also illustrated.
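Unambiguous-prefix filtering of this kind reduces to a simple candidate test, which can be sketched as follows. The function name and the candidate words are invented for illustration.

```python
# Minimal sketch of unambiguous-prefix filtering (functions 1664-1666):
# dragging across the first characters of a choice confirms them as an
# unambiguous filter, and candidates are limited to words starting with it.
def filter_by_prefix(candidates, prefix):
    """Keep only recognition candidates that begin with the confirmed
    filter characters, preserving their score order."""
    return [word for word in candidates if word.startswith(prefix)]

choices = ["imbedded", "embedded", "embodied", "embezzle", "ended"]
print(filter_by_prefix(choices, "embedd"))
```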
FIG. 38 illustrates the method of filtering choices by dragging across two choices in the choice list that has been described above with regard to functions 1706 through 1708 of FIG. 17. In this example, the correction window 3800 displays the desired choice "embedded" as it occurs alphabetically between the two displayed choices 3802 and 3804. As shown in the view 3806, the user indicates that the desired word falls in this range of the alphabet by dragging across these two choices. This causes a new correction window to be displayed in which the possible choices are limited to words which occur in the selected range of the alphabet, as indicated by the screenshot 3808. In this example, it is assumed that the desired word is selected as the first choice as a result of the filtering caused by the selection shown in 3806. In this screenshot the portion of the first choice which forms an initial portion of the two choices selected in the view
3806 is indicated as an unambiguously confirmed portion of the filter, and the filter cursor 3812 is placed after that confirmed filter portion.
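The alphabetic-range filtering of functions 1706 through 1708 can be sketched as follows. This is an illustrative reconstruction; the function name and example words are assumptions, and real candidates would retain their recognition scores.

```python
# Hypothetical sketch of functions 1706-1708: dragging across two choices
# restricts candidates to the alphabetic range between those choices.
def filter_by_range(candidates, low_choice, high_choice):
    """Keep candidates that fall alphabetically between the two choices
    the user dragged across, inclusive of the endpoints."""
    low, high = sorted([low_choice, high_choice])
    return sorted(w for w in candidates if low <= w <= high)

words = ["elder", "embedded", "ember", "emblem", "emerge"]
print(filter_by_range(words, "elder", "ember"))
```

The shared initial characters of the two endpoint choices would then be treated as an unambiguously confirmed filter prefix, as the screenshot description notes.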
FIG. 39 illustrates a method in which alphabetic filtering is used in one-at-a-time mode to help select the desired word choice. In this example, the user presses the Filter button as indicated in the correction window view 3900. It is assumed that the default filter vocabulary is the letter name vocabulary. Pressing the Filter button starts speech recognition for the next utterance, and the user says the letter "e", as indicated by 3902. This causes the correction window 3904 to be shown, in which it is assumed that the filter character has been misrecognized as a "p". In the embodiment shown, in one-at-a-time mode, alphabetic input also has a choice list displayed for its recognition. In this case, it is a
filter character choice list window 3906 of the type described above with regard to the filter character choice subroutine of FIG. 30. In the example, the user selects the desired filtering character, the letter "e", as shown in the view 3908, which causes a new correction window to be displayed. In the example, the user decides to enter an additional filtering letter by again pressing the Filter button as shown in the view 3912, and then says the utterance "m" 3914. This causes the correction window 3916 to be displayed, which displays the filter character choice window 3918. In this correction window, the filtering character has been correctly recognized, and the user can confirm it either by speaking an additional filtering character or by selecting the correct letter as shown in the window 3916. This confirmation of the desired filtering character causes a new correction window to be displayed with the filter string "em" treated as an unambiguously confirmed filter string. In the example shown in screenshot 3920, this causes the desired word to be recognized.
FIG. 40 illustrates a method of alphabetic filtering with AlphaBravo, or ICA word, alphabetic spelling. In the screenshot 4000, the user taps on the AlphaBravo button 1128. This changes the alphabet to the ICA word alphabet, as described above with regard to functions 1402 through 1408 of FIG. 14. In this example, it is assumed that the Display_Alpha_On_Double_Click variable has not been set. Thus the function 1406 of FIG. 14 will display the list of ICA words 4002 shown in the screenshot 4004 during the press of the AlphaBravo button 1128. In the example, the user enters the ICA word "echo", which represents the letter "e", followed by a second pressing of the AlphaBravo key, as shown at 4008, and the utterance of a second ICA word, "Mike", which represents the letter "m". In the example, the inputting of these two alphabetic filtering characters successfully creates an unambiguous filter string composed of the desired letters "em" and produces recognition of the desired word, "embedded".
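The mapping from ICA words to filter letters can be sketched as follows. Only a handful of the standard code words are shown, and the function name is an invention of this sketch.

```python
# Small sketch of mapping ICA (International Communication Alphabet)
# words to the letters they represent, as used by the AlphaBravo mode.
# Only a few of the code words are listed here.
ICA_WORDS = {
    "alpha": "a", "bravo": "b", "delta": "d", "echo": "e",
    "mike": "m", "sierra": "s", "tango": "t",
}

def ica_to_letters(utterance):
    """Convert a recognized sequence of ICA words into filter characters."""
    return "".join(ICA_WORDS[word.lower()] for word in utterance.split())

print(ica_to_letters("Echo Mike"))
```

Because each code word is acoustically distinct, recognition of ICA words yields unambiguous filter characters, unlike the easily confused letter names.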
FIG. 41 illustrates a method in which the user selects part of a choice as a filter and then uses AlphaBravo spelling to complete the selection of a word which is not in the system's vocabulary, in this case the made-up word "embeddedest".
In this example, the user is presented with the correction window 4100, which includes one choice containing the first six letters of the desired word. As shown in the correction window 4104, the user drags across these first six letters, causing those letters to become unambiguously confirmed characters of the current filter string. This results in the correction window 4106. The screenshot 4108 shows the display of this correction window in which the user drags from the Filter button 1218 and releases on the Discrete/Continuous button 1134, changing it from the discrete filter dictation mode to the continuous filter dictation mode, as is indicated by the continuous line on that button shown in the screenshot 4108. In screenshot 4110, the user presses the AlphaBravo button again and says an utterance containing the following ICA words: "Echo, Delta, Echo, Sierra, Tango". This causes the current filter string to correspond to the spelling of the desired word. Since there are no words in the vocabulary matching this filter string, the filter string itself becomes the first choice, as is shown in the correction window 4114. In the view of this window shown at 4116, the user taps on the check button to indicate selection of the first choice,
recognition, and correction of continuous speech. In the screenshot 4200 the user clicks the Clear button 1112 described above with regard to functions 1310 through 1314 of FIG. 13. This causes the text in the SIP buffer 1104 to be cleared without causing any associated change in the corresponding text in the application window 1106, as is indicated by the screenshot 4204. In the screenshot 4204 the user clicks the Continuous/Discrete button 1134, which causes it to change from discrete recognition, indicated on the button by a sequence of dots in the screenshot 4200, to continuous recognition, indicated by a continuous line shown in screenshot 4204. This starts speech recognition according to the current recognition duration mode, and the user says a continuous utterance of the following words: "large vocabulary interface system from voice signal technologies period", as indicated by numeral 4206. The system responds by recognizing this utterance and placing the recognized text in the SIP buffer 1104 and, through the operating system, in the application window 1106, as shown in the screenshot 4208. Because the recognized text is slightly more than fits within the SIP window at one time, the user scrolls in the SIP window as shown at numeral 4210 and then taps on the word "vocabularies" 4214 to cause functions 1436 through 1438 of FIG. 14 to select that word and generate a correction window in response. In the example, the desired word is on the choice list of this correction window 4220, and the user taps on this word to cause it to be selected, which will replace the word "vocabularies" in both the SIP buffer and the application window with that selected word.
Continuing now in FIG. 43, this correction is shown by the screenshot 4300. In the example, the user selects the four mistaken words "enter faces men rum" by dragging across them, as indicated in view 4302. This causes functions 1502 and 1504 to display a choice window with the dragged words as the selection, as is indicated by the view 4304.
FIG. 44 illustrates how the error shown in the correction window at the bottom of FIG. 43 can be corrected by a combination of horizontal and vertical scrolling of the correction window and the choices that are displayed in it. Numeral 4400 points to a view of the same correction window shown at 4304 in FIG. 43. In it, not only is a vertical scroll bar 3602 displayed but also a horizontal scroll bar 4402. In this view, the user is shown tapping the page-down button 3600 in the vertical scroll bar, which causes the portion of the choice list displayed to move from the one-page alphabetically ordered first choice list shown in the view 4400 to the first page of the second alphabetically ordered choice list shown in the view 4404. In the example, none of the recognition candidates in this portion of the second choice list start with a character sequence matching the desired recognition output, which is "interface system from". Thus the user again taps the page-down scroll button
3600, as is indicated by numeral 4408. This causes the correction window to have the appearance shown at 4410, in which two of the displayed choices 4412 start with a character sequence matching the desired recognition output.
In order to see if the endings of these recognition candidates match the desired output, the user scrolls to the right using the horizontal scroll bar 4402, as shown at 4414.
This allows the user to see that the choice 4418 matches the desired output. As is shown at 4420, the user taps on this choice to select it.
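The paged traversal of the two alphabetically ordered choice lists described above can be sketched as a small model. This is an illustrative sketch only; the class and method names (ChoicePager, page_down) are assumptions, not names from the disclosure.

```python
class ChoicePager:
    """Pages through a first and a second alphabetically ordered choice list,
    as in the correction window of Figures 43 and 44 (illustrative sketch)."""

    def __init__(self, first_choices, second_choices, page_size=4):
        # Each list is kept in alphabetical order; pages of the second list
        # follow the pages of the first, so repeated page-down presses walk
        # from one list into the next.
        self.pages = []
        for choices in (sorted(first_choices), sorted(second_choices)):
            for i in range(0, len(choices), page_size):
                self.pages.append(choices[i:i + page_size])
        self.page_index = 0

    def current_page(self):
        return self.pages[self.page_index]

    def page_down(self):
        # Tapping the page-down scroll button advances one page, stopping
        # at the last page.
        if self.page_index + 1 < len(self.pages):
            self.page_index += 1
        return self.current_page()
```

A user scanning for a candidate starting with "interface", as in the example, would simply tap page-down until such a candidate appears on the current page.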
Figure 45 illustrates how an ambiguous filter created by the recognition of continuously spoken letter names and edited by filter character choice windows can be used to rapidly correct an erroneous dictation. In this example, the user presses the talk button 1102 as shown at 4500 and then utters the word "trouble" as indicated at 4502. In the example it is assumed that this utterance is misrecognized as the word "treble", as indicated at 4504. In the example, the user taps on the word "treble" as indicated at 4506, which causes the correction window shown at 4508 to be shown. Since the desired word is not shown as any of the choices, the user taps the filter button 1218 as shown at 4510 and makes a continuous utterance 4512 containing the names of each of the letters in the desired word "trouble". In this example it is assumed that the filter recognition mode is set to include continuous letter name recognition.
In the example the system responds to recognition of the utterance 4512 by displaying the choice list 4518. In this example it is assumed that the result of the recognition of this utterance is to cause a filter string to be created which is comprised of one ambiguous length element. As has been described above with regard to functions 2644 through 2652, an ambiguous length filter element allows any recognition candidate that contains in the corresponding portion of its initial character sequence one of the character sequences which are represented by that ambiguous element. In the correction window 4518 the portion of the first choice word 4519 that corresponds to an ambiguous filter element is indicated by the ambiguous filter indicator 4520. Since the filter uses an ambiguous element, the choice list displayed contains best-scoring recognition candidates that start with different initial character sequences, including ones whose length is less than the portion of the first choice which corresponds to a matching character sequence represented by the ambiguous element.
In the example, the user drags upward from the first character of the first choice, which causes operation of functions 1747 through 1750 described above with regard to Figure 17. This causes a filter choice window 4526 to be displayed. As shown in the correction window 4524, the user drags up to the initial desired character and releases the drag at that location, which causes selection of that character in place of the ambiguous filter element and causes a new correction window to be displayed with the new filter, as is indicated at 4528.
As is shown in this correction window, the first choice 4530 is shown with an unambiguous filter indicator 4532 for the selected character. Such a selection causes that character and all the characters that precede it in the first choice to be defined unambiguously in the current filter string. This is indicated in the new correction window 4540, which is shown as a result of the selection, in which the first choice 4542 is the desired word, the unambiguous portion of the filter is indicated by the unambiguous filter indicator 4544, and the remaining portion of the ambiguous filter element stays in the filter string by operation of functions 2900 through 2910, as shown in Figure 29.
Figure 46 illustrates that the SIP recognizer allows the user to also input text and filtering information by use of a character recognizer similar to the character recognizer which comes standard with the Windows CE operating system.
As shown in the screenshot 4600 of this figure, if the user drags up from the function key, functions 1428 and 1430 of Figure 14 will display a function menu 4602, and if the user releases on the menu's character recognition entry 4604, the character recognition mode described in Figure 47 will be turned on.
As shown in Figure 47, this causes function 4702 to display the character recognition window 4608 shown in Figure 46, and then to enter an input loop 4704 which is repeated until the user selects to exit the window by selecting another input option on the function menu 4602.
When in this loop, if the user touches the character recognition window, function 4706 records "ink" during the continuation of such a touch, which records the motion, if any, of the touch across the surface of the portion of the display's touch screen corresponding to the character recognition window. If the user releases a touch in this window, functions 4708 through 4714 are performed. Function
4710 performs character recognition on the "ink" currently in the window. Function 4712 clears the character recognition window, as indicated by the numeral 4610 in
Figure 46. And function 4714 supplies the corresponding recognized character to the SIP buffer and the operating system.
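The ink-capture loop just described can be modeled as a simple event handler. This is a hedged sketch: the event shapes, the recognizer callback, and the function name are assumptions made for illustration; the real implementation is driven by touch-screen events in the operating system.

```python
def run_character_window(events, recognize_char, emit_char):
    """Sketch of the character-recognition input loop (loop 4704).
    events: iterable of ('touch', stroke_points), ('release', None),
    or ('exit', None). Ink accumulates while the user touches the window;
    on release, the ink is recognized, the window is cleared, and the
    character is emitted to the SIP buffer and operating system."""
    ink = []
    for kind, data in events:
        if kind == 'touch':
            ink.extend(data)               # record the "ink" of the touch motion
        elif kind == 'release':
            if ink:
                emit_char(recognize_char(ink))  # recognize the ink...
                ink = []                        # ...then clear the window
        elif kind == 'exit':
            break                          # user picked another input option
```

One design point visible in the sketch is that recognition happens per release, so each drawn character is committed as soon as the stylus lifts.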
Figure 48 illustrates that if the user selects the handwriting recognition option in the function menu shown in the screenshot 4600, a handwriting recognition entry window 4800 will be displayed in association with the SIP, as is shown in screenshot 4802.
The operation of the handwriting mode is provided in Figure 49. When this mode is entered, function 4902 displays the handwriting recognition window, and then a loop 4903 is entered until the user selects to use another input option. In this loop, if the user touches the handwriting recognition window in any place other than the delete button 4804 shown in Figure 48, the motion, if any, during the touch is recorded as "ink" by function 4904. If the user touches down in the recognize button area 4806 shown in Figure 48, function 4905 causes functions 4906 through 4910 to be performed. Function 4906 performs handwriting recognition on any "ink" previously entered in the handwriting recognition window. Function 4908 supplies the recognized output to the SIP buffer and the operating system, and function 4910 clears the recognition window. If the user presses the delete button 4804 shown in Figure 48, functions
4912 and 4914 clear the recognition window of any "ink".
It should be appreciated that the use of the recognize button 4806 allows the user to instruct the system to recognize the "ink" that was previously in the handwriting recognition window at the same time that he or she starts the writing of a new word to be recognized.
Figure 50 shows the keypad 5000, which can also be selected from the function menu.
Having character recognition, handwriting recognition, and keyboard input methods rapidly available as part of the speech recognition SIP is often extremely advantageous because it lets the user switch back and forth between these different modes in a fraction of a second depending upon which is most convenient at the current time. And it allows the outputs of all of these modes to be used in editing text in the SIP buffer.
As shown in Figure 51, in one embodiment of the SIP buffer, if the user drags up from the filter button 1218, a window 5100 is displayed which provides the user with filter entry mode options. These include options of using letter-name speech recognition, AlphaBravo speech recognition, character recognition, handwriting recognition, and the keyboard window as alternative methods of entering filtering spellings. It also enables a user to select whether any of the speech recognition modes are discrete or continuous, and whether the letter name recognition, character recognition, and handwriting recognition entries are to be treated as ambiguous in the filter string. This user interface enables the user to quickly select the filter entry mode which is appropriate for the current time and place. For example, in a quiet location where one does not have to worry about offending people by speaking, continuous letter name recognition is often very useful. However, in a location where there is a lot of noise, but a user feels that speech would not be offensive to his or her neighbors, AlphaBravo recognition might be more appropriate. In a location such as a library where speaking might be offensive to others, silent filter entry methods such as character recognition, handwriting recognition, or keyboard input might be more appropriate.
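The options on this filter entry mode menu amount to a small settings structure. The sketch below models them with an enum and a dataclass; all names and defaults here are illustrative assumptions, not values stated in the disclosure.

```python
from dataclasses import dataclass
from enum import Enum, auto


class FilterEntryMode(Enum):
    """The alternative filter entry methods offered by window 5100 (sketch)."""
    LETTER_NAME = auto()
    ALPHA_BRAVO = auto()
    CHARACTER_RECOGNITION = auto()
    HANDWRITING = auto()
    KEYBOARD = auto()


@dataclass
class FilterEntrySettings:
    mode: FilterEntryMode = FilterEntryMode.LETTER_NAME
    continuous_speech: bool = True    # continuous vs. discrete, speech modes only
    treat_as_ambiguous: bool = True   # ambiguous vs. exact filter entries
```

Grouping the choices this way makes clear that the menu sets three independent knobs: which entry method, whether speech input is continuous, and whether entries are treated as ambiguous.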
Figure 52 provides an example of how character recognition can be quickly selected to filter a recognition. Screenshot 5200 shows a portion of a correction window in which the user has pressed the filter button and dragged up, causing the filter entry mode menu 5100 shown in Figure 51 to be displayed, and then selected the character recognition option. As is shown in screenshot 5202, this causes the character recognition entry window 4608 to be displayed in a location that allows the user to see the entire correction window. In the screenshot 5202 the user has drawn the
character "e", and when he releases his stylus from the drawing of that character, the letter "e" will be entered into the filter string, causing a correction window 5204 to be displayed. In the example the user then enters an additional character "m" into the character recognition window, as indicated at 5206, and when he releases his stylus from the drawing of this letter, the recognition of the character "m" causes the filter string to become "em", as shown at 5208. Figure 53 starts with a partial screenshot 5300 where the user has tapped and dragged up from the filter key 1218 to cause the display of the filter entry mode menu, and has selected the handwriting option. This displays a screen such as 5302 with a handwriting entry window 4800 displayed at a location that does not block a view of the correction window. In the screenshot 5302 the user has handwritten in
a continuous cursive script the letters "embed". When this handwriting is recognized, a filter string indicated by the ambiguous filter indicator 5304 is displayed in the first choice window corresponding to the recognized characters, as shown by the correction window 5306. Figure 54 shows how the user can use a keypad window 5000 to enter alphabetic filtering information.
Figure 55 illustrates how speech recognition can be used to correct handwriting recognition. Screenshot 5500 shows a handwriting entry window 4800 displayed in a position for entering text into the SIP buffer window 1104. In this screenshot the user has just finished writing a word. Numerals 5502 through 5510 indicate the handwriting of
and recognition of the prior written words. Numeral 5512 points to a handwriting recognition window where the user makes a final tap on the recognize button to cause recognition of the last handwritten word "speech". In the example of Figure 55, after this sequence of handwriting input has been recognized, the SIP buffer window 1104 and the application window 1106 have the appearance shown in the screenshot 5514.
In the example, the user selects two incorrectly recognized words, as shown at 5516, causing a correction window 5518 to be shown. The user then taps the re-utterance button 1216 and discretely re-utters the desired words "much... slower". By operation of a slightly modified version of the "get choices" function described above with regard to Figure 23, this will cause the recognition scores from recognizing the utterance 5520 to be combined with the recognition results from the handwritten input pointed to by numerals 5504 and 5506 to select a best-scoring recognition candidate, which in the case of the example is the desired words, as shown at numeral 5522.
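The score combination just described can be sketched as follows. This is a minimal illustrative model, assuming per-candidate log-style scores from each recognizer and a simple additive combination with a floor penalty for candidates one source did not propose; the actual combination in the modified "get choices" function may differ.

```python
def combine_scores(handwriting_scores, speech_scores, floor=-100.0):
    """Combine per-candidate scores from handwriting recognition with scores
    from a correction re-utterance, returning the best combined candidate.
    Candidates missing from one source receive a floor penalty (sketch)."""
    candidates = set(handwriting_scores) | set(speech_scores)
    combined = {c: handwriting_scores.get(c, floor) + speech_scores.get(c, floor)
                for c in candidates}
    best = max(combined, key=combined.get)
    return best, combined
```

The point of combining the two sources is that a candidate merely plausible to each recognizer alone can dominate candidates strongly favored by only one of them.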
It should also be appreciated that the user could have pressed the New button in the correction window 5518 instead of the re-utterance button, in which case the utterance 5520 would have used the output of speech recognition to replace the handwriting outputs which had been selected as shown at 5516.
recognition of the two words selected at 5516 in Figure 55.
Figure 57 illustrates an alternate embodiment 5700 of the SIP speech recognition interface in which there are two separate top-level buttons 5702 and 5704 to select between discrete and continuous speech recognition, respectively.
It will be appreciated that it is a matter of design choice how the user is allowed to switch between the more rapid and more natural continuous speech recognition versus the more reliable, although more time-consuming, discrete speech recognition.
scrollable score-ordered choice list rather than the two alphabetically ordered choice lists created by the routine in Figure 22. The only portions of its language that differ from the language contained in Figure 22 are underlined, with the exception of the fact that functions
basic phone number keypad to functions that are used in various modes or menus of the disclosed cell phone speech recognition system.
if the user presses the "1" key when in the editor mode.
The entry mode menu is used to select among various text and alphabetic entry modes available on the system. Figure 69 displays the functions that are available on the numerical phone keypad when the user has a correction window displayed, which can be used from the editor mode by pressing the "2" key. Figure 70 displays the numerical phone key options available in the correction window by pressing the "3" key. In addition to changing navigational modes while in a correction window, this also allows the user to vary the function that is performed when a choice is selected. Figure 72 illustrates the numerical phone key mapping during the key Alpha mode, in which the pressing of a phone key having letters associated with it will cause a prompt to be shown on the cell phone display asking the user to say the ICA word associated with the desired one of the letters associated with the pressed key. This mode is selected by double-clicking the "3" phone key when in the entry mode menu shown in Figure 68.
Figure 73 shows a basic keys menu, which allows the user to rapidly select from among a set of the most common punctuation and function keys used in text editing, or, by pressing the "1" key, to see a menu that allows a selection of less commonly used punctuation marks. The basic keys menu is selected by pressing the "9" key in the editor mode illustrated in Figure 67. Figure 74 illustrates the edit options menu, which is selected by pressing "0" in the editor. This contains a menu which allows a user to perform basic tasks associated with use of the editor which are not available in the other modes or menus.
At the top of each of the numerical phone key mappings shown in Figures 67 through 74 is a title bar that is shown at the top of the cell phone display when that menu or mode is active. The title bars of the menus shown in Figures 68, 70, 71, and 73 end with the word "MENU". This is used to distinguish the command lists shown in those menus from the command lists of the other figures, which display commands that are available in a mode even when the command list itself is not displayed. When in the editor mode associated with the command list of Figure 67 or the key Alpha mode associated with Figure 72, normally the text editor window will be displayed even though the phone keys have the functional mappings shown in those figures. Normally, when in the correction window mode associated with the command list shown in Figure 69, a correction window is shown on the cell phone's display.
In all these modes, the user can access the command list to see the current phone key mapping, as is illustrated in Figure 75, by merely pressing the menu key, as is pointed to by the numeral 7500 in that figure. In the example shown in Figure 75, a display screen 7502 shows a window of the editor mode before the pressing of the Menu button. When the user presses the Menu button, the first page of the command list for the current mode is displayed. If there are additional options associated with the current mode at the time the command list is entered, they can also be selected from the command list by means of scrolling the highlight 7512 and using the OK key. In the example shown in Figure 75, a phone call indicator 7514 having the general shape of a telephone handset is indicated at the left of each title bar to indicate to the user that the cell phone is currently in a telephone call. In this case extra functions are available in the editor which allow the user to quickly select to mute the microphone of the cell phone, to record only audio from the user's side of the phone conversation, and to play playback only to the user's side of the phone conversation.
Figures 76 through 78 provide a more detailed pseudocode description of the functions of the editor mode than is shown by the mere command listings shown in Figures 67 and 75. This pseudocode is represented as one input loop 7602 in which the editor responds to various user inputs.
If the user inputs one of the navigational commands indicated by numeral 7603, by either pressing one of the navigational keys or speaking a corresponding navigational command, the functions indented under it in Figure 76 are performed.
These include a function 7604 that tests to see if the editor is currently in word/line navigational mode. This is the most common mode of navigation in the editor, and it can be quickly selected by pressing the "3" key twice from the editor. The first press selects the navigational mode menu shown in Figure 70, and the second press selects the word/line navigational mode from that menu. If the editor is in word/line mode, functions 7606 through 7624 are performed.
If the navigational input is a word-left or word-right command, function 7606 causes functions 7608 through 7617 to be performed. Functions 7608 and 7610 test to see if extended selection is on, and if so, they move the cursor one word to the left or right, respectively, and extend the previous selection to that word. If extended selection is not on, function 7612 causes functions 7614 to 7617 to be performed. Functions 7614 and 7615 test to see if either the prior input was a word left/right command of a different direction than the current command or if the current command would put the cursor before or after the end of text. If either of these conditions is true, the cursor is placed to the left or right of the previously selected word, and that previously selected word is unselected. If the conditions of the test of function 7614 are not met, then function 7617 will move the cursor one word to the left or the right of its current position and make the word that has been moved to the current selection.
The operation of functions 7612 through 7617 enables word-left and word-right navigation to allow a user not only to move the cursor by a word but also to select the current word at each move if so desired. It also enables the user to rapidly switch between a cursor which corresponds to a selected word and a cursor which represents an insertion point before or after a previously selected word.
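The word-left/word-right behavior of functions 7606 through 7617 can be sketched as a small state machine. This is an illustrative simplification (class name, fields, and the exact insertion-point representation are assumptions), but it captures the two key behaviors: each move selects the word moved to, and a reversing move or an edge-of-text move collapses the selection to an insertion point.

```python
class WordCursor:
    """Sketch of word-left/word-right navigation with optional extended
    selection, modeled on functions 7606-7624 (names are illustrative)."""

    def __init__(self, words):
        self.words = words
        self.index = 0            # index of the current word
        self.selected = True      # whether the current word is selected
        self.extend = False       # extended-selection mode on/off
        self.selection = {0}      # indices of selected words
        self._last_move = None

    def move(self, direction):    # direction: -1 = word-left, +1 = word-right
        reversing = self._last_move is not None and direction == -self._last_move
        at_edge = not (0 <= self.index + direction < len(self.words))
        if self.extend:
            # Extended selection: move and grow the selection.
            self.index = min(max(self.index + direction, 0), len(self.words) - 1)
            self.selection.add(self.index)
        elif reversing or at_edge:
            # Reversing direction (or hitting the text edge) turns the cursor
            # into an insertion point beside the word and deselects it.
            self.selected = False
        else:
            self.index += direction
            self.selected = True
            self.selection = {self.index}
        self._last_move = direction
```

This mirrors how a user can tap right then left to drop an insertion point next to a word instead of keeping it selected.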
If the user input has been a line up or a line down command, function 7620 moves the cursor to the nearest word on the line up or down from the current cursor position, and if extended selection is on, function 7624 extends the current selection through that new current word.
As indicated by numeral 7626 the editor also includes programming for responding to navigational inputs when the editor is in other navigation modes that can be selected from the edit navigation menu shown in Figure 70.
If the user selects "OK", either by pressing the button or using a voice command, function 7630 tests to see if the editor has been called to enter text into another program, such as to enter text into a field of a Web document or a dialog box. If so, function 7632 enters the current content of the editor into that other program at the current text entry location in that program and returns. If the test 7630 is not met, function 7634 exits the editor, saving its current content and state for possible later use.
If the user presses the menu key when in the editor, function 7638 calls the display menu routine for the editor, as has been described above. If the user double-clicks the menu key, the editor will set the recognition vocabulary to the editor's command vocabulary, and command speech recognition using the last press of the double-click to determine the duration of that recognition.
If the user makes a sustained press of the menu key, function 7650 enters help mode for the editor. This will provide a quick explanation of the function of the editor mode and allow the user to explore the editor's hierarchical command structure by pressing its keys and having a brief explanation produced for the portion of that hierarchical command structure reached as a result of each such key press.
If the user presses the talk key when in the editor, function 7654 turns on recognition according to current recognition settings, including vocabulary and recognition duration mode. The talk button will often be used as the major button for initiating speech recognition in the cell phone embodiment.
If the user selects the end button, function 7658 goes to the phone mode, such as to quickly make or answer a phone call. It saves the current state of the editor so that the user can return to it when such a phone call is over.
As shown in Figure 77, if the user selects the entry mode menu illustrated in Figure 68, function 7702 causes that menu to be displayed. As will be described below in greater detail, this menu allows the user to quickly select between dictation modes somewhat as buttons 1122 through
1134 shown in Figure 11 did in the PDA embodiment. In the embodiment shown, the entry mode menu has been associated with the "1" key because of the "1" key's proximity to the talk key. This allows the user to quickly switch dictation modes and then continue dictation using the talk button.
If the user selects "choice list", functions 7706 and 7708 set the correction window navigational mode to page/item navigational mode, which is best for scrolling through and selecting recognition candidate choices. They then call the correction window routine for the current selection, which causes a correction window somewhat similar to the correction window 1200 shown in Figure 12 to be displayed on the screen of the cell phone. If there currently is no selection, the correction window will be called with an empty selection. If this is the case, it can be used to select one or more words using alphabetic input, word completion, and/or the addition of one or more utterances. The correction window routine will be described in greater detail below.
If the user selects "filter choices", such as by double-clicking on the "2" key, functions 7712 through 7716 set the correction window navigational mode to the word/character mode used for navigating in a first choice or filter string. They then call the correction window routine for the current selection and treat the second press of the double-click, if one has been entered, as the speech key for recognition duration purposes.
In most cell phones, the "2" key is located directly below the navigational key. This enables the user to navigate in the editor to a desired word or words that need correction and then single-press the nearby "2" key to see a correction window with alternate choices for the selection, or to double-click on the "2" key and immediately start entering filtering information to help the recognizer select a correct choice.
the user to switch between continuous and discrete recognition.
selection command because of its proximity to the navigational controls and the "2" key, which is used for bringing up correction windows.
If the user chooses the select all command, such as by double-clicking on the "5" key, function 7736 selects all the text in the current document.
If the user selects the "6" key or any of the associated commands which are currently active, which can include play start, play stop, record, or record stop, function 7740 tests to see
headphone of the cell phone itself.
If, on the other hand, the system is recording audio when the "6" button is pressed, function 7750 turns recording off. If the user selects the record command, function 7754 turns audio recording on.
Then function 7756 tests to see if the system is currently on a phone call and if the record-only-me setting 7511 shown in Figure 75 is in the off state. If so, function 7758 records audio from the other side of the phone line as well as from the phone's microphone or microphone input jack.
If the user presses the "7" key or otherwise selects the capitalized menu command, function 7762 displays a capitalized menu which offers the user the choice to select between modes that cause all subsequently entered text to be either in all lowercase, all initial caps, or all capitalized form. It also allows the user to select to have the one or more words currently selected, if any, changed to all lowercase, all initial caps, or all capitalized form.
start types, word tense types, word part-of-speech types, and other word types such as possessive or non-possessive forms, singular or plural nominative forms, singular or plural verb forms, spelled or not-spelled forms, and homonyms, if any exist.
As shown in Figure 78, if the user presses the "9" key or selects the basic keys menu command, function 7802 displays the basic keys menu shown in Figure 73, which allows the user to select the entry of one of the punctuation marks or input characters that can be selected from that menu as text input.
If the user double-clicks on the "9" key or selects the New Paragraph command, function 7806 enters a new paragraph character into the editor's text.
If the user selects the "*" key or the escape command, functions 7810 to 7824 are performed. Function 7810 tests to see if the editor has been called to input or edit text in another program, in which case function 7812 returns from the call to the editor with the edited text for insertion into that program. If the editor has not been called for such a purpose, function 7820 prompts the user with the choice of exiting the editor, saving its contents, and/or canceling the escape. If the user selects to escape, functions 7822 and 7824 escape to the top level of the phone mode described above with regard to Figure 63. If the user double-clicks on the "*" key or selects the task list function, function 7828 goes to the task list, as such a double-click does in most of the cell phone's operating modes and menus.
If the user presses the "0" key or selects the edit options menu command, function 7832 displays the edit options menu described above briefly with regard to Figure 74. If the user double-clicks on the "0" key or selects the undo command, function 7836 undoes the last command in the editor, if any.
If the user presses the "#" key or selects the backspace command, function 7840 tests to see if there is a current selection. If so, function 7842 deletes it. If there is no current selection and if the current smallest navigational unit is a character, word, or outline item, functions 7846 and 7848 delete backward by that smallest current navigational unit.
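The editor's phone-key commands described above form a dispatch table keyed by which key is pressed and whether it is pressed once or double-clicked. The sketch below condenses a subset of them; the table structure and the `dispatch` helper are illustrative assumptions, not the disclosed pseudocode, and the command strings are stand-ins for the handler functions.

```python
# Condensed sketch of the editor-mode key mapping served by input loop 7602.
EDITOR_KEY_MAP = {
    ('1', 1): 'entry mode menu',        # function 7702
    ('2', 1): 'choice list',            # functions 7706-7708
    ('2', 2): 'filter choices',         # functions 7712-7716
    ('3', 1): 'navigation mode menu',
    ('5', 1): 'toggle extended selection',  # function 7728
    ('5', 2): 'select all',             # function 7736
    ('6', 1): 'play/record',            # functions 7740-7758
    ('7', 1): 'capitalized menu',       # function 7762
    ('9', 1): 'basic keys menu',        # function 7802
    ('9', 2): 'new paragraph',          # function 7806
    ('*', 1): 'escape',                 # functions 7810-7824
    ('*', 2): 'task list',              # function 7828
    ('0', 1): 'edit options menu',      # function 7832
    ('0', 2): 'undo',                   # function 7836
    ('#', 1): 'backspace',              # functions 7840-7848
}


def dispatch(key, presses):
    """Return the editor command for a key pressed once (1) or twice (2)."""
    return EDITOR_KEY_MAP.get((key, presses), 'unmapped')
```

Laying the mapping out as a table also makes the design pattern visible: a double-click consistently selects the rarer or more drastic variant of the single-press command on the same key.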
Figures 79 and 80 illustrate the options provided by the entry mode menu discussed above with regard to Figure 68.
When in this menu, if the user presses the "1" key or otherwise selects large vocabulary recognition, functions 7906 through 7914 are performed. These set the recognition vocabulary to the large vocabulary. They treat the press of the "1" key as a speech key for recognition duration purposes. They also test to see if a correction window is displayed. If so, they set the recognition mode to discrete recognition, based on the assumption that in a correction window the user desires the more accurate discrete recognition. They add any new utterance or utterances received in this mode to the utterance list of the type described above, and they call the display choice list routine of Figure 22 to display a new correction window for any re-utterance received.
In the cell phone embodiment shown, the "1" key has been selected for large vocabulary recognition in the entry mode menu because it is the most common recognition vocabulary, and thus the user can easily select it by clicking the "1" key twice from the editor, the first click selecting the entry mode menu and the second click selecting large vocabulary recognition.
If the user presses the "2" key when in the entry mode menu, the system will be set to letter-name recognition of the type described above. If the user double-clicks on that key when the entry mode menu is displayed at a time when the user is in a correction window, function 7926 sets the recognition vocabulary to the letter-name vocabulary and indicates that the output of that recognition is to be treated as an ambiguous filter. In the preferred embodiment, the user has the capability to indicate, under the entry preference option associated with the "9" key of the menu, whether or not such filters are to be treated as ambiguous length filters. The default setting is to let such recognition be treated as an ambiguous length filter in continuous letter-name recognition, and a fixed-length ambiguous filter in response to discrete letter-name recognition.
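The default described above distinguishes two kinds of filter elements produced by letter-name recognition. The sketch below models that distinction, assuming an element is represented as a set of alternative character sequences; the dictionary representation and function name are illustrative, not from the disclosure.

```python
def make_filter_element(letter_sequences, continuous):
    """Build a filter element from letter-name recognition alternatives.
    Continuous letter-name recognition yields an ambiguous-LENGTH element
    (alternative sequences may differ in length, since the number of spoken
    letters is itself uncertain). Discrete recognition yields a fixed-length
    ambiguous element: one utterance per letter, so all alternatives for a
    position have the same length (sketch)."""
    if continuous:
        return {'kind': 'ambiguous-length', 'sequences': set(letter_sequences)}
    length = len(next(iter(letter_sequences)))
    assert all(len(s) == length for s in letter_sequences), \
        "discrete letter-name alternatives must share one length"
    return {'kind': 'fixed-length', 'length': length,
            'sequences': set(letter_sequences)}
```

The design rationale is that continuous letter spelling can drop or insert letters, so the filter must not commit to how many characters were spelled, whereas discrete spelling fixes the count.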
If the user presses the "3" key, recognition is set to the AlphaBravo mode. If the user double-clicks on the "3" key, recognition is set to the key "Alpha" mode described briefly with regard to Figure 72. This mode is similar to AlphaBravo mode except that pressing one of the number keys "2" through "9" will cause the user to be prompted to say one of the ICA words associated with the letters on the pressed key, and the recognition will favor recognition of one from that limited set of ICA words, so as to provide very reliable alphabetic entry even under adverse conditions.
If the user presses the "5" key, the recognition vocabulary is limited to a punctuation vocabulary.
If the user presses the "6" key, the recognition vocabulary is limited to the contact name vocabulary described above.
Figure 86 illustrates the key Alpha mode, which has been described above to some extent with regard to Figure 72. As indicated in Figure 86, when this mode is entered, the navigation mode is set to the word/character navigation mode normally associated with alphabetic entry. Then function 8604 overlays the keys listed below it with the functions indicated for each such key. In this mode, pressing the talk key turns on recognition with the AlphaBravo vocabulary according to current recognition settings, responding to key presses according to the current recognition duration setting. The "1" key continues to operate as the entry mode key so that the user can press it to exit the key Alpha mode. A pressing of one of the numbered phone keys "2" through "9" causes the prompting and restricted ICA-word recognition described above.
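The key Alpha prompt amounts to mapping a pressed phone key to its letters and then to the corresponding ICA words. The sketch below assumes the standard telephone letter layout and the conventional ICA (Alpha, Bravo, Charlie, ...) word list; the function name is illustrative, and the disclosure's actual word set may differ.

```python
# Standard telephone keypad letter groups (assumed layout).
PHONE_KEY_LETTERS = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
                     '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}

# Conventional ICA spelling words, one per letter (assumed word list).
ICA_WORDS = {'a': 'alpha', 'b': 'bravo', 'c': 'charlie', 'd': 'delta',
             'e': 'echo', 'f': 'foxtrot', 'g': 'golf', 'h': 'hotel',
             'i': 'india', 'j': 'juliet', 'k': 'kilo', 'l': 'lima',
             'm': 'mike', 'n': 'november', 'o': 'oscar', 'p': 'papa',
             'q': 'quebec', 'r': 'romeo', 's': 'sierra', 't': 'tango',
             'u': 'uniform', 'v': 'victor', 'w': 'whiskey', 'x': 'xray',
             'y': 'yankee', 'z': 'zulu'}


def key_alpha_prompt(key):
    """Return the small set of ICA words the recognizer should favor after
    the given phone key is pressed in key Alpha mode (sketch)."""
    return [ICA_WORDS[letter] for letter in PHONE_KEY_LETTERS.get(key, '')]
```

Restricting recognition to three or four words per key press is what makes this mode reliable even in noisy conditions: the recognizer's search space shrinks from the whole alphabet to the prompted set.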
If the user presses the zero button, function 8628 enters a key punctuation mode that responds to the pressing of any phone key having letters associated with it. Recognition can be performed by alternating between pressing the talk button and the "4" key.
If the user selects selection start or selection stop, as by toggling the "5" key, function 7728 toggles extended selection on and off, depending on whether that mode was currently on or off. Then function 7730 tests to see
As shown in Figure 78, i ;he user presses the "9" key or selects the basic key's menu command, function 7802 which
selected
selects the , Paragraph
If the user sheets the "*" key or the escape command,
double-click does in most of the cell phones, operating modeS,Λand menus .
or outline item, by that smallest
Figures 79 and 80 ill trate'the options as provided by the entry mode menu discusse ve with regard to Figure 68.
When in this itnsnu, if the user presses he "1" key or
In the cell phohe embodiment shown, the "1" key has been selected for large* Dulary in the entry mode menu because it is the most _n recognition vocabulary and thus the user can eas ly select it by clicking the "1" key twice from the editor, the firs>t click selecting the entry mode menu and th.≡r second click selecting the large vocabulary recognition.
If the user presses the "3" key, recognition is set to the AlphaBravo mode. If the user double -clicks on the "3" key, _recognition is set to the key "Alpha" mode as described^brought briefly) with regard to Figure 72. This mode is similar to AlphaBravo mode except tha pressing one of the number keys "2" through "9" will cause the user to be prompted to one of the ICA words associated with the letters on the pressed key and the recognition will favor recognition of one from that limited set of ICA words, so as to provide very reliable alphabetic entry even under
allows the user to select the entry of one of the punctuation marks or input characters that can be selected from that menu as text input.
If the user double-clicks on the "9" key or selects the New Paragraph Command, function 7806 enters a New Paragraph Character into the editor's text.
the output of that recognition be treated as an ambiguous filter. In the preferred embodiment, the user has the capability to select, under the entry preference option associated with the "9" key in the menu, whether or not such filters are to be treated as ambiguous length filters. The default setting is to let such recognition be treated as an ambiguous length filter in continuous letter-name recognition, and as a fixed-length ambiguous filter in response to discrete letter-name recognition.
to say the selection, preferably preceding it by a text-to-speech or pre-recorded saying of the word "selection". If there is no selection when text-to-speech is toggled on, text-to-speech starts saying the current text at the current cursor location until the end of the current document or until the user provides input other than cursor movement
functionality to be used without requiring the user to be able to see the cell phone's screen.
The text-to-speech submenu also includes a choice that allows the user to play the current selection whenever he or she desires to do so, as indicated by functions 8924 and 8926, and functions 8928 and 8930 that allow the user to toggle continuous play on or off whether or not the machine is in a
TTS on or TTS off mode. As indicated by the top-level choices in the edit options menu at 8932, a double-click of the 4 key toggles text-to-speech on or off, as if the user had pressed the 4 key, then waited for the text-to-speech
menu to be displayed, and then again pressed the 4 key.
The 5 key in the edit options menu selects the outline menu, which includes a plurality of functions that let a user navigate in, and expand and contract headings of, an outline mode. If the user double-clicks on the 5 key, the system toggles between totally expanding and totally contracting the current outline element in which the editor's cursor is located.
If the user selects the 6 key, an audio menu is displayed as a submenu, some of the options of which are displayed indented under the audio menu item 8938. This menu includes an item, selected by the 1 key, which gives the user finer control over audio navigation speed than is provided by use of the 6 button in the added navigation menu described above with regard to figures 84 and 70. If the user selects the 2 key, he or she will see a submenu that allows the user to adjust audio
playback settings such as volume and speed, and whether audio associated with recognized words is to be played and/or audio recorded without associated recognized words.
Figure 90 starts with items selected by the 3, 4, 5, 6 and 7 keys under the audio menu described above, starting with numeral 8938 in figure 89. If the user presses the 3 key, a recognized audio options dialog box 9000 will be displayed which, as is described by numerals 9002 through
9014, gives the user the option to select to perform speech recognition on any audio contained in the current selection in the editor, to recognize all audio in the current document, to decide whether or not previously recognized audio is to be re-recognized, and to set parameters that determine the quality of, and time required by, such recognition. As indicated at function 9012, this dialog box provides an estimate of the time required to recognize the current selection with the current quality settings and, if a task of recognizing a selection is currently underway, status on the current job. This dialog box allows the user to perform recognition on relatively large amounts of audio as a background task, or at times when the phone is not being used for other purposes, including times when it is plugged into an auxiliary power supply.
If the user selects the 4 key in the audio menu, the user is provided with a submenu that allows him to select to delete certain information from the current selection. This includes allowing the user to select to delete all audio, to delete recognition audio, or to delete text from the desired selection.
Deleting recognition audio from recognized text greatly reduces the memory associated with the storage of such text, and is often a useful thing to do once the user has decided that he or she does not need the text-associated audio to help determine its intended meaning. Deleting text but not audio from a portion of media is often useful where the text has been produced by speech recognition from the audio but is sufficiently inaccurate to be of little use.
In the audio menu, the 5 key allows the user to select whether or not text that has associated recognition audio is marked, such as by underlining, to let the user know if such text has playback that can be used to help understand it or, in some embodiments, will have an acoustic representation from which alternate recognition choices can
if the recording of recognition audio is turned off, such audio will be retained for some number of the most recently recognized words, so that it will be available for correction playback purposes.
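The retention policy just described, keeping audio only for the N most recently recognized words, can be sketched as a small bounded cache. This is an illustrative model under assumed names, not the patent's implementation; the eviction behavior is the point.

```python
from collections import deque

# Hypothetical sketch: even with recognition-audio recording turned off,
# audio for the most recent N recognized words is kept for correction
# playback; older audio is silently dropped as new words arrive.

class RecentAudioCache:
    def __init__(self, max_words=10):
        # deque with maxlen evicts the oldest entry automatically
        self._cache = deque(maxlen=max_words)

    def add(self, word, audio):
        """Record the audio for a newly recognized word."""
        self._cache.append((word, audio))

    def audio_for(self, word):
        """Return the most recent audio for a word still in the cache,
        or None if its audio has already been evicted."""
        for w, a in reversed(self._cache):
            if w == word:
                return a
        return None
```

A bounded deque keeps memory use constant regardless of how long dictation continues, which matches the stated goal of limiting storage while preserving recent correction playback.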
In the audio menu, the 7 key selects a transcription mode dialog box. This causes the dialog box to be displayed, which allows the user to select settings to be used in a transcription mode, described below, that is designed to make it easy for the user to transcribe prerecorded audio by
uses the current selection as the search string. As will be illustrated below, the speech recognition text editor can be used to enter a different search string, if so desired. If the user double-clicks on the 8 key, this will be interpreted as a find again command, which will search again for the previously entered search string.
If the user selects the 9 key in the edit options menu, a vocabulary menu is displayed which allows the user to determine which words are in the current vocabulary, to select between different vocabularies, and to add words to a given vocabulary. If the user either presses or double-clicks the 0 button when in the edit options menu, an undo function will be performed. A double click accesses the undo function from within the edit options menu so as to provide similarity with the fact that a double-click on 0 accesses the undo function from the editor or the correction window. In the edit options menu, the pound key operates as a redo button.
Figure 94 illustrates the rules that govern the operation of text-to-speech generation when text-to-speech operation has been selected through the text-to-speech options described above with regard to functions 8908 to 8932 of figure 89.
If a text-to-speech keys mode has been turned on by operation of the 1 key when in the text-to-speech menu, as indicated by function 8909 above, function 9404 causes functions 9406 to 9414 to be performed. These functions enable a user to safely select phone keys without being able to see them, such as when the user is driving a car or is otherwise occupied. Preferably this mode is not limited to operation in the speech recognition editor, but can be used in any mode of the cell phone's operation. When any phone key is pressed, function 9408 tests to see if the same key has been pressed within a TTS KeyTime, which is a short period of time such as a quarter or a third of a second.
If function 9408 finds that the time since the release of the last key press of the same key is less than the TTS KeyTime, function 9414 causes the cell phone's software to respond to the key press, including any double-clicks, the same as it would if the TTS keys mode were not on.
be seen that the TTS keys mode allows the user to find a cell phone key by touch, to press it to determine if it is the desired key and, if so, to quickly
response other than the saying of its associated function, this mode allows the user to search for the desired key without causing any undesired consequences.
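The TTS keys mode timing rule above, where a first press only announces a key and a repeated press within the TTS KeyTime acts on it, can be sketched as a small press filter. This is an illustrative model with hypothetical names; the 0.3-second constant approximates the "quarter or a third of a second" mentioned in the text.

```python
# Hedged sketch of TTS keys mode: the first press of a key merely speaks
# its current function; a second press of the SAME key within TTS_KEY_TIME
# seconds is passed through to the normal key handler.

TTS_KEY_TIME = 0.3  # seconds; illustrative value

class TtsKeyFilter:
    def __init__(self, say, handle):
        self.say = say          # callback: announce a key's function
        self.handle = handle    # callback: normal key handling
        self.last_key = None
        self.last_release = float("-inf")

    def press(self, key, now):
        """Process a key press occurring at time `now` (seconds)."""
        if key == self.last_key and now - self.last_release < TTS_KEY_TIME:
            self.handle(key)    # confirmed: act on the key normally
        else:
            self.say(key)       # first touch: just announce the key
        self.last_key = key
        self.last_release = now
```

Because an announced-only press has no other effect, the user can hunt across the keypad freely and commit to a key only by repeating it quickly, which is exactly the safety property the text describes.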
In some cell phone embodiments, the cell phone keys are designed so that if they are merely touched rather than pushed, audio feedback as to which keys they are and their current function, similar to that provided by function 9412, will be provided. This can be provided, for example, by having the
user's body to a key can be detected by circuitry associated with the key. Such a system would provide an even faster way for a user to find a desired key by touch, since with it a user could receive feedback as to which keys he was touching merely by scanning a finger over the keypad in the vicinity of the desired key. It would also allow a user to rapidly scan for a desired command name by likewise scanning his fingers over successive keys until the desired command was found.
When TTS is on, if the system recognizes or otherwise receives a command input, functions 9416 and 9418 cause text-to-speech or recorded audio playback to say the name of the recognized command. Preferably such audio confirmation of commands has an associated sound quality, such as a different tone of voice or different associated sounds, that distinguishes the saying of command words from
use text-to-speech to say the words which have been recognized as the first choice for the utterance.
As indicated in functions 9426 through 9430, text-to-speech responds to the recognition of a filtering utterance in a similar manner.
When in text-to-speech mode, if the user moves the cursor to select a new word or character, functions 9432 to 9438 use text-to-speech to say that newly selected word or character. If such a movement of the cursor to a new word or character position extends an already started selection, after the saying of the new cursor position, functions 9436 and 9438 will say the word "selection" in a manner that indicates that it is not part of recognized text, and then proceed to say the words of the current selection. If the user moves the cursor to be a non-selection cursor, such as is described above with regard to functions 7614 and 7615, text-to-speech is used to say the two words that the cursor is located between.
When in text-to-speech mode, if a new correction window is displayed, functions 9444 and 9446 use text-to-speech to say the first choice in the correction window, spell the current filter (if any), indicating which parts of it are unambiguous and which parts of it are ambiguous, and then use text-to-speech to say each candidate in the currently displayed portion of the choice list. For purposes of speed, it is best that differences in tone or sound be used to indicate which portions of the filter are unambiguous or ambiguous.
If the user scrolls an item in the correction window, functions 9448 and 9450 use text-to-speech to say the currently highlighted choice and its selection number in response to each such scroll. If the user scrolls a page in a correction window, functions 9452 and 9454 use text-to-speech to say the newly displayed choices as well as indicating the currently highlighted choice.
When in correction mode, if the user enters a menu, functions 9456 and 9458 use text-to-speech or pre-recorded audio to say the name of the current menu and all of the choices in the menu and their associated numbers, indicating the current selection position. Preferably this is done with audio cues that indicate to the user that the words being said are menu options.
If the user scrolls up or down an item in a menu, functions 9460 and 9462 use text-to-speech or pre-recorded audio to say the highlighted choice and then, after a brief pause, any following selections on the currently displayed page of the menu.

Figure 95 illustrates some aspects of the programming used in text-to-speech generation. If a word to be generated by text-to-speech is in the speech recognition programming's vocabulary of phonetically spelled words, function 9502 causes functions 9504 through 9512 to be performed. Function 9504 tests to see if the word has multiple phonetic spellings associated with different parts of speech, and if the word to be said using TTS has a current linguistic context indicating its current part of speech. If both these conditions are met, function 9506 uses the speech recognition programming's part-of-speech indicating code to select the phonetic spelling associated with the part of speech found most probable by that code as the phonetic spelling to be used in the text-to-speech generation of the current word. If, on the other hand, there is only one phonetic spelling associated with the word, or there is no context sufficient to identify the most probable part of speech for the word, function 9510 selects the single phonetic spelling for the word or its most common phonetic spelling. Once a phonetic spelling has been selected for the word to be generated, either by function 9506 or function 9510, function 9512 uses the phonetic spelling selected for the word as the phonetic spelling to be used in the text-to-speech generation. If, as is indicated at 9514, the word to be generated by text-to-speech does not have a phonetic spelling, functions 9514 and 9516 use the pronunciation guessing software that is used by the speech recognizer to assign phonetic spellings to names and newly entered words for the text-to-speech generation of the word.
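The spelling-selection logic of figure 95 can be sketched as follows. This is an illustrative model under assumed names: the dictionary entries, the phonetic notation, and the pronunciation guesser are stand-ins for the recognizer's real data and code.

```python
# Hedged sketch of figure 95's phonetic-spelling selection for TTS.
# Each vocabulary entry maps a part of speech to a spelling; entries are
# ordered most-common-first. Words absent from the vocabulary fall back
# to a pronunciation guesser (here a trivial stand-in).

PHONETIC = {
    "record": {"noun": "REH-kerd", "verb": "rih-KORD"},  # POS-dependent
    "hello": {None: "heh-LOW"},                          # single spelling
}

def guess_spelling(word):
    """Stand-in for the recognizer's pronunciation-guessing software
    (functions 9514 and 9516): here it just spells the word out."""
    return "-".join(word)

def spelling_for_tts(word, pos=None):
    entry = PHONETIC.get(word)
    if entry is None:
        return guess_spelling(word)      # word not in vocabulary
    if len(entry) > 1 and pos in entry:
        return entry[pos]                # function 9506: use POS context
    # function 9510: single or most common spelling (first listed)
    return next(iter(entry.values()))
```

The key design point mirrored here is that part-of-speech context is consulted only when it can actually disambiguate, so words with a single spelling never pay for tagging.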
Figure 96 describes the operation of the transcription mode that can be selected by operation of the transcription menu of the edit options menu shown in figures 89 and 90.
When the transcription mode is entered, function 9602 normally changes the navigation mode to an audio navigation mode that navigates forward or backward five seconds in an audio recording in response to left and right navigational key input, and forward and backward one second in response to up and down navigational input. These are default values, which can be changed in the transcription mode dialog box. During the mode, if the user clicks the play key, which is the 6 key in the editor, functions 9606 through 9614 are performed. Functions 9607 and 9608 toggle play between on and off. Function 9610 causes function 9612 to be performed if the toggle is turning play on. If so, if there has been no sound navigation since the last time sound was played,
each successive playback will start slightly before the last one ended, so that the user will be able to recognize words that were only partially said in the prior playback, and so that the user will better be able to interpret speech sounds as words by being able to perceive a little bit of the preceding language context. If the user presses the play key for more than a specified period of time, such as a third of a second, function 9616 causes functions 9618 through 9622 to be performed. These functions test to see if play is on, and if so they turn it off. They also turn on large vocabulary recognition during the press, in either continuous or discrete mode, according to present settings. They then insert the recognized text into the editor at the location in the audio being transcribed at which the last end of play took place. If the user double-clicks the play button, functions 9624 and 9626 prompt the user that audio recording is not available in transcription mode and that transcription mode can be turned off in the audio menu under the edit options menu.
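The overlapping-playback rule above can be sketched as a small window calculation. This is an illustrative sketch, not the patent's code; the one-second overlap and five-second segment length are assumed values standing in for whatever the transcription mode dialog box configures.

```python
# Hedged sketch of transcription-mode playback: each click of the play
# key resumes slightly BEFORE where the last playback ended, so words
# that were cut off are replayed along with a bit of preceding context.

OVERLAP_S = 1.0   # assumed rewind before the last end of play, seconds
SEGMENT_S = 5.0   # assumed audio played per click, seconds

def next_playback_window(last_end_s):
    """Given where the previous playback ended (seconds into the
    recording), return the (start, end) of the next playback segment."""
    start = max(0.0, last_end_s - OVERLAP_S)
    return start, start + SEGMENT_S
```

Clamping the start at zero handles the very first playback, and the fixed overlap guarantees the transcriber never loses a word across segment boundaries.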
It can be seen that this transcription mode enables the user to alternate between playing a portion of previously recorded audio and then transcribing it by use of speech recognition, merely by alternating between clicking and making sustained presses of the play key, which is the number 6 phone key. The user is free to use the other functionality of the editor to correct any mistakes which may have been made in the recognition during the transcription process, and then merely return to it by again pressing the 6 key to play the next segment of audio to be transcribed. Of course, it should be understood that the user will often not desire to perform a literal transcription of the audio. For example, the user may play back a portion of a phone call and merely transcribe a summary of the more noteworthy portions.
Figure 97 illustrates the operation of dialog box editing programming that uses many features of the editor mode described above to enable users to enter text and other information into a dialog box displayed on the cell phone's screen.
When a dialog box is first entered, function 9702 displays an editor window showing the first portion of the dialog box. If the dialog box is too large to fit on one screen at one time, it will be displayed in a scrollable window. As indicated by function 9704, the dialog box responds to all of the input to which the editor mode described above with regard to figures 76 through 78 responds, except as is indicated by functions 9704 through 9726.
As indicated at 9707 and 9708, if the user supplies navigational input when in a dialog box, the cursor movement responds in a manner similar to that in which it would in the editor, except that it can normally only move to a control into which the user can supply input. Thus, if the user moved left or right a word, the cursor would move left or right to the next dialog box control, moving up or down lines if necessary to find such a control. If the user moves up or down a line, the cursor would move to the nearest control on the lines above or below the current cursor position. In order to enable the user to read extended portions of text that might not contain any controls, normally the cursor will not move more than a page even if there are no controls within that distance.
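The control-seeking navigation just described can be sketched as a bounded search over the dialog's lines. This is a hypothetical illustration, not the patent's implementation; the page size and the line representation are assumed for the example.

```python
# Hedged sketch of dialog-box navigation: cursor movement jumps to the
# nearest input control in the chosen direction, but never moves more
# than one page, so control-free stretches of text can still be read.

LINES_PER_PAGE = 10  # assumed page height in lines

def next_control(lines, cur_line, direction):
    """lines: list of (text, is_control) pairs. Return the line index
    of the nearest control in `direction` ("left"/"right"), limited to
    one page; if none is found, move a full page instead."""
    step = 1 if direction == "right" else -1
    i = cur_line + step
    while 0 <= i < len(lines) and abs(i - cur_line) <= LINES_PER_PAGE:
        if lines[i][1]:          # found an input control
            return i
        i += step
    # no control within a page: fall back to a plain page movement
    return max(0, min(len(lines) - 1, cur_line + step * LINES_PER_PAGE))
```

The page cap is what reconciles the two goals in the text: fast jumping between controls, and ordinary reading of long control-free passages.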
As indicated by functions 9710 through 9716, if the cursor has been moved to a field and the user provides any input of a type which would input text into the editor, function 9712 displays a separate editor window for the field, which displays the text currently in that field, if any. If the field has any vocabulary limitations associated with it, functions 9714 and 9716 limit the recognition in the editor to that vocabulary. For example, if the field were limited to state names, recognition in that field would be so limited. As long as this field-editing window is displayed, function 9718 will direct all editor commands to perform editing within it. The user can exit this field-editing window by selecting OK, which will cause the text currently in the window at that time to be entered into the corresponding field in the dialog box window.
If the cursor in the dialog box is moved to a choice list and the user selects a text input command, function 9722 displays a correction window showing the current value in the list box as the first choice, and other options provided in the list box as other available choices shown in a scrollable choice list. In this particular choice list, the scrollable options are not only accessible by selecting an associated number, but are also available by speech recognition using a vocabulary limited to those options.
If the cursor is in a check box or a radio button and the user selects any text input command, functions 9724 and 9726 change the state of the check box or radio button, by toggling whether the check box or radio button is selected.
Figure 98 illustrates a help routine 9800, which is the cell phone embodiment's analog of the help mode described above with regard to figure 19 in the PDA embodiment. When this help mode is called when the cell phone is in a given state or mode of operation, function 9802 displays a scrollable help menu for the state that includes a description of the state along with a selectable list of help options and of all of the state's commands. Figure 99 displays such a help menu for the editor mode described above with regard to figures 67 and 76 through 78. Figure 100 illustrates such a help menu for the entry mode menu described above with regard to figure 68 and figures 79 and 80. As is shown in figures 99 and 100, each of these help menus includes a help options selection, which can be selected by means of a scrollable highlight and operation of the help key, and which will allow the user to quickly jump to the various portions of the help menu as well as to the other help related functions. Each help menu also includes a
As shown in figure 101, if the user in the editor mode makes a sustained press on the menu key, as indicated at 10100, the help mode will be entered for the editor mode, causing the cell phone to display the screen 10102. This displays the selectable help options, option 9902, and displays the beginning of the brief description of the operation of the editor mode 9900 as shown in figure 99. If the user presses the right arrow key of the cell phone, which functions as a page right button (since, in help mode, the navigational mode is a page/line navigational mode, as indicated by the characters "<P^L" shown in screen 10102), the display will scroll down a page, as indicated by screen 10104. If the user presses the page right key again, the screen will again scroll down a page, causing the screen to have the appearance shown at 10106. In this example, the user has been able to read the summary of the function of the editor mode 9904 shown in figure 99 with just two clicks of the page right key.
If the user clicks the page right key again, causing the screen to scroll down a page as is shown in the screen shot 10108, the beginning of the command list associated with the editor mode can be seen. The user can use the navigational keys to scroll the entire length of the help menu if so desired. In the example shown, when the user finds the key number associated with the entry mode menu, he presses that key, as shown at 10110, to cause the help mode to display the help menu associated with the entry mode menu, as shown at screen 10112.
It should be appreciated that whenever the user is in a help menu, he can immediately select any of the commands listed under the "selected by key" line 9910 shown in figure 99 by making the command's associated key press.
Thus, there is no need for a user to scroll down to the portion of the help menu in which commands are listed in order to press the key associated with a command to see its function. In fact, a user who thinks he understands the function associated with a key can merely make a sustained press of the menu key and then type the desired key to see a brief explanation of its function and a list of the commands that are available under it.
The commands listed under the "select by OK" line 9912 shown in figures 99 and 100 have to be selected by scrolling the highlight to the command's line in the menu and selecting by use of the OK command. This is because the commands listed below the line 9912 are associated with keys that are used in the operation of the help menu itself. This is similar to the commands listed in screen 7506 of the editor mode command list shown in figure 75, which are also only selectable by use of the OK command in that command list.
In the example of figure 101, it is assumed that the user knows that the entry preference menu can be selected by pressing the 9 key in the entry mode menu, and presses that key as soon as he enters help for the entry mode menu, as indicated by 10114. This causes the help menu for the entry preference menu to be shown, as illustrated at 10116.
In the example, the user presses the 1 key followed by the escape key. The 1 key briefly calls the help menu for the dictation defaults option, and the escape key returns to the entry preference menu at the location in the menu associated with the dictation defaults option, as shown by screen 10118. Such a selection of a key option followed by an escape allows the user to rapidly navigate to a desired portion of the help menu's command list merely by pressing the number of the key in that portion of the command list followed by an escape.
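The key-then-escape navigation pattern just described follows from help screens forming a simple stack. The sketch below is an illustrative model only; screen names and the class are hypothetical.

```python
# Hedged sketch of hierarchical help navigation: pressing a command's
# key descends into help for that command, and escape pops back to the
# help screen it was called from, so key-press-then-escape both previews
# a command and repositions the menu at it.

class HelpNavigator:
    def __init__(self, root):
        self.stack = [root]          # trail of help screens entered

    def press_key(self, command):
        """Descend into help for the command on the pressed key."""
        self.stack.append("help:" + command)
        return self.stack[-1]

    def escape(self):
        """Return to the help screen this one was called from."""
        if len(self.stack) > 1:
            self.stack.pop()
        return self.stack[-1]
```

Because every descent is recorded, an arbitrary sequence of key presses can always be unwound with the same number of escapes, which is what makes the hierarchical exploration in figures 101 and 102 predictable.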
In the example, the user presses the page right key as shown at 10120 to scroll down a page in the command list, as indicated by screen 10122. In the example, it is assumed the user selects the option associated with the 5 key, by pressing that key as indicated at 10124, to obtain a description of the press continuous, click discrete
As shown in figure 102, in the example, when the user returns to help for the entry preference menu, he or she
as shown at screen 10206. The user then presses escape again to return to the help menu from which the entry preference menu had been called, which is help for the entry mode menu, as shown at screen 10210. The user presses escape again to return to the help menu from which help for entry mode had been called, which is the help menu for the editor mode, as shown in screen 10214. In the example, it is assumed that the user presses the page right key six times to scroll down to the bottom portion, 9908, shown in figure 99, of the help menu for the editor mode. If the user desires, he can use a place command to access options in this portion of the help menu more rapidly. Once in the "other help" portion of the help menu, the user presses the down line button as shown at 10220 to select the editor screen option 10224 shown in the screen
10222. At this point, the user selects the OK button, causing the help for the editor screen itself to be displayed, as is shown in screen 10228. In the mode in which this screen is shown, phone key number indicators 10230 are used to label portions of the editor screen. If the user presses one of these associated phone numbers, a description of the corresponding portion of the screen will be displayed. In the example of figure 102, the user presses the 4 key, which causes an editor screen help screen 10234 to be displayed, which describes the function of the navigation mode indicator "<W^L" shown at the top of the editor screen help screen 10228.
In the example, the user presses the escape key three times, as is shown at numeral 10236. The first of these escapes from the screen 10234 back to the screen 10228, giving the user the option to select explanations of others of the numbered portions of the screen being described. In the example, the user has no interest in making such other selections, and thus has followed the first press of the escape key with two other rapid presses, the first of which escapes back to the help menu for the editor mode and the second of which escapes back to the editor mode itself.
As can be seen from figures 101 and 102, the hierarchical operation of help menus enables the user to rapidly explore the command structure on the cell phone. This can be used either to search for a command that performs a desired function, or merely to learn the command structure in a linear order.
Figures 103 and 104 describe an example of a user continuously dictating some speech in the editor mode and then using the editor's interface to correct the resulting text output.
The sequence starts in figure 103 with the user making a sustained press of the talk button as indicated at 10300 during which he says the utterance 10302. This results in
the recognition of this utterance, which in the example causes the text shown in screen 10304 to be displayed in the editor's text window 10305. The numeral 10306 points to the position of the cursor at the end of this recognized text, which is a non-selection cursor at the end of the continuous dictation.
It is assumed that the system has been set in a mode that causes the utterance to be recognized using continuous large vocabulary speech recognition. This is indicated by the characters "_LV" 10306 in the title bar of the editor window shown in screen 10304.
In the example, the user presses the 3 key to access the added navigation menu illustrated in figures 70 and 84, and then presses the 1 button to select the utterance. This makes the cursor correspond to the first word of the text recognized for the most recent utterance, as indicated at 10308 in screen 10310. Next, the user double-clicks the 7 key to select the capitalized cycle function described in figure 77. This causes the selected word to be capitalized, as shown at 10312.
Next, the user presses the right button, which in the current word/line navigational mode, indicated by the navigational mode indicator 10314, functions as a word right button. This causes the cursor to move to the next word to the right, 10316. Next the user presses the 5 key to set the editor to an extended selection mode, as described above with regard to functions 7728 through 7732 of figure 77. Then the user presses the word right key again, which causes the cursor to move to the word 10318 and the extended selection 10320 to include the text "got it".
Next, the user presses the 2 key to select the choice list command of figure 77, which causes a correction window 10322 to be displayed, with the selection 10320 as the first choice and with a first alphabetically ordered choice list shown as displayed at 10324. In this choice list, each choice is shown with an associated phone key number that can be used to select it.
In the example, it is assumed that the desired choice is not shown until the user scrolls to the choice list screen 10328, in which the desired word "product" is located.
As indicated by function 7706 in figure 77, when the user enters the correction window by a single press of the choice list button, the correction window's navigation mode is set to the page/item navigational mode, as is indicated by the navigational mode indicator 10326 shown in screen 10332.
In the example, the user presses the 6 key to select the desired choice, which causes it to be inserted into the editor's text window at the location of the cursor selection, causing the editor text window to appear as shown at 10330.
Next, the user presses the word right key three times to place the cursor at the location 10332. In this case, the recognized word is "results" and the desired word is the singular form of that word, "result." For this reason, the user presses the word form list button, which causes a word form list correction window 10334 to be displayed, which has the desired alternate form as one of its displayed choices. The user then selects the desired choice by pressing its associated phone key, causing the editor's text window to have the appearance shown at 10336.
As shown in figure 104, the user next presses the line down button to move the cursor down to the location 10400. The user then presses the 5 key to start an extended selection, and presses the word right key to move the cursor right one word to the location 10402, causing the current selection 10404 to be extended rightward by one word.
Next, the user double-clicks the 2 key to select the filter choices option described above with regard to functions 7712 through 7716 of figure 77. The second click of the 2 key is an extended click, as indicated by the down arrow 10406. During this extended press, the user continuously utters the letter string "p, a, i, n, s," which are the initial letters of the desired word, "painstaking". In the example, it is assumed that the correction window is in the continuous letter name recognition mode, as indicated by the characters "_abc" in the title bar of the correction window 10412.
In the example, the recognition of the utterance 10408 as filter input causes the correction window 10412 to show a set of choices that have been filtered against an ambiguous length filter corresponding to the recognition results from the recognition of that continuously spoken string of letter names. The correction window has a first choice, 10414, that starts with one of the character sequences associated with the ambiguous filter element. The portion of the first choice that corresponds to a sequence of characters associated with the ambiguous filter is indicated by the ambiguous filter indicator 10416. The filter cursor, 10418, is located after the end of this portion of the first choice.
Functions 8151 and 8162 of figure 81 cause a filter character choice window, 10422, to be displayed. Since the desired character is a "p," the user presses the 7 key to choose it, which causes that character to be made an unambiguous character of the filter string, and causes a new correction window, 10424, to be displayed as a result of that change in the filter.
Next, the user presses the character down button four times, which, due to the operation of function 8150 in figure 81, causes the filter cursor's selection to be moved four characters to the right in the first choice, which in the example is the letter "f," 10426. Since this is an ambiguous portion of the filter string, as indicated by the ambiguous filter marker 10428, the call to filter character choice in line 8152 of figure 81 will cause another character choice window to be displayed, as shown.
When the correct character is shown in that choice window, the user presses its associated key to cause the correct character, 10430, to be inserted into the current filter string and all the characters before it to be unambiguously confirmed, as indicated by numeral 10432.
At this time, the correct choice is shown associated with the phone key 6 and the user presses that phone key to cause the desired word to be inserted into the editor's text window as shown at 10434.
Next, in the example, the user presses the line down and word right keys to move the cursor selection down a line and to the right so as to select the text "period" shown at 10436. The user then presses the 8, or word form list, key, which causes a word form list correction window, 10438, to be displayed. The desired output, a period mark, is associated with the 4 phone key. The user presses that key and causes the desired output to be inserted into the text of the editor window as shown at 10440.
Figure 105 illustrates how a user can scroll a choice list horizontally by operation of functions 8132 and 8135, described above with regard to figure 81.
Figure 106 illustrates how the Key Alpha recognition mode can be used to enter alphabetic input into the editor's text window. Screen 10600 shows an editor text window in which the cursor 10602 is shown. In this example, the user presses the 1 key to open the entry mode menu described above with regard to figures 79 and 68, resulting in the screen 10604. Once in this mode, the user double-clicks the 3 key to select the Key Alpha recognition mode described above with regard to function 7938 of figure 79. This causes the system to be set to the Key Alpha mode described above with regard to figure 86, and the editor window to display the prompt 10606 shown in figure 106.
In the example, the user makes an extended press of the phone key as indicated at 10608, which causes a prompt window, 10610, to display the ICA words associated with each of the letters on the phone key that has been pressed. In response, the user makes the utterance "charley," 10612. This causes the corresponding letter "c" to be entered into the text window at the former position of the cursor and causes the text window to have the appearance shown in screen 10614.
In the example, it is next assumed that the user presses the talk key while continuously uttering two ICA words, "alpha" and "bravo," as indicated at 10616. This causes the letters "a" and "b" associated with these two ICA words to be entered into the text window at the cursor, as indicated by screen 10618. Next in the example, the user presses the 8 key, is prompted to say one of the three ICA words associated with that key, and utters the word "uniform" to cause the letter "u" to be inserted into the editor's text window as shown at 10620.
Figure 107 provides an illustration of the same Key Alpha recognition mode being used to enter alphabetic filtering input. It shows that the Key Alpha mode can be entered when in the correction window by pressing the 1 key followed by a double-click on the 3 key, in the same way that it can be from the text editor.
Figures 108 and 109 show how a user can use the interface of the voice recognition text editor described above to address, enter, and correct the text of e-mails in the cell phone embodiment.
"-- In figure 108, screen 10800 shows the e-mail option he selects the e -mail option when in the main menu, as illustrated in figure 66 .
In the example shown, it is assumed that the user wants to create a new e-mail message and thus selects the 1 option. This causes a new e-mail message window, 10802, to be displayed with the cursor located at the first editable location in that window. This is the first character in the portion of the e-mail message associated with the addressee of the message. In the example, the user makes an extended press of the talk button and utters the name "Dan Roth" as indicated by the numeral 10804.
In the example, this causes the slightly incorrect name, "Stan Roth," to be inserted into the message's addressee line as shown at 10806. The user responds by pressing the 2 key to display a choice list, 10806, for the selection. In the example, the desired name is shown on the choice list and the user presses the 5 key to select it, causing the desired name to be inserted into the addressee line as shown at 10808.
Next, the user presses the down line button twice to move the cursor down to the start of the subject line, as shown in screen 10810. The user then presses the talk button while saying the utterance "cell phone speech interface," 10812. In the example, this is slightly mis-recognized, and "sell phone speech interface" is inserted at the cursor, causing the e-mail edit window to have the appearance shown at 10814. In response, the user presses the line up button and the word left button to position the cursor selection at the position 10816. The user then presses the 8 key to cause a word form list correction window, 10818, to be displayed. In the example, the desired output is associated with the 4 key; the user presses that key and causes the desired output to be placed in the cursor's position as indicated in screen 10820.
Next, the user presses the line down button twice to place the cursor at the beginning of the body portion of the e-mail message, as shown in screen 10822. Once this is done, the user presses the talk button while continuously saying an utterance beginning "the new Elvis interface is working really," and the resulting recognized text is inserted at the cursor position as indicated by screen 10824.
In response, the user presses the line up key once and the word left key twice to place the cursor in the position shown by screen 10900 of figure 109. The user then presses the 5 key to start an extended selection and presses the word left key twice to place the cursor at the position
10902 and to cause the selection to be extended as is shown by 10904. At this point, the user double-clicks on the 2 key to enter the correction window, 10906, for the current selection and, during that press, continuously says the characters "t, h, e, space, n." This causes a new correction window, 10908, to be displayed with an unambiguous filter 10910 corresponding to the continuously entered letter names. The user then presses the word right key, which moves the filter cursor to the first character of the next word to the right, as indicated by numeral 10912. The user then presses the 1 key to enter the entry mode menu and presses the 3 key to select the AlphaBravo, or ICA word, input vocabulary. During the continuation of the press of the 3 key, the user says the continuous utterance "echo, lima, victor, india, sierra," 10914. This is recognized as the letter sequence "ELVIS," which is inserted, starting at the prior filter cursor position, into the first choice window of the correction window, 10916. In the example shown, it is assumed that AlphaBravo recognition is treated as unambiguous because of its reliability, causing the entered characters and all the characters before them in the first choice window to be treated as unambiguously confirmed, as is indicated by the unambiguous confirmation indication 10918 shown in screen 10916.
In the example, the user presses the OK key to select the current first choice because it is the desired output.
Figure 110 illustrates how re-utterance can be used to help obtain the desired recognition output. It starts with the correction window in the same state as was indicated by screen 10906 of figure 109. But in the example of figure 110, the user responds to the screen by pressing the 1 key twice, once to enter the entry mode menu, and a second time to select large vocabulary recognition. As indicated by functions 7908 through 7914 in figure 79, if large vocabulary recognition is selected in the entry mode menu when a correction window is displayed, the system interprets this as an indication that the user wants to perform a re-utterance, that is, to add a new utterance for the desired output into the utterance list for use in helping to select the desired output. In the example, the user continues the second press of the 1 key while using discrete speech to say the three words "the," "new," "Elvis" corresponding to the desired output. In the example, it is assumed that the additional discrete utterance information provided by this new utterance list entry causes the system to correctly recognize the first two of the three words. In the example it is assumed that the third of the three words is not in the current vocabulary, which will require the user to spell that third word with filtering input, such as was done by the utterance 10914 in figure 109.
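One way a re-utterance could sharpen the recognition can be sketched as follows. The specification does not give the scoring math, so the combination rule below, summing per-candidate scores across the original and the new discrete utterance, is an assumption made purely to illustrate why words supported by both utterances rise to the top of the choice list.

```python
# Hedged sketch: merge n-best scores from an original utterance and a
# re-utterance by summing per-candidate scores. An illustrative
# assumption, not the patent's actual scoring method.

def combine_nbest(*score_lists):
    """Each argument maps candidate -> score; return candidates ordered
    by their summed score, best first."""
    combined = {}
    for scores in score_lists:
        for word, score in scores.items():
            combined[word] = combined.get(word, 0.0) + score
    return sorted(combined, key=combined.get, reverse=True)

first = {"a": 0.5, "the": 0.4}     # original continuous utterance
second = {"the": 0.7, "thee": 0.2}  # new discrete re-utterance
print(combine_nbest(first, second))  # "the" now outranks "a"
```

With both utterances contributing, "the" accumulates the highest total even though it was not the top choice of the first utterance alone.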
Figure 111 illustrates how the editor functionality can be used to enter a URL text string for purposes of accessing a desired web page on a Web browser which is part of the cell phone's software.
The browser option screen, 11,100, shows the screen that is displayed if the user selects the Web browser option associated with the 7 key in the main menu, as indicated on figure 66. In the example, it is assumed that the user desires to enter the URL of a desired web site and selects the URL window option associated with the 1 key by pressing that key. This causes the screen 11,102 to display a brief prompt instructing the user. The user responds by using continuous letter-name spelling to spell the name of a desired web site during a continuous press of the talk button. In the embodiment shown, the URL editor is always in correction mode, so that the recognition of the utterance, 11,103, causes a correction window, 11,104, to be displayed. The user then uses filter string editing techniques of the type which have been described above to correct the originally mis-recognized URL to the desired spelling, as indicated at screen 11,106, at which time he selects the first choice, causing the system to access the desired web site.
Figures 112 through 114 illustrate how the editor interface can be used to navigate and enter text into the fields of Web pages .
Screen 11,200 illustrates the appearance of the cell phone's Web browser when it first accesses a new web site. A URL field, 11,201, is shown before the top of the web page, 11,204, to help the user identify the current web page. This position can be scrolled back to at any time if
the user wants to see the URL of the currently displayed web page. When web pages are first entered, they are in a document/page navigational mode in which the left and right keys act like the page back and page forward controls on most Web browsers. In this case, the word "document" is substituted for "page" because the word "page" is used in other navigational modes to refer to a screen full of media on the cell phone display. If the user presses the up or down keys, the web page's display will be scrolled by a full display page (or screen).
FIG. 116 illustrates how the cell phone embodiment shown allows a special form of correction window to be used as a list box when editing a dialog box of the type described above with regard to figure 115.
The example of figure 116 starts from the find dialog box being in the state shown at screen 11504 in figure 115.
From this state, the user presses the down line key twice to place the cursor in the "In:" list box, which defines in which portions of the cell phone's data the search conducted in response to the find dialog box is to take place. When the user presses the talk button with the cursor in this window, a list box correction window, 11512, is displayed, which shows the current selection in the list box as the current first choice and provides a scrollable list of the other list box choices, with each such other choice being shown with an associated phone key number. The user could scroll through this list and choose the desired choice by phone key number or by using a highlighted selection. In the example, the user continues the press of the talk key and says the desired list box value with the utterance,
11514. In list box correction windows, the active recognition vocabulary is limited to the values that can occur in the list box, and, in the example, recognition of the utterance causes the desired value to be selected in the list box of the dialog box, as is indicated at 11518.
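The list box correction window just described can be sketched in code. The function name and the ordering rule, current selection first, remaining legal values after it, each paired with a phone key digit, are assumptions for illustration:

```python
# Sketch of a list-box correction window: the current list-box value is
# presented as the first choice, and every legal value is paired with a
# phone key number for one-press selection. Assumed behavior.

def listbox_choices(values, current):
    """Return (phone_key, value) pairs, current selection first."""
    ordered = [current] + [v for v in values if v != current]
    return list(enumerate(ordered, start=1))

print(listbox_choices(["All data", "Contacts", "Notes"], "Notes"))
```

Because recognition is restricted to the list box's legal values, a single utterance or a single key press suffices to pick any entry.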
FIG. 117 illustrates a series of interactions between a user and the cell phone interface, which display some of the functions which the interface allows the user to perform when making phone calls. The screen 6400 of figure 117 is the same top-level phone mode screen described above with regard to figure 64.
If when it is displayed the user selects the last navigation button, which is mapped to the name dial command, the system will enter the name dial mode, the basic functions of which are those illustrated in the pseudocode of figure 119. As can be seen from that figure, this mode allows a user to select names from a contact list by saying them, and, if there is a mis-recognition, to correct it by alphabetic filtering or by selecting choices from a potentially scrollable choice list in a correction window which is similar to those described above.
When the cell phone enters the name dial mode, an initial prompt screen, 11700, is shown as indicated in figure 117. In the example, the user utters a name, 11702, during the pressing of the talk key. In name dial, such utterances are recognized with the vocabulary automatically limited to the name vocabulary, and the resulting
recognition causes a correction window, 11704, to be displayed. In the example, the first choice is correct, so the user selects the OK key, causing the phone to initiate a call to the phone number associated with the named party in the user's contact list.
When the phone call is connected, a screen, 11706, is displayed having the same ongoing call indicator, 7414, described above with regard to figure 75. At the bottom of the screen, as indicated by the numeral 11708, an indication is given of the functions associated with each of the navigation keys during the ongoing call. In the example, the user selects the down button, which is associated with the same Notes function described above with regard to figure 64. In response, an editor window, 11710, is displayed for the Notes outline, with an automatically created heading item, 11712, being created in the Notes outline for the current call, labeling the party to whom it is made and its start and, ultimately, its end time. A cursor, 11714, is then placed at a new item indented under the call's heading.
In the example, the user says a continuous utterance, 11714, during the pressing of the talk button, causing recognized text corresponding to that utterance to be inserted into the notes outline at the cursor, as indicated in screen 11716. Then the user double-clicks the 6 key to start recording, which causes an audio graphic representation of the sound to be placed in the notes editor window at the current location of the cursor. As indicated at 11718, audio from portions of the phone call in which the cell phone operator is speaking is underlined in the audio graphics to make it easier for the user to keep track of who has been talking, and for how long, in the call and, if desired, to be able to better search for portions of the recorded audio in which one or the other of the phone call's parties was speaking.
In the example of figure 117, the user next double-clicks on the star key to select the task list. This shows a screen, 11720, that lists the currently opened tasks on the cell phone. In the example, the user selects the task associated with the 4 phone key, which is another notes editor window displaying a different location in the notes outline. In response, the phone's display shows a screen, 11722, of that portion of the notes outline.
In the example, the user presses the up key three times to move the cursor to location 11724 and then presses the 6 key to start playing the sound associated with the audio graphics representation at the cursor, as indicated by the motion between the cursors of screens 11726 and 11728. Unless the play only to me option, 7513, described above with regard to figure 75 is on, the playback of the audio in screen 11728 will be played to both sides of the current phone call, enabling the user of the cell phone to share an audio recording with the other party during the cell phone call.
FIG. 118 illustrates that when an edit window is recording audio, such as is shown in screen 11717 near the bottom middle of figure 117, the user can turn on speech recognition during the recording of such audio to cause the audio recorded during that portion to also have speech recognition performed upon it. In the example shown, during the recording shown in screen 11717 the user presses the talk button and speaks the utterance, 11800. This causes the text associated with that utterance, 11802, to be inserted in the editor window, 11806. Audio recorded after the duration of the recognition is recorded merely with audio graphics. Normally this would be used in a manner in which the user tries to speak clearly during an utterance, such as the utterance 11800, which is to be recognized, and then would feel free to talk more casually during portions of conversation or dictation which are being recorded only with audio. Normally audio is recorded in association with speech recognition so that the user can later go back, listen to, and correct any dictation, such as the dictation 11802, which was incorrectly recognized during a recording.
FIG. 119 illustrates how the system enables the user to select a portion of audio, such as the portion 11900 shown in that figure, by a combination of the extended selection key and play or navigation keys, and then to select the recognize audio dialog box discussed above with regard to functions 9000 through 9014 of figure 90 to have the selected audio recognized, as indicated at 11902. In the example of figure 119, the user has selected the show recognized audio option, 9026, shown in figure 90, which causes the recognized text, 11902, to be underlined, indicating that it has playable audio associated with it.
FIG. 120 illustrates how a user can select a portion, 12,000, of recognized text that has associated recorded audio, and then select to have that text stripped from its associated recognized audio by selecting the option 9024, shown in figure 90, in a submenu under the editor options menu. This leaves just the audio, 12,002, and its corresponding audio graphic representation, remaining in the portion of media where the recognized text previously stood.
FIG. 121 illustrates how the function 9020, of FIG. 90, from under the audio menu of the edit options menu allows the user to strip the recognition audio that has been associated with a portion, 12100, of recognized text from that text, as indicated at 12102 in FIG. 121.
FIGS. 122 through 125 provide illustrations of the operation of the digit dial mode described in pseudocode in FIG. 126. If the user selects the digit dial mode, such as by pressing the 2 phone key when in the main menu, as illustrated at function 6552 of FIG. 65, or by selecting the left navigational button when the system is in the top-level phone mode shown in screen 6400 of FIG. 64, the system will enter the digit dial mode shown in FIG. 126 and will display a prompt screen, 12202, which prompts the user to say a phone number. When the user says an utterance of a phone number, as indicated at 12204, that utterance will be recognized. If the system is quite confident that the recognition of the phone number is correct, it will automatically dial the recognized phone number, as indicated at 12206. If the system is not that confident of the phone number's recognition, it will display a correction window, 12208. If the correction window has the desired number as the first choice, as is indicated at 12210, the user can merely select it by pressing the OK key, which causes the system to dial the number as indicated at 12212. If the correct choice is on the first choice list, as is indicated at 12214, the user can merely press the phone key number associated with that choice, causing the system to dial the number as is indicated at 12216.
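The confidence-gated dialing decision described above can be sketched as follows. The threshold value and the function names are illustrative assumptions; the specification says only that the system dials automatically when "quite confident" and otherwise shows a correction window.

```python
# Hedged sketch of the digit-dial decision: auto-dial when the top
# recognition hypothesis clears a confidence threshold, otherwise
# present a correction window. Threshold and names are assumed.

CONFIDENCE_THRESHOLD = 0.9  # assumed value for illustration

def handle_digit_dial(nbest):
    """`nbest` is a list of (number, confidence) pairs, best first."""
    number, confidence = nbest[0]
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("dial", number)
    # Otherwise show the first choice plus a keyed choice list.
    return ("correction_window", [n for n, _ in nbest])

print(handle_digit_dial([("6175551234", 0.95)]))
print(handle_digit_dial([("6175551234", 0.60), ("6175551284", 0.30)]))
```

A confident result dials immediately; a marginal one falls through to the correction window, mirroring screens 12206 and 12208.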
If the correct number is neither the first choice nor in the first choice list, as indicated in the screen 12300 shown at the top of FIG. 123, the user can check to see if the desired number is on one of the screens of the second choice list by either repeatedly pressing the page down key, as indicated by the numeral 12302, or repeatedly pressing the item down key, as is indicated at 12304. If by scrolling
through the choice list in either of these methods the user sees the desired number, the user can select it either by pressing its associated phone key or by moving the choice highlight to it and then pressing the OK key. This will cause the system to dial the number as indicated at screen 12308. It should be appreciated that because the phone numbers in the choice list are numerically ordered, the user is able to find the desired number rapidly by scrolling through the list. In the embodiment shown in these figures, digit change indicators, 12310, are provided to indicate the digit column of the most significant digit by which any choice differs from the choice ahead of it on the list. This makes it easier for the eye to scan for the desired phone number.
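The digit change indicator computation lends itself to a short sketch. The function name is an assumption; the rule implemented, for each entry in the numerically ordered list, find the most significant digit column where it first differs from the entry above it, follows the description of indicators 12310.

```python
# Sketch of the digit-change indicator: for a sorted choice list, mark
# the leftmost digit column where each number differs from the one
# above it, so the eye can scan the list quickly.

def change_indicator_columns(sorted_numbers):
    """Return, per entry, the column of the first differing digit
    relative to the previous entry (None for the first entry)."""
    cols = [None]  # the first entry has nothing above it
    for prev, cur in zip(sorted_numbers, sorted_numbers[1:]):
        col = next((i for i, (a, b) in enumerate(zip(prev, cur)) if a != b),
                   min(len(prev), len(cur)))
        cols.append(col)
    return cols

choices = ["5551234", "5551834", "5558234"]
print(change_indicator_columns(choices))  # [None, 4, 3]
```

A display could then highlight the digit at each returned column, which is what makes a numerically ordered list fast to scan.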
FIG. 124 illustrates how the digit dial mode allows the user to navigate to a digit position in the first choice and correct any error which exists within it. In FIG. 124, this is done by speaking the desired number, but the user is also allowed to correct the desired number by pressing the appropriate phone key.
As illustrated in FIG. 125, the user is also able to edit a misperceived phone number by inserting a missing digit as well as by replacing a mis-recognized one.
The invention described above has many aspects which can be used for the entering and correcting of speech recognition, as well as other forms of recognition, on many different types of computing platforms, including all those shown in FIGS. 3 through 8. Many of the features of the invention described with regard to FIG. 94 can be used in situations where a user desires to enter and/or edit text without having to pay close visual attention to those tasks. For example, this could allow a user to listen to e-mail and dictate responses while walking in a park, without the need to look closely at his cell phone or other dictation device. One particular environment in which such audio feedback is useful for speech recognition and other control functions, such as phone dialing and phone control, is the automotive arena, such as is illustrated in FIG. 126.
In the embodiment shown, the automobile contains a computer, 12600, which is connected to a cell phone/wireless communication system, 12602, 12604. In many embodiments, the car's electronic system will have a short range wireless transceiver such as a Bluetooth or other short range transceiver, 12606. These can be used to communicate with a wireless headphone, 12608, or the user's cell phone, 12610, so that the user can have the advantage of accessing information stored on his normal cell phone while using his car. Preferably, the cell phone/wireless transceiver, 12602, can be used not only to send and receive cell phone calls
but also to send and receive e-mail and digital files, such as text files which can be listened to and edited with the functionality described above, and audio Web pages.
The input device for controlling many of the functions described above with regard to the shown cell phone embodiment can be a phone keypad, 12612, that is preferably located in a position such as on the steering wheel of the automobile, which will enable a user to access its keys without unduly distracting him from the driving function. In fact, with a keypad having a location similar to that shown in FIG. 126, a user can keep the fingers of one hand around the rim of the steering wheel while selecting keypad buttons with the thumb of the same hand. In such an embodiment, preferably the system would have the TTS keys function described above with regard to functions
9404 through 9414 of FIG. 94 to enable the user to determine which key he is pressing, and the function of that key, without having to look at the keypad. In other embodiments, a touch sensitive keypad that responds to a mere touching of its phone keys with such information could also be provided, which would be even easier and more rapid to use.
FIGS. 127 and 128 illustrate that most of the capabilities described above with regard to the cell phone embodiment can be used on other types of phones, such as on the cordless phone shown in FIG. 127 or on the landline phone indicated in FIG. 128.
It should be understood that the foregoing description and drawings are given merely to explain and illustrate, and that the invention is not limited thereto except insofar as the interpretation of the appended claims is so limited.
Those skilled in the art who have the disclosure before them will be able to make modifications and variations therein without departing from the scope of the invention.
The invention of the present application, as broadly claimed, is not limited to use with any one type of operating system, computer hardware, or computer network, and, thus, other embodiments of the invention could use differing software and hardware systems.
Furthermore, it should be understood that the program behaviors described in the claims below, like virtually all program behaviors, can be performed by many different programming and data structures, using substantially different organization and sequencing. This is because programming is an extremely flexible art in which a given idea of any complexity, once understood by those skilled in the art, can be manifested in a virtually unlimited number of ways. Thus, the claims are not meant to be limited to the exact functions and/or sequence of functions described in the text. This is particularly true since the pseudocode described in the text above has been highly simplified to let it more efficiently communicate that which one skilled in the art needs to know to implement the invention, without burdening him or her with unnecessary details. In the interest of such simplification, the structure of the pseudocode described above often differs significantly from the structure of the actual code that a skilled programmer would use when implementing the invention. Furthermore, many of the programmed behaviors which are shown being performed in software in the specification could be performed in hardware in other embodiments.
In the many embodiments of the invention discussed above, various aspects of the invention are shown occurring together which could occur separately in other embodiments of those aspects of the invention.
It should be appreciated that the present invention extends to methods, apparatus, systems, and programming recorded in machine-readable form, for all the features and aspects of the invention which have been described in this application as filed, including its specification, its drawings, and its original claims.