BACKGROUND

Within the field of computing, many scenarios involve the evaluation of a dialogue between a user and a device in order to identify and fulfill the requests of a user. For example, speech-to-text systems may be developed and applied to translate a verbal expression into a formal request, and the results may be provided in the form of speech rendered by a text-to-speech engine. Many such evaluation techniques may be devised and utilized, including those that include a speech recognizer that identifies spoken words, and/or a language parser that arranges the recognized words into parts of speech and phrases that conform with the standards of the spoken language, in order to achieve an automated understanding of the user's request.
SUMMARY

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key factors or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The accuracy and/or capabilities of expression-based user interfaces may be enhanced by incorporating more sophisticated expression evaluation techniques. As a first such example, many expression evaluation techniques are configured to recognize collections of spoken words; to identify possible translations of the spoken words according to a language model with a score representing an accuracy probability; and to select, among competing translations, the highest-probability translation for further evaluation. Subsequently received expressions are then evaluated in the context of the highest-probability translation of the earlier expression. However, such selection may not be configured to continue tracking the accuracy probability of a second possible translation that may initially have a lower accuracy probability, but that may exhibit growing accuracy probability in the context of the subsequently received expressions. For example, a user may submit an ambiguous query, but may later request a modification of the query, e.g., by indicating that the device has chosen incorrectly among two possible translations of the user's expression, or by changing the subjects of an otherwise static request (e.g., requesting a list of movies in a particular movie genre, and then asking to restrict the request with a range of release dates). If the device does not continue tracking lower-probability but nevertheless possible translations, the system may demonstrate an impairment of understanding the context of the continuing dialogue with the user.
As a second such example, the propagation of information between stages in a multi-stage dialogue evaluation system may be difficult to implement in a flexible but also efficient manner. In particular, some techniques may utilize knowledge sources to enable a selection among possible translations, but limiting the use of knowledge sources to such a comparatively late stage in the translation process may not take full advantage of such information. Instead, a model-based carry-over technique may be implemented that utilizes the knowledge source at an earlier stage, and that formulates, estimates, and/or compares dialogue hypotheses using generative and/or discriminative hypothesis modeling. Techniques designed in this manner may be capable of reducing the set of dialogue hypotheses under comparison and/or adjusting the hypothesis probabilities of the dialogue hypotheses in view of domain-based knowledge.
Presented herein are techniques for evaluating a dialogue with a user. An embodiment of such techniques may enable communication with a user of a device by generating a dialogue hypothesis set comprising at least two dialogue hypotheses respectively having a hypothesis probability; ranking the dialogue hypothesis set according to the hypothesis probabilities; after the ranking, upon identifying a low-ranking dialogue hypothesis having a hypothesis probability below a hypothesis retention threshold, discarding the low-ranking dialogue hypothesis; after the discarding, using a knowledge source, adjusting the hypothesis probabilities of the respective dialogue hypotheses; after the adjusting, re-ranking the dialogue hypothesis set according to the hypothesis probabilities; and, for a high-ranking dialogue hypothesis having a hypothesis probability exceeding a hypothesis confidence threshold, executing an action fulfilling the high-ranking dialogue hypothesis.
Another embodiment of the techniques presented herein may enable communication with a user of a device by generating a dialogue hypothesis set; based on respective expressions of the user within the dialogue, applying an expression recognizer and a natural language processor to store in the dialogue hypothesis set at least one dialogue hypothesis of the expression; for the previous dialogue hypotheses in the dialogue hypothesis set that were generated for a previous expression of the dialogue, updating the subject of the at least one slot of the previous dialogue hypothesis; using a knowledge source, adjusting the hypothesis probabilities of the respective dialogue hypotheses; ranking the dialogue hypothesis set according to the hypothesis probabilities; and, for a high-ranking dialogue hypothesis having a hypothesis probability exceeding a hypothesis confidence threshold, executing an action fulfilling the high-ranking dialogue hypothesis. These and other embodiments and variations of such techniques are presented herein.
To the accomplishment of the foregoing and related ends, the following description and annexed drawings set forth certain illustrative aspects and implementations. These are indicative of but a few of the various ways in which one or more aspects may be employed. Other aspects, advantages, and novel features of the disclosure will become apparent from the following detailed description when considered in conjunction with the annexed drawings.
DESCRIPTION OF THE DRAWINGS

FIG. 1 is an illustration of an exemplary scenario featuring an evaluation of a dialogue with a user.
FIG. 2 is an illustration of an exemplary scenario featuring an evaluation of a dialogue with a user in accordance with the techniques presented herein.
FIG. 3 is an illustration of a first exemplary method of evaluating a dialogue with a user in accordance with the techniques presented herein.
FIG. 4 is an illustration of a second exemplary method of evaluating a dialogue with a user in accordance with the techniques presented herein.
FIG. 5 is a component block diagram illustrating an exemplary system for evaluating a dialogue with a user in accordance with the techniques presented herein.
FIG. 6 is an illustration of an exemplary computer-readable medium including processor-executable instructions configured to embody one or more of the provisions set forth herein.
FIG. 7 is an illustration of an exemplary scenario featuring variations in the communication of errors to the user in accordance with a variation of the techniques presented herein.
FIG. 8 is an illustration of an exemplary scenario featuring an evaluation of a dialogue hypothesis set in view of a sequence of expressions received from the user and comprising a dialogue in accordance with the techniques presented herein.
FIG. 9 is an illustration of an exemplary computing environment wherein a portion of the present techniques may be implemented and/or utilized.
DETAILED DESCRIPTION

The claimed subject matter is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to facilitate describing the claimed subject matter.
A. INTRODUCTION

FIG. 1 is an illustration of an exemplary scenario 100 featuring an exemplary technique for evaluating a dialogue between a device and a user 102. In this exemplary scenario 100, upon detecting an expression 106 spoken by the user 102 as part of a dialogue 104, the device utilizes a speech recognizer 108 to recognize the words 110 of the expression 106, and a natural language parser 112 to translate the expression 106 into a parsed expression 114. Such parsing may be applied to each expression 106 received from the user 102 in a sequence comprising a multi-turn dialogue 104.
As further illustrated in the exemplary scenario 100 of FIG. 1, at a first time point 118, the user 102 initiates a new dialogue 104 with the device by speaking the expression 106 "show me movies." The speech recognizer 108 receives a recording of the expression 106 and identifies the words 110 (such as "show," "me," and "movies," as well as possibly incorrect recognized words 110, such as "meme," recognized as the word "me" and the leading portion of the word "movies"). A natural language parser 112 may endeavor to arrange the identified words 110 into a parsed expression 114, such as by matching the identified words 110 with a part of speech 116 in a known model of a phrase in the language spoken by the user 102. In this manner, the device may recognize the expression 106 of the user 102 as a request to show movies of some type.
At a second time point 120, the user 102 may speak a second expression 106, including only the term "action." The speech recognizer 108 may again be applied, thus recognizing the expression 106 as either the word "action" or as the term "faction," which may (e.g.) refer to a movie having this title. The natural language parser 112 may use various determinative criteria to select the term "faction" as more probable than the word "action" (e.g., a popular movie named Faction may currently be playing in theaters). Additionally, the natural language parser 112 may combine the term "faction" as an additional term with the previously evaluated parsed expression 114, and may conclude that the user 102 is asking to see the movie Faction. However, this evaluation may result in an error due to the incorrectly recognized word 110 of the expression 106.
At a third time point 122, the user 102 may perceive the source of the error, and may attempt to correct it by specifying a different genre (e.g., speaking the expression 106 "no, comedies"), intended as a contrast with the previous request for movies in the "action" genre. Accordingly, the speech recognizer 108 may identify the individual words 110 "no" and "comedies." Viewed in the context of the second parsed expression 114, this expression 106 may be perceived as a request to substitute the genre of "comedies" for the previously specified genre of "action." However, the device may simply evaluate the expression 106 in isolation from the previous expression 106, and may therefore interpret the expression 106 of the user 102 as indicating the opposite request, i.e., to exclude all comedies from a set of movies. Accordingly, the natural language processor 112 may arbitrarily translate "no comedies" into a set of filters 118 to be applied to a current query (e.g., excluding films in the genre of "comedy" from the result set). In this manner, the device may interact with the user 102 to identify the parsed expressions 114 with a highly probable evaluation of the request spoken by the user 102.
In the exemplary scenario 100 of FIG. 1, the dialogue evaluation system results in an incorrect evaluation of the dialogue 104 of the user 102 for at least several reasons. As a first example, the language evaluation system does not track multiple hypotheses. For example, the word 110 "faction" appeared to be the higher-probability parsed expression 114 at the second time point 120, and so was selected for the dialogue 104, while the word 110 "action" was determined to have lower probability and was discarded. However, at the third time point 122, the expression 106 of the user ("no, comedies") has no connection with the word 110 "faction," but is semantically related to the word 110 "action" as an indication of an alternative genre selection. The connection might have been revealed by continuing to track the word 110 "action" as a lower-probability but nevertheless plausible translation, but instead is lost, resulting in a loss of information for disambiguating the expression 106 at the third time point 122. That is, in this exemplary scenario 100, there is no way to reevaluate a first expression 106 in the context of a subsequent expression. As a second example, the language evaluation system is incapable of disambiguating the phrases "no comedies" and "no, comedies" while associating the words 110 with the parts of speech 116. This inability results from a lack of semantic guidance as to the carry-over model; e.g., the language evaluation system has no source of information as to patterns of language that may enable an assessment of the probabilities of various translations 114 in the context of the dialogue 104. For at least these reasons, the language evaluation system in the exemplary scenario 100 of FIG. 1 demonstrates inadequate proficiency in evaluating the dialogue 104 with the user 102.
B. PRESENTED TECHNIQUES

Presented herein are techniques that may facilitate the evaluation of a dialogue 104 with a user 102 in order to fulfill the requests expressed therein.
In accordance with these techniques, for the respective expressions 106 of the dialogue 104, a set of dialogue hypotheses is identified and tracked, along with a hypothesis probability of the respective dialogue hypotheses. As a first example, such tracking may enable a retroactive identification of and recovery from a language ambiguity in a preceding expression 106; e.g., preceding expressions 106 may later be reinterpreted in the context of later expressions 106, and paths of dialogue 104 that appeared less probable earlier in the dialogue 104 may end up having a higher, and perhaps the highest, probability in the dialogue hypothesis set. As a second example, the carry-over effect of parsed expressions 114 for clarification, modification, and/or reversal by later expressions 106 may be guided by a model-based system. Various techniques, including carefully tailored rules, machine-based learning using annotated training sets, and combinations thereof, may be used to develop carry-over models reflecting typical patterns of dialogue 104 in a particular language, and the use of such model-based carry-over techniques may promote the accurate determination of hypothesis probabilities.
FIG. 2 presents an illustration of an exemplary scenario 200 featuring the evaluation of a dialogue 104 in accordance with the techniques presented herein. In this exemplary scenario 200, a user 102 engages in a dialogue 104 with a device through a sequence of expressions 106 that are respectively evaluated by developing a dialogue hypothesis set 202, comprising a set of dialogue hypotheses 204 respectively having a hypothesis probability 206 as an estimate of the accurate interpretation of the dialogue 104.
As illustrated in the exemplary scenario 200 of FIG. 2, at a first time point 220, the user 102 speaks the expression 106 "show me movies," which the language evaluation system interprets as one of two dialogue hypotheses 204: "show me movies" (having a higher hypothesis probability 206), and an alternative dialogue hypothesis 204 "show my movies," having a lower but nevertheless plausible hypothesis probability 206. While the dialogue hypothesis 204 having the higher hypothesis probability 206 may be tentatively accepted, the lower-probability dialogue hypothesis 204 is retained in the dialogue hypothesis set 202.
At a second time point 222, the user 102 next speaks the expression 106 "action." A knowledge source 208 is accessed for assistance with interpreting the expression 106 in the context of the dialogue 104, and the knowledge source 208 provides two relevant facts 210: that a movie entitled "Faction" is now popular, and that the user 102 appreciates movies in the "action" genre. Accordingly, the hypothesis probabilities 206 of the previous dialogue hypotheses 204 that are already in the dialogue hypothesis set 202 are updated to reflect both the second expression 106 and the related facts 210 in the knowledge source 208. In particular, the word 110 "action" is inserted as a subject 214 into a slot 212 of the previous dialogue hypothesis 204 "show me movies," as the language model may indicate that a noun describing a type of content (such as a movie) may be preceded by an adjective describing a genre of such movies (such as the action genre). While this dialogue hypothesis 204 remains highly probable, it may be determined to be less probable than a new dialogue hypothesis 204 relating to the Faction movie, and/or may be determined to be an unusual pattern of dialogue. Accordingly, the hypothesis probability 206 of the "show me action movies" dialogue hypothesis 204 may be marginally reduced, while a new dialogue hypothesis 204 may be added for the expression "show me the movie called Faction," with a high hypothesis probability 206. Conversely, the second previous dialogue hypothesis 204 for the phrase "show my movies" may be determined to be less probably interpreted as the updated expression 106 "show my action movies," less in accordance with typical dialogue patterns according to a carry-over model, and/or unsupported by the knowledge source 208 (e.g., the user 102 may not have any personal movies matching the adjective "action"). Accordingly, the hypothesis probability 206 of this dialogue hypothesis 204 may be further reduced. The dialogue hypotheses 204 of the dialogue hypothesis set 202 are then re-ranked according to the updated hypothesis probabilities 206 after adjustment in view of the knowledge source 208. Again, the dialogue hypothesis 204 having the highest hypothesis probability 206 in the dialogue hypothesis set 202 may be tentatively accepted, but the lower-probability dialogue hypotheses 204 may be retained in the dialogue hypothesis set 202 for further evaluation.
At a third time point 224, the user 102 may speak the expression 106 "no, comedies." This expression 106 may be evaluated in the context of the knowledge source 208, which may reveal a fact 210 that the user 102 also likes comedy movies. Additionally, this third expression 106, viewed in the context of the dialogue hypothesis set 202, may be highly correlated with the previous dialogue hypothesis 204 of "show me action movies," since it appears highly probable that the user 102 is asking to change a previously specified genre of movies. Accordingly, after updating the slots 212 of the dialogue hypothesis 204 from the current subject 214 of "action" to the updated subject 214 of "comedy," the hypothesis probability 206 of this dialogue hypothesis 204 is increased to reflect the contextual consistency of the sequence of expressions 106 in the dialogue 104 (e.g., the pairs of expressions 106 reflect natural and typical transitions therebetween according to the language model). Additionally, a new dialogue hypothesis 204 may also be inserted into the dialogue hypothesis set 202 for the expression 106 "no comedies" (indicating that the user 102 only wishes to view action movies that are not also comedies). The carry-over model may indicate that this dialogue hypothesis 204 is less probable (e.g., that users 102 infrequently request filtered sets of movies through this pattern of expressions 106), and may therefore assign a lower but nevertheless high hypothesis probability 206 to this new dialogue hypothesis 204.
As further illustrated at the third time point 224 in the exemplary scenario 200 of FIG. 2, the other previous dialogue hypotheses 204 may be determined to be less probable in the context of the third expression 106 (e.g., it may not be possible to determine a significant nexus between the dialogue hypothesis 204 and the current expression 106), and the hypothesis probabilities 206 of these previous dialogue hypotheses 204 may be significantly reduced. Accordingly, the dialogue hypotheses 204 of the dialogue hypothesis set 202 are now re-ranked according to the updated hypothesis probabilities 206 after adjustment in view of the knowledge source 208. Notably, the hypothesis probability 206 for the dialogue hypothesis 204 "show my action movies" may now be sufficiently reduced (e.g., below a hypothesis retention threshold of 60%) that it is removed 216 from the dialogue hypothesis set 202. Indeed, this adjustment may be determined even before reevaluating this dialogue hypothesis 204 in the context of the knowledge source 208 (e.g., there may be no relevant information that may render this dialogue hypothesis 204 plausible), and the removal may be performed before needlessly reevaluating the dialogue hypothesis 204 with the knowledge source 208, thereby enhancing the efficiency of the evaluation system. Conversely, the hypothesis probability 206 for the highest-ranking dialogue hypothesis 204 may now be sufficiently high (e.g., above a hypothesis confidence threshold) to prompt the execution of an action 218 in fulfillment of the dialogue hypothesis 204, such as showing a list of available movies in the comedy genre. Nevertheless, the dialogue hypotheses 204 having lower but still plausible hypothesis probabilities 206 are still retained in the dialogue hypothesis set 202; e.g., the user 102 may subsequently indicate that the highest-probability dialogue hypothesis 204 is incorrect, and that the user 102 actually did intend to request movies that are in the "action" genre and not also in the "comedy" genre.
C. TECHNICAL EFFECTS

As illustrated in the exemplary scenario 200 of FIG. 2, the evaluation of the dialogue 104 using a dialogue hypothesis set 202 may exhibit one or more technical advantages over the dialogue evaluation illustrated in the exemplary scenario 100 of FIG. 1. As a first such example, by developing and tracking a set of dialogue hypotheses 204, including those that are not the highest-probability dialogue hypothesis 204 at a particular time but that may later be reevaluated in the context of later expressions 106, the dialogue evaluation may retroactively discover and recover from language ambiguities. As a second such example, by using a knowledge source 208 in various ways, including during the model-based carry-over wherein previous dialogue hypotheses are updated based on a subsequent expression 106, the dialogue evaluation system more accurately identifies the hypothesis probabilities 206 of the dialogue hypotheses 204. As a third such example, these techniques may be suitable for the formulation, estimation, and/or comparison of dialogue hypotheses using discriminative approaches based on a conditional probability distribution among the dialogue hypotheses 204, and/or using generative approaches involving a joint probability distribution of potential dialogue hypotheses 204. As a fourth such example, by representing the dialogue hypotheses 204 as a collection of slots 212 that may be filled and updated with various subjects 214 (e.g., replacing a first genre of "action" with an updated genre of "comedies"), the language evaluation system enables the clarification, modification, and updating of previous expressions 106 in a manner that is consistent with typical speech patterns in the natural language of the user 102 and the dialogue 104. As a fifth such example, by reevaluating the "show my action movies" dialogue hypothesis 204 and removing 216 it from the dialogue hypothesis set 202 even before considering it in the context of the knowledge source 208, the dialogue evaluation system may avoid unhelpful continued evaluation of low-probability dialogue hypotheses 204, thereby economizing the computational resources of the dialogue evaluation system. Such economy may, e.g., reduce the latency of the dialogue evaluation system between receiving the expressions 106 of the user 102 and executing the action 218 for the highest-probability dialogue hypothesis 204. These and other advantages may be achievable through the development and use of dialogue evaluation systems in accordance with the techniques presented herein.
D. EXEMPLARY EMBODIMENTS

FIG. 3 presents an illustration of an exemplary first embodiment of the techniques presented herein, illustrated as an exemplary method 300 of evaluating a dialogue 104 with a user 102. The exemplary method 300 may be implemented, e.g., as a set of instructions stored in a memory component (e.g., a memory circuit, a platter of a hard disk drive, a solid-state storage device, or a magnetic or optical disc) of a device having a processor, where the instructions, when executed on the processor, cause the device to operate according to the techniques presented herein. The exemplary method 300 begins at 302 and involves executing 304 the instructions on the processor of the device. In particular, the execution of the instructions on the processor causes the device to generate 306 a dialogue hypothesis set 202 comprising at least two dialogue hypotheses 204 respectively having a hypothesis probability 206. The execution of the instructions on the processor also causes the device to rank 308 the dialogue hypothesis set 202 according to the hypothesis probabilities 206 of the respective dialogue hypotheses 204. The execution of the instructions on the processor also causes the device to, after the ranking 308, upon identifying a low-ranking dialogue hypothesis 204 having a hypothesis probability 206 that is below a hypothesis retention threshold, discard 310 the low-ranking dialogue hypothesis 204.
The execution of the instructions on the processor also causes the device to, after the discarding 310, using a knowledge source 208, adjust 312 the hypothesis probabilities 206 of the respective dialogue hypotheses 204. The execution of the instructions on the processor also causes the device to, after the adjusting 312, re-rank 314 the dialogue hypothesis set 202 according to the hypothesis probabilities 206 of the respective dialogue hypotheses 204. The execution of the instructions on the processor also causes the device to determine whether a high-ranking dialogue hypothesis 204 exists that has a hypothesis probability 206 exceeding a hypothesis confidence threshold. If so, the execution of the instructions on the processor may cause the device to execute 318 an action 218 fulfilling the high-ranking dialogue hypothesis 204; and if not, then the device may await an additional expression 106 (optionally prompting the user 102 for additional expressions 106 providing more or clarifying information), and may then return to the generating 306 of dialogue hypotheses 204. By generating and tracking the hypothesis probabilities 206 of a dialogue hypothesis set 202 in this manner, the execution of the instructions on the processor causes the device to evaluate the dialogue 104 with the user 102 in accordance with the techniques presented herein, and so the exemplary method 300 ends at 320.
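Expressed informally, the flow of the exemplary method 300 may be sketched as follows. This is an illustrative outline only, not the claimed implementation: the threshold values are hypothetical, and the knowledge source and the fulfilling action are reduced to placeholder callables.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueHypothesis:
    parse: str                    # candidate interpretation of the dialogue so far
    probability: float            # hypothesis probability
    slots: dict = field(default_factory=dict)

RETENTION_THRESHOLD = 0.60        # hypothetical hypothesis retention threshold
CONFIDENCE_THRESHOLD = 0.90       # hypothetical hypothesis confidence threshold

def evaluate_turn(hypotheses, adjust, act):
    """One turn of the rank / discard / adjust / re-rank / act cycle.

    `adjust` stands in for the knowledge source (hypothesis -> new probability)
    and `act` executes an action fulfilling a hypothesis; both are assumptions.
    """
    # Rank the dialogue hypothesis set by hypothesis probability.
    hypotheses.sort(key=lambda h: h.probability, reverse=True)
    # After the ranking, discard low-ranking hypotheses *before* consulting
    # the knowledge source, so implausible candidates are not reevaluated.
    hypotheses = [h for h in hypotheses if h.probability >= RETENTION_THRESHOLD]
    # After the discarding, adjust the surviving probabilities via the knowledge source.
    for h in hypotheses:
        h.probability = adjust(h)
    # After the adjusting, re-rank the set.
    hypotheses.sort(key=lambda h: h.probability, reverse=True)
    # Execute an action only for a sufficiently confident top hypothesis;
    # otherwise the caller awaits (or prompts for) a further expression.
    if hypotheses and hypotheses[0].probability > CONFIDENCE_THRESHOLD:
        act(hypotheses[0])
    return hypotheses
```

Note that the survivors below the confidence threshold remain in the returned set, which is what later allows a lower-ranked interpretation to be revived by a subsequent expression.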
FIG. 4 presents an illustration of an exemplary second embodiment of the techniques presented herein, illustrated as an exemplary method 400 of evaluating a dialogue 104 with a user 102. The exemplary method 400 may be implemented, e.g., as a set of instructions stored in a memory component (e.g., a memory circuit, a platter of a hard disk drive, a solid-state storage device, or a magnetic or optical disc) of a device having a processor and a dialogue hypothesis set 202, where the instructions, when executed on the processor, cause the device to operate according to the techniques presented herein. The exemplary method 400 begins at 402 and involves executing 404 the instructions on the processor of the device. In particular, the execution of the instructions on the processor causes the device to, for respective expressions 106 of the dialogue 104, apply 408 an expression recognizer (e.g., a speech recognizer or a language recognizer) and a natural language processor to the expression 106. This application enables the device to store 410 in the dialogue hypothesis set 202 at least one dialogue hypothesis 204 for the expression 106, where the respective dialogue hypotheses 204 respectively comprise at least one slot 212 that is associated with a subject 214 of the expression 106, and a hypothesis probability 206; and to, for respective previous dialogue hypotheses 204 in the dialogue hypothesis set 202 that were generated for a previous expression 106 of the dialogue 104, update 412 the subject 214 of the at least one slot 212 of the previous dialogue hypothesis 204.
The execution of the instructions on the processor also causes the device to, using a knowledge source 208, adjust 414 the hypothesis probabilities 206 of the respective dialogue hypotheses 204. The execution of the instructions on the processor also causes the device to rank 416 the dialogue hypothesis set 202 according to the adjusted hypothesis probabilities 206. The execution of the instructions on the processor also causes the device to determine whether a high-ranking dialogue hypothesis 204 exists that has a hypothesis probability 206 exceeding a hypothesis confidence threshold. If so, the execution of the instructions on the processor may cause the device to execute 420 an action 218 fulfilling the high-ranking dialogue hypothesis 204; and if not, then the device may await an additional expression 106 (optionally prompting the user 102 for additional expressions 106 providing more or clarifying information), and may then perform the evaluation 406 of the additional expressions 106. By generating and tracking the hypothesis probabilities 206 of a dialogue hypothesis set 202 in this manner, the execution of the instructions on the processor causes the device to evaluate the dialogue 104 with the user 102 in accordance with the techniques presented herein, and so the exemplary method 400 ends at 422.
FIG. 5 presents an illustration of a third exemplary embodiment of the techniques presented herein, illustrated as an exemplary system 506 for evaluating a dialogue 104 with a user 102. One or more components of the exemplary system 506 may be implemented, e.g., as instructions stored in a memory component of a device 502 that, when executed on a processor 504 of the device 502, cause the device 502 to perform at least a portion of the techniques presented herein. Alternatively (though not shown), one or more components of the exemplary system 506 may be implemented, e.g., as a volatile or nonvolatile logical circuit, such as a particularly designed system-on-a-chip (SoC) or a configuration of a field-programmable gate array (FPGA), that performs at least a portion of the techniques presented herein, such that the interoperation of the components completes the performance of a variant of the techniques presented herein.
The exemplary system 506 includes a dialogue hypothesis set 202, comprising at least two dialogue hypotheses 204 respectively having at least one slot 212 with which a subject 214 of the dialogue 104 may be associated, and a hypothesis probability 206. The exemplary system 506 also includes an expression evaluator 508 that, for the respective expressions 106 of the dialogue 104, applies to the expression 106 a language recognizer (e.g., a speech and/or gesture recognizer 510) that identifies the language elements (e.g., words 110) of the expression 106, and a natural-language parser 512 that organizes the language elements into a parsed expression 114 (e.g., a contextualized arrangement of words 110 in a sequence that matches a parts-of-speech pattern that is typical in the language of the expression 106). The expression evaluator 508 also includes a model-based carry-over comparator 514 that, for respective previous dialogue hypotheses 204 stored in the dialogue hypothesis set 202 in response to previously evaluated expressions 106, updates the subject 214 of the at least one slot 212 of the previous dialogue hypothesis 204. The expression evaluator 508 also includes a dialogue hypothesis generator 516 that stores in the dialogue hypothesis set 202 at least two dialogue hypotheses 204, including the hypothesis probabilities 206 thereof. The expression evaluator 508 also includes a dialogue hypothesis augmenter 518 that, using a knowledge source 208, adjusts the hypothesis probabilities 206 of the respective dialogue hypotheses 204 of the dialogue hypothesis set 202.
The exemplary system also includes a dialogue hypothesis comparator 520, including a dialogue hypothesis ranker 522 that ranks the dialogue hypothesis set 202 according to the hypothesis probabilities 206, and an action selector 524 that, upon identifying a high-ranking dialogue hypothesis 204 having a hypothesis probability 206 that exceeds a hypothesis confidence threshold, executes an action 218 fulfilling the high-ranking dialogue hypothesis 204. In this manner, the architecture and interoperation of the components of the exemplary system 506 of FIG. 5 enable the device 502 to evaluate the dialogue 104 with the user 102 in accordance with the techniques presented herein.
Still another embodiment involves a computer-readable medium comprising processor-executable instructions configured to apply the techniques presented herein. Such computer-readable media may include, e.g., computer-readable storage devices involving a tangible device, such as a memory semiconductor (e.g., a semiconductor utilizing static random access memory (SRAM), dynamic random access memory (DRAM), and/or synchronous dynamic random access memory (SDRAM) technologies), a platter of a hard disk drive, a flash memory device, or a magnetic or optical disc (such as a CD-R, DVD-R, or floppy disc), encoding a set of computer-readable instructions that, when executed by a processor of a device, cause the device to implement the techniques presented herein. Such computer-readable media may also include (as a class of technologies that are distinct from computer-readable storage devices) various types of communications media, such as a signal that may be propagated through various physical phenomena (e.g., an electromagnetic signal, a sound wave signal, or an optical signal) and in various wired scenarios (e.g., via an Ethernet or fiber optic cable) and/or wireless scenarios (e.g., a wireless local area network (WLAN) such as WiFi, a personal area network (PAN) such as Bluetooth, or a cellular or radio network), and which encodes a set of computer-readable instructions that, when executed by a processor of a device, cause the device to implement the techniques presented herein.
An exemplary computer-readable medium that may be devised in these ways is illustrated in FIG. 6, wherein the implementation 600 comprises a computer-readable storage device 602 (e.g., a CD-R, DVD-R, or a platter of a hard disk drive), on which is encoded computer-readable data 604. This computer-readable data 604 in turn comprises a set of computer instructions 606 configured to operate according to the principles set forth herein. In a first such embodiment, the processor-executable instructions 606 may be configured to cause a device to perform a method 608 of configuring a device to evaluate a dialogue 104 with a user 102, such as the exemplary method 300 of FIG. 3 or the exemplary method 400 of FIG. 4. In a second such embodiment, the processor-executable instructions 606 may be configured to implement one or more components of a system for evaluating a dialogue 104 with a user 102, such as the exemplary system 506 of FIG. 5. Some embodiments of this computer-readable medium may comprise a computer-readable storage device (e.g., a hard disk drive, an optical disc, or a flash memory device) that is configured to store processor-executable instructions configured in this manner. Many such computer-readable media may be devised by those of ordinary skill in the art that are configured to operate in accordance with the techniques presented herein.
E. VARIATIONS

The techniques discussed herein may be devised with variations in many aspects, and some variations may present additional advantages and/or reduce disadvantages with respect to other variations of these and other techniques. Moreover, some variations may be implemented in combination, and some combinations may feature additional advantages and/or reduced disadvantages through synergistic cooperation. The variations may be incorporated in various embodiments (e.g., the exemplary method 300 of FIG. 3; the exemplary method 400 of FIG. 4; the exemplary system 506 of FIG. 5; and the exemplary computer-readable storage device 602 of FIG. 6) to confer individual and/or synergistic advantages upon such embodiments.
E1. Scenarios
A first aspect that may vary among embodiments of these techniques relates to the scenarios wherein such techniques may be utilized.
As a first variation of this first aspect, the dialogue evaluation techniques presented herein may be implemented on many types of devices, such as a workstation or server; a laptop, tablet, or palmtop portable computer; a communicator, such as a phone or text messaging device; a media player, such as a portable music player or a television; a gaming device, such as a game console or a portable game player; and/or a wearable computing device, such as an earpiece or a pair of glasses. Additionally, the techniques presented herein may be implemented across a set of devices, such as a client device that receives the expressions 106 from the user 102 and forwards the expressions 106 to a server providing a dialogue evaluation service, which may evaluate the dialogue 104 and indicate to the client device the actions 218 to be executed in fulfillment of the dialogue 104, or a set of peer devices that interoperate to collect and to evaluate the expressions of the user 102 (e.g., a set of devices positioned around a residence or office of the user 102 that enable a continuous and consistent dialogue 104 as the user 102 moves throughout the residence or office).
As a second variation of this first aspect, the respective components of an embodiment of the techniques presented herein (e.g., the expression evaluator 508, the speech and/or gesture recognizer 510, the language parser 512, the model-based carry-over comparator 514, the dialogue hypothesis generator 516, the dialogue hypothesis augmenter 518, the dialogue hypothesis comparator 520, the dialogue hypothesis ranker 522, and/or the action selector 524 in the exemplary system 506 of FIG. 5) may be developed in many ways. As a first such example, such components may comprise a collection of rules developed by users, optionally including the user 102 of the device 502, that perform various aspects of the evaluation of the dialogue 104. As a second such example, such components may include various machine learning techniques, such as artificial neural networks, Bayesian classifiers, and/or genetically derived algorithms, that have been developed through training with annotated training sets. As a third such example, such components may include a "mechanical Turk" aspect, wherein difficult-to-evaluate data sets are forwarded to humans who may respond with the correct results of evaluation to be used by the device for the current evaluation and/or the future evaluation of similar types of expressions 106. Such components may also be implemented as a combination of such techniques, e.g., an artificial neural network that is also constrained by a set of rule-based heuristics.
As a third variation of this first aspect, a device may receive and evaluate expressions 106 provided by the user 102 in a variety of languages, including one or more natural languages (e.g., English, French, and German); one or more regional or contextual language dialects (e.g., a casual speaking style and a formal speaking style); and/or one or more technical languages (e.g., a programming language, or a grammatically constrained language that is adapted for interaction with a particular type of device). Additionally, the expressions 106 may also be provided in a nonverbal language, such as physical language elements expressed with various body parts (e.g., hand signals or body language), and/or an accessibility language that enables interaction with users 102 according to their physical capabilities. The user 102 may also utilize a combination of such languages (e.g., physically pointing at an entry on a display while saying, "show me that one"). The device may therefore comprise, e.g., a camera that detects a physical gesture of the user 102, and a gesture recognizer that identifies an expression 106 indicated by the physical gesture. A device may also include a language identifier that identifies the language of the expression 106, and/or a language translator that translates the expression 106 from the language of the user 102 into a second language that the device is capable of evaluating, and/or that translates textual or vocalized output into the language of the user 102. These and other scenarios and resources may be compatible with and adaptable to various implementations of the dialogue evaluation techniques presented herein.
E2. Language Parsing
A second aspect that may vary among embodiments of the techniques presented herein involves the application of language parsing to an expression 106 of the dialogue 104 with the user 102.
As a first variation of this second aspect, language parsing may be facilitated with reference to a language model, such as a carry-over model that identifies language patterns in the language of the dialogue 104. For example, a language model may indicate that a first expression 106 initiating a request may often be followed by subsequent expressions 106 that modify the request, such as a first expression that alters the one or more subjects 214 (e.g., "show me action movies . . . now how about comedies?"); a second expression that filters the one or more subjects (e.g., "show me action movies . . . show me the second one"); and a third expression that navigates among options ("show me movies . . . show me music . . . let's go back to movies"). The language patterns within these forms of dialogue may inform the language parsing, and may be implemented, e.g., in a model-based carry-over comparator that suggests rules for transitions between expressions 106 and the corresponding transformation of dialogue hypotheses 204.
As a second variation of this second aspect, a device 502 may utilize a slot- and subject-based approach to representing the dialogue hypotheses 204, which may facilitate flexibility in the updating of dialogue hypotheses 204 in response to the evaluation of subsequent expressions 106 of the dialogue 104. For example, upon receiving from the user 102 an expression 106 in the context of the dialogue 104, the device 502 may parse the expression 106 into one or more dialogue hypotheses 204 respectively comprising one or more slots 212 respectively associated with a subject 214 of the expression 106. For example, a language parser 512 may identify the language pattern "subject-verb-object" in the language of the expression 106, and may respectively associate a first noun subject 214, a verb subject 214, and a second noun subject 214 in the corresponding sequence of the expression 106 with the respective slots 212 of the dialogue hypothesis 204. Additionally, a language parser 512 may update a previous dialogue hypothesis 204 by replacing a previous subject 214 of the dialogue with a substitute subject 214. As a first such example, the language parser 512 may replace a current knowledge domain within the knowledge source with an alternative knowledge domain within the knowledge source that is different from the current knowledge domain (e.g., a request for information about a movie, which may be fulfilled by reference to a movie database, may be replaced with a request for information about the musical score in an audio soundtrack of the movie, which may be fulfilled by reference to a music database). As a second such example, the language parser 512 may replace a subject genus within a current knowledge domain of the knowledge source 208 with a subject species within the subject genus within the current knowledge domain (e.g., transitioning from a request for information about movies in the "action" movie genre to a request for information about a specific movie in the "action" movie genre). As a third such example, the language parser 512 may replace a selected action 218 to be applied to a second subject of the dialogue hypothesis 204 (i.e., the action 218 to be executed if the hypothesis probability 206 of the dialogue hypothesis 204 is determined to exceed a hypothesis confidence threshold) with an alternative action that is different from the selected action (e.g., transitioning from a request to show information about a movie to a request to view the movie). These and other language parsing techniques may facilitate the evaluation of the dialogue 104 with the user 102 in accordance with the techniques presented herein, as informally sketched below.
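The sketch below gives one plausible rendering of this slot- and subject-based carry-over; the slot names and values are hypothetical, chosen to echo the genre change from "action" to "comedy" in FIG. 2, and the same substitution operation serves for a domain switch or an action change on a different slot.

```python
from dataclasses import dataclass

@dataclass
class DialogueHypothesis:
    slots: dict          # slot name -> subject, e.g. {"verb": "show", "genre": "action"}
    probability: float   # hypothesis probability

def carry_over(previous: DialogueHypothesis, new_subjects: dict) -> DialogueHypothesis:
    """Update a previous hypothesis by replacing the subjects of matching slots,
    carrying over every slot the new expression does not mention."""
    slots = dict(previous.slots)
    slots.update(new_subjects)           # substitute subjects replace previous ones
    return DialogueHypothesis(slots=slots, probability=previous.probability)

# "show me action movies" followed by "no, comedies":
h = DialogueHypothesis({"verb": "show", "genre": "action", "media": "movies"}, 0.8)
h2 = carry_over(h, {"genre": "comedy"})  # -> show ... comedy movies
# An action change is the same operation applied to a different slot:
h3 = carry_over(h2, {"verb": "play"})    # e.g., "show" replaced by "play"
```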
E3. Dialogue Hypothesis Generation and Ranking
A third aspect that may vary among embodiments of the techniques presented herein relates to the manner of assigning hypothesis probabilities 206 to dialogue hypotheses 204, and/or of ranking the dialogue hypotheses 204 of the dialogue hypothesis set 202.
As a first variation of this third aspect, many types of ranking techniques may be utilized, such as an "N-best" list, a priority queue, a Gaussian distribution, or a histogram (e.g., a histogram identifying trends in the hypothesis probabilities 206 of the respective dialogue hypotheses 204).
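For instance, an "N-best" list can be maintained with a standard heap; this is a generic sketch rather than a structure prescribed by the techniques themselves, and the hypothesis values are illustrative.

```python
import heapq
from collections import namedtuple

Hypothesis = namedtuple("Hypothesis", ["parse", "probability"])

def n_best(hypotheses, n=5):
    """Retain only the N highest-probability dialogue hypotheses."""
    return heapq.nlargest(n, hypotheses, key=lambda h: h.probability)

ranked = n_best([Hypothesis("show me movies", 0.7),
                 Hypothesis("show my movies", 0.4),
                 Hypothesis("show meme ovies", 0.1)], n=2)
```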
As a second variation of this third aspect, many aspects may be used to formulate and/or compare the dialogue hypotheses 204, as well as to estimate the hypothesis probabilities 206. For example, the techniques presented herein may achieve the formulation, estimation, and/or comparison of dialogue hypotheses using discriminative approaches based on a conditional probability distribution among the dialogue hypotheses 204, and/or using generative approaches involving a joint probability distribution of potential dialogue hypotheses 204.
As a third variation of this third aspect, many techniques may be used to assign the hypothesis probability 206 to a dialogue hypothesis 204. For example, a dialogue 104 may comprise at least two expressions 106 of the user 102, and the hypothesis probabilities 206 may be selected and/or updated in view of the sequence of expressions 106 of the dialogue 104 (e.g., the entire sequence, or a recent portion thereof, may be reevaluated to verify that the dialogue hypothesis 204 satisfies the sequence of expressions 106, not just the set of expressions 106 evaluated individually and in isolation).
As a fourth variation of this third aspect, the hypothesis probability 206 of a dialogue hypothesis 204 may be identified in relation to the other dialogue hypotheses 204 (e.g., the current highest-probability dialogue hypothesis 204 of the dialogue hypothesis set 202); in relation to an objective standard (e.g., a 0-to-100 hypothesis probability scale); and/or in relation to a model (e.g., a probability tier or standard deviation range within a hypothesis probability distribution).
As a fifth variation of this third aspect, various techniques may be utilized to determine when a dialogue hypothesis 204 is sufficiently probable that an action 218 is to be executed in fulfillment of the dialogue hypothesis 204 (e.g., when the hypothesis probability 206 of the dialogue hypothesis 204 exceeds a hypothesis confidence threshold; when the hypothesis probability 206 exhibits a sharply positive trend; and/or when the hypothesis probability 206 sufficiently exceeds the hypothesis probabilities 206 of the other dialogue hypotheses 204 by a threshold margin). Alternatively or additionally, various techniques may be utilized to determine when a dialogue hypothesis 204 is sufficiently improbable that the dialogue hypothesis 204 is to be discarded (e.g., when the hypothesis probability 206 of the dialogue hypothesis 204 is reduced below a hypothesis retention threshold; when the hypothesis probability 206 exhibits a sharply negative trend; and/or when the hypothesis probability 206 is sufficiently below the hypothesis probabilities 206 of the other dialogue hypotheses 204 by a threshold margin).
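These execute/retain/discard decisions might be combined along the following lines; the threshold values and the margin test are illustrative placeholders rather than values taken from the present disclosure.

```python
def classify(ranked, confidence=0.90, retention=0.60, margin=0.15):
    """Label each hypothesis in a descending-probability ranking as
    'execute', 'retain', or 'discard'."""
    labels = []
    for i, h in enumerate(ranked):
        runner_up = ranked[i + 1].probability if i + 1 < len(ranked) else 0.0
        if i == 0 and h.probability > confidence and \
                h.probability - runner_up >= margin:
            labels.append("execute")   # confident and clearly ahead of the field
        elif h.probability < retention:
            labels.append("discard")   # below the hypothesis retention threshold
        else:
            labels.append("retain")    # keep for reevaluation against later expressions
    return labels
```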
As a sixth variation of this third aspect, some expressions 106 of the user 102 may directly affect the assignment, adjustment, and/or ranking of the hypothesis probabilities 206 of respective dialogue hypotheses 204. As a first such example, upon identifying an expression 106 of the user 102 that declines a high-ranking dialogue hypothesis 204 (e.g., "not that one"), an embodiment may reduce the hypothesis probability 206 of the high-ranking dialogue hypothesis 204, thereby enabling less probable dialogue hypotheses 204 that may more accurately reflect the intentions of the user 102 to be exposed and/or acted upon. As a second such example, upon identifying at least two high-ranking dialogue hypotheses 204 respectively having hypothesis probabilities 206 that are within a hypothesis proximity range (e.g., a "tie") and that may be difficult to disambiguate, an embodiment may present to the user 102 a disambiguation query (e.g., "did you mean that you want to see comedy movies instead of action movies, or comedy movies that are also action movies?"); and upon receiving a response to the disambiguation query from the user 102, the embodiment may adjust the hypothesis probabilities 206 of the respective dialogue hypotheses 204 in view of the response. Many such variations in the assignment, adjustment, and/or ranking of the hypothesis probabilities 206 of the dialogue hypotheses 204 may be utilized in embodiments of the techniques presented herein.
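A near-tie of the kind just described might be detected and resolved roughly as follows; the proximity range, the probability adjustments, and the `ask_user` callback are all hypothetical.

```python
PROXIMITY_RANGE = 0.05   # hypothetical width of a "tie" between hypotheses

def maybe_disambiguate(ranked, ask_user):
    """If the two top hypotheses are within the proximity range, pose a
    disambiguation query and shift probability toward the confirmed reading."""
    if len(ranked) >= 2 and \
            ranked[0].probability - ranked[1].probability <= PROXIMITY_RANGE:
        chosen = ask_user(ranked[0], ranked[1])   # returns whichever the user confirms
        for h in ranked[:2]:
            h.probability += 0.20 if h is chosen else -0.20
        ranked.sort(key=lambda h: h.probability, reverse=True)
    return ranked
```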
E4. Knowledge Sources
A fourth aspect that may vary among embodiments of the techniques presented herein involves the nature, contents, and uses of the knowledge source 208 of the device(s) in the evaluation of the dialogue 104 with the user 102.
As a first variation of this fourth aspect, the device(s) upon which the techniques are implemented may utilize many types of knowledge sources 208. As a first such example, the knowledge source 208 may include a user profile of the user 102 (e.g., a social network profile), which may indicate interests and tastes in various topics that may arise in the dialogue 104, and may therefore facilitate a more accurate assignment of the hypothesis probabilities 206 of the dialogue hypotheses 204. As a second such example, the knowledge source 208 may include a record of the execution of an earlier action 218 in response to an earlier dialogue 104 with the user 102 (e.g., the types of requests that the user 102 has made in the past, and the actions 218 executed in response to such requests). As a third such example, the knowledge source 208 may include a current environment of the device (e.g., the physical location of the device may provide information that informs the evaluation of the dialogue 104 with the user 102).
As a second variation of this fourth aspect, an embodiment may enable the knowledge source 208 to be expanded by the addition of new knowledge domains; e.g., the device 502 may communicate with a new source of media (e.g., a source of streamed television content) that provides one or more subjects 214 (e.g., the names of television shows) and/or one or more actions 218 (e.g., "play television show"; "describe television show"; and "subscribe to television show"), and may therefore add the subjects 214 and/or actions 218 of the new knowledge domain to the knowledge source 208 in order to expand the dialogue fulfillment capabilities of the device 502.
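One plausible shape for such an expandable knowledge source is a registry keyed by domain, sketched below; the domain name, show title, and action strings are illustrative only.

```python
class KnowledgeSource:
    """A registry of knowledge domains, each contributing subjects that can
    fill slots and actions that can fulfill dialogue hypotheses."""
    def __init__(self):
        self.domains = {}   # domain name -> {"subjects": set, "actions": set}

    def add_domain(self, name, subjects, actions):
        entry = self.domains.setdefault(name, {"subjects": set(), "actions": set()})
        entry["subjects"].update(subjects)
        entry["actions"].update(actions)

ks = KnowledgeSource()
ks.add_domain("streamed television",
              subjects={"Example Show"},   # hypothetical show title
              actions={"play television show", "describe television show",
                       "subscribe to television show"})
```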
As a third variation of this fourth aspect, an embodiment may utilize the knowledge source 208 while performing several elements of the evaluation of the expression 106 and the dialogue 104. As a first such example, the knowledge source 208 may supplement a speech and/or gesture recognizer 510; e.g., a movie database may provide the names and pronunciations of popular movie titles and actor names that may be spoken by a user 102. As a second such example, the knowledge source 208 may supplement a language parser 512; e.g., a movie database may specify language patterns that are associated with queries that may be spoken by a user 102, such as "what movies featured (actor name)?", that may facilitate the organization of language elements into a parsed expression 114. As a third such example, the knowledge source 208 may inform a model-based carry-over comparator 514, the dialogue hypothesis generator 516, and/or the dialogue hypothesis augmenter 518. For example, from a large user profile that describes a large amount of detail about the user 102, the model-based carry-over comparator 514 may identify and distinguish included facts 210 that are relevant to an expression 106, an estimation of the hypothesis probability 206, and/or the ranking of the dialogue hypotheses 204 from excluded facts 210 that are not related to the expression 106, the estimation of hypothesis probabilities 206, and/or the ranking of dialogue hypotheses 204 (e.g., the evaluation of an expression 106 concerning a movie genre may include facts 210 about the movies in the genre that the user 102 has recently viewed, and may exclude facts 210 about the user's interests in movie soundtracks that may not be deemed relevant to the evaluation). These and other techniques for generating and using a knowledge source 208 may be included in variations of the techniques presented herein.
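The included/excluded-fact distinction might reduce a large user profile to just the facts bearing on the current expression, as in the sketch below; the profile layout and topic keys are assumptions made for illustration.

```python
def partition_facts(profile_facts, expression_topics):
    """Split profile facts into those relevant to the expression's topics
    (included facts) and the rest (excluded facts)."""
    included = [f for f in profile_facts if f["topic"] in expression_topics]
    excluded = [f for f in profile_facts if f["topic"] not in expression_topics]
    return included, excluded

profile = [{"topic": "movies", "fact": "recently viewed comedies"},
           {"topic": "soundtracks", "fact": "collects film scores"}]
included, excluded = partition_facts(profile, {"movies"})
# Only the movie-viewing fact informs the ranking; the soundtrack fact is excluded.
```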
E5. Error Recovery
A fifth aspect that may vary among embodiments of these techniques involves the manner of responding to errors that may arise during the evaluation of the expressions 106 and the dialogue 104 with the user 102.
As a first variation of this fifth aspect, if an embodiment identifies an error in response to an action 218 fulfilling a high-ranking dialogue hypothesis 204, the embodiment may reduce the hypothesis probability 206 of the high-ranking dialogue hypothesis 204. For example, if the user provides a request such as "show me the movie Faction," but no such movie is found in a movie database because the identified media is actually a television show, then the hypothesis probability 206 of the high-ranking dialogue hypothesis 204 relating to movies may be reduced in order to expose the lower-ranking but more probable dialogue hypothesis 204 relating to television shows. Alternatively or additionally, the embodiment may, upon identifying the failure while executing the action 218 for the high-ranking dialogue hypothesis 204, report to the user 102 an action error indicating the failure of the action 218 (e.g., "no movies found with the title 'Faction'").
As a second variation of this fifth aspect, an embodiment may respond to different types of errors in a different manner, which may indicate to the user the source of the difficulty in evaluating the dialogue 104. For example, an embodiment may, upon identifying a failure to parse an expression 106 of the dialogue 104, report a parsing error to the user 102 that indicates the failure to parse the expression 106, where the parsing error is different from an action error indicating a failure of an action. Additionally, where the error arises in a speech/gesture recognizer that identifies language elements of the expression 106, the embodiment may present an expression recognizer error indicating to the user 102 a failure to recognize the expression 106; and where the error arises in a language parser that parses the expression elements to generate a dialogue hypothesis 204, the embodiment may present a language parsing error (that is different from the expression recognizer error) indicating to the user 102 the failure to parse the expression 106.
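A staged pipeline can surface these distinct error types naturally. The sketch below is illustrative; the exception names and messages are assumptions, chosen to echo the examples of FIG. 7 that follow.

```python
class RecognizerError(Exception):
    """The expression could not be recognized (e.g., 'I didn't hear you.')."""

class ParsingError(Exception):
    """The recognized words could not be parsed (e.g., 'Please rephrase your request.')."""

class ActionError(Exception):
    """The parsed request could not be fulfilled (e.g., 'That movie is not available.')."""

def evaluate(expression, recognize, parse, act):
    """Run the recognize -> parse -> act pipeline, reporting a different
    error type depending on which stage fails."""
    try:
        words = recognize(expression)
    except Exception as exc:
        raise RecognizerError("I didn't hear you.") from exc
    try:
        hypothesis = parse(words)
    except Exception as exc:
        raise ParsingError("I didn't understand your question; "
                           "please rephrase your request.") from exc
    try:
        return act(hypothesis)
    except Exception as exc:
        raise ActionError("That request is not available.") from exc
```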
FIG. 7 presents an illustration 700 of a set of examples of various types of error messages arising in various components of the dialogue evaluation pipeline. In a first example 710, a first expression 106 is submitted where the user 102 is too far from a microphone, and only portions of words 110 may be detected by a speech recognizer 108. Upon detecting the failure 702 of the speech recognizer 108 to detect the words 110, the embodiment may generate an expression recognizer error 704, such as "I didn't hear you." In a second example 712, a second expression 106 is submitted comprising a set of words 110 that are individually recognizable, but that are not coherent as a phrase of the English language (e.g., "I today for hear comedy very yes"). Upon detecting a success of the speech recognizer 108 but a failure 702 of a language parser 512 to parse the expression 106, the embodiment may present to the user 102 a parsing error 706 (e.g., "I didn't understand your question; please rephrase your request"). In a third example 714, upon receiving a third expression 106 that is both recognizable and parseable but that is not actionable (e.g., a request for a movie for which the embodiment has no information), the embodiment may identify the success of the speech recognizer 108 and the language parser 512 but the failure of the action 218, and may therefore present to the user 102 an action error 708 (e.g., "that movie is not available"). In this manner, the embodiment may notify the user 102 of the type of error encountered while evaluating the dialogue 104 with the user.
FIG. 8 presents an illustration of an exemplary scenario 800 featuring variations in several aspects of the techniques presented herein. At a first time point 804, a user 102 initiates a dialogue 104 with an embodiment using a first expression 106. A dialogue hypothesis set 202 may be generated with at least two dialogue hypotheses 204 respectively having a hypothesis probability 206, but the hypothesis probabilities 206 may be too close to act on either with confidence (e.g., it may not be clear whether the user is asking to see a list of comedies, or is asking about a specific comedy title). The embodiment may therefore present a disambiguation query 802 that prompts the user 102, at a second time point 806, to provide a second expression 106 that disambiguates the dialogue hypotheses 204, i.e., selecting a second dialogue hypothesis 204 (for which the hypothesis probability 206 is increased) over a first dialogue hypothesis 204 (for which the hypothesis probability 206 is reduced). The first dialogue hypothesis 204 may be provisionally retained in the dialogue hypothesis set 202, in case the user 102 changes the dialogue 104 to request the first dialogue hypothesis 204. However, the second dialogue hypothesis 204 may be associated with at least two actions 218, such as a request to view details about the movie (e.g., "I want to see this film" as a general expression of interest, or "I want to see this film" as a request directed to the embodiment to present the film). The embodiment may therefore execute a first action 218, such as presenting a description of the movie in which the user 102 appears to be interested. At a third time point 808, the user 102 may present a third expression 106 requesting a different action 218, such as playing the movie for the user 102, and the embodiment may accordingly adjust the hypothesis probabilities 206 of the dialogue hypotheses 204 and execute the action 218 associated with the dialogue hypothesis 204 having the highest adjusted hypothesis probability 206. Notably, the second and third expressions 106 may be difficult to understand or act upon in isolation; their semantic value may only be evaluated by the system in the context of the dialogue 104 comprising the sequence of expressions including the first expression 106. Various embodiments may incorporate many such variations of the techniques presented herein.
F. COMPUTING ENVIRONMENT
The techniques discussed herein may be devised with variations in many aspects, and some variations may present additional advantages and/or reduce disadvantages with respect to other variations of these and other techniques. Moreover, some variations may be implemented in combination, and some combinations may feature additional advantages and/or reduced disadvantages through synergistic cooperation. The variations may be incorporated in various embodiments to confer individual and/or synergistic advantages upon such embodiments.
FIG. 9 and the following discussion provide a brief, general description of a suitable computing environment to implement embodiments of one or more of the provisions set forth herein. The operating environment of FIG. 9 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment. Example computing devices include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
Although not required, embodiments are described in the general context of “computer readable instructions” being executed by one or more computing devices. Computer readable instructions may be distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
FIG. 9 illustrates an example of a system 900 comprising a computing device 902 configured to implement one or more embodiments provided herein. In one configuration, computing device 902 includes at least one processing unit 906 and memory 908. Depending on the exact configuration and type of computing device, memory 908 may be volatile (such as RAM, for example), non-volatile (such as ROM, flash memory, etc., for example), or some combination of the two. This configuration is illustrated in FIG. 9 by dashed line 904.
In other embodiments, device 902 may include additional features and/or functionality. For example, device 902 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic storage, optical storage, and the like. Such additional storage is illustrated in FIG. 9 by storage 910. In one embodiment, computer readable instructions to implement one or more embodiments provided herein may be in storage 910. Storage 910 may also store other computer readable instructions to implement an operating system, an application program, and the like. Computer readable instructions may be loaded in memory 908 for execution by processing unit 906, for example.
The term “computer readable media” as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data. Memory 908 and storage 910 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 902. Any such computer storage media may be part of device 902.
Device 902 may also include communication connection(s) 916 that allow device 902 to communicate with other devices. Communication connection(s) 916 may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting computing device 902 to other computing devices. Communication connection(s) 916 may include a wired connection or a wireless connection. Communication connection(s) 916 may transmit and/or receive communication media.
The term “computer readable media” may include communication media. Communication media typically embodies computer readable instructions or other data in a “modulated data signal” such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” may include a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
Device 902 may include input device(s) 914 such as a keyboard, mouse, pen, voice input device, touch input device, infrared cameras, video input devices, and/or any other input device. Output device(s) 912 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 902. Input device(s) 914 and output device(s) 912 may be connected to device 902 via a wired connection, wireless connection, or any combination thereof. In one embodiment, an input device or an output device from another computing device may be used as input device(s) 914 or output device(s) 912 for computing device 902.
Components of computing device 902 may be connected by various interconnects, such as a bus. Such interconnects may include a Peripheral Component Interconnect (PCI), such as PCI Express, a Universal Serial Bus (USB), FireWire (IEEE 1394), an optical bus structure, and the like. In another embodiment, components of computing device 902 may be interconnected by a network. For example, memory 908 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, a computing device 920 accessible via network 918 may store computer readable instructions to implement one or more embodiments provided herein. Computing device 902 may access computing device 920 and download a part or all of the computer readable instructions for execution. Alternatively, computing device 902 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at computing device 902 and some at computing device 920.
G. USE OF TERMS
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
As used in this application, the terms “component,” “module,” “system,” “interface,” and the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
Various operations of embodiments are provided herein. In one embodiment, one or more of the operations described may constitute computer readable instructions stored on one or more computer readable media, which, if executed by a computing device, will cause the computing device to perform the operations described. The order in which some or all of the operations are described should not be construed to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein.
Moreover, the word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word exemplary is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims may generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes”, “having”, “has”, “with”, or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising.”