US9348479B2 - Sentiment aware user interface customization - Google Patents

Sentiment aware user interface customization

Info

Publication number
US9348479B2
Authority
US
United States
Prior art keywords
application
user
skin
emotional state
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/315,047
Other versions
US20130152000A1 (en)
Inventor
Weipeng Liu
Matthew Robert Scott
Huihua Hou
Ming Zhou
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing LLC
Priority to US13/315,047
Assigned to MICROSOFT CORPORATION (assignment of assignors interest; see document for details). Assignors: LIU, Weipeng; ZHOU, MING; HOU, HUIHUA; SCOTT, MATTHEW ROBERT
Publication of US20130152000A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC (assignment of assignors interest; see document for details). Assignors: MICROSOFT CORPORATION
Application granted
Publication of US9348479B2
Legal status: Active
Adjusted expiration


Abstract

The customization of an application user interface with a skin package, based on context data that includes the emotional states of a user, may strengthen the user's emotional attachment to the application. The customization includes determining an emotional state of a user that is inputting content into an application. A skin package for the user interface of the application is selected based on the emotional state of the user. The selected skin package is further applied to the user interface of the application.

Description

BACKGROUND
Users of popular applications, such as language input method editor applications, may develop emotional attachments to such applications. A user may express an emotional attachment to an application by customizing the visual appearance of the user interface provided by the application. Such customization is commonly referred to as “skinning”, and may be achieved with the use of custom graphics that alter the appearance of the user interface. Other skinning technologies may include the application of animation and sound to the user interface of the application.
SUMMARY
Described herein are techniques for adaptively applying skins to a user interface of an application based on the emotional sentiment of a user that is using the application. A skin may alter the user's interactive experience with the user interface by supplementing the user interface with custom images, animation and/or sounds. Accordingly, by adaptively applying skins to the user interface, the look and feel of the user interface may be changed to correspond to the user's emotional sentiment throughout the usage of the application by the user.
The emotional state of the user may be detected based in part on content that the user inputs into the application or communication that the user transmits through the application. In this way, the sentiment aware skin customization of the application user interface may strengthen the user's emotional attachment to the application. Accordingly, the user may become or remain a loyal user of the application despite being offered similar applications from other vendors. Sentiment aware skin customization may be applied to a variety of software. Such software may include, but is not limited to, office productivity applications, email applications, instant messaging client applications, media center applications, media player applications, and language input method editor applications. Language input method editor applications may include applications that are used for non-Roman alphabet character inputs, such as inputs of Chinese, Japanese, and/or Korean.
In at least one embodiment, the customization of a user interface of the application includes determining an emotional state of a user that is inputting content into an application. A skin package for the user interface of the application is selected based on the emotional state of the user. The selected skin package is further applied to the user interface of the application.
This Summary is provided to introduce a selection of concepts in a simplified form that is further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference number in different figures indicates similar or identical items.
FIG. 1 is a block diagram that illustrates an example scheme that implements sentiment aware user interface customization.
FIG. 2 is an illustrative diagram that shows the example components of a skin application engine that provides sentiment aware user interface customization.
FIG. 3 shows illustrative user interfaces that are customized by the skin application engine according to emotional sentiments of a user.
FIG. 4 shows an illustrative user interface of a helper application that is customized by the skin application engine according to an emotional sentiment of a user.
FIG. 5 is a flow diagram that illustrates an example process for selecting and applying a skin package to a user interface of the application based on an operation scenario type and an emotional state.
FIG. 6 is a flow diagram that illustrates an example process for classifying context data related to a user into one of multiple predefined emotional states.
FIG. 7 is a flow diagram that illustrates an example process for selecting a skin package for the user interface of the application by considering the confidence values associated with the operation scenario type and the emotional state.
DETAILED DESCRIPTION
The embodiments described herein pertain to techniques for adaptively applying skins to a user interface of an application based on the emotional sentiment of a user that is using the application. A skin may alter the user's interactive experience with the user interface of an application by supplementing the user interface with custom images, animation and/or sounds. Accordingly, by adaptively applying skins to the user interface, the look and feel of the user interface may be changed to correspond to the user's emotional sentiment throughout the usage of the application by the user. The emotional state of the user may be determined from content that the user inputs into the application or communication that the user transmits through the application, in conjunction with other sources of data. The sentiment aware skin customization of the user interface of the application may strengthen the emotional attachment for the application by the user.
Sentiment aware skin customization may be applied to a variety of software. Such software may include, but is not limited to, office productivity applications, email applications, instant messaging client applications, media center applications, media player applications, and language input method editor applications. Language input method editor applications may include applications that are used for non-Roman alphabet character inputs, such as inputs of Chinese, Japanese, and/or Korean. Various examples of techniques for implementing sentiment aware user interface customization in accordance with the embodiments are described below with reference to FIGS. 1-7.
Example Scheme
FIG. 1 is a block diagram that illustrates an example scheme 100 for implementing a skin application engine 102 that performs sentiment aware user interface customization. The skin application engine 102 may be implemented by an electronic device 104. The skin application engine 102 may acquire context data 106. The context data 106 may be acquired from an application 108 that is operating on the electronic device 104, as well as from other sources. The context data 106 may include user inputs of content into the application 108. For example, in a scenario in which the application is an instant message client application, the user inputs may include a current message that a user is typing and/or previous messages that the user has transmitted through the instant message client application.
The context data 106 may further include application specific data and environmental data. The application specific data may include the name and the type of the application, and/or a current state of the application (e.g., idle, receiving input, processing data, outputting data). The environmental data may include data on the real-world environment. For example, the environmental data may include the time and the weather at each time the user inputs content. The environmental data may also concurrently or alternatively include current system software and/or hardware status or events of the electronic device 104. Additionally, the context data 106 may include user status data collected from personal web services used by the user. The collected user status data may provide explicit or implicit clues regarding the emotional state of the user at various times.
Once the skin application engine 102 has acquired the context data 106, the skin application engine 102 may classify the context data 106 into one of multiple predefined emotional states 110, such as the emotional state 112. The predefined emotional states may include emotional states such as happiness, amusement, sadness, anger, disappointment, frustration, curiosity, and so forth. The skin application engine 102 may then select a skin package 114 from the skin package repository 116 that is best suited to the emotional state 112 and an operation scenario type 118 of the application 108. Each of the skin packages in the skin package repository 116 may include images, animation, and/or sound. Accordingly, the selected skin package 114 may provide a full multimedia experience to the user. In some instances, the skin package 114 that is selected by the skin application engine 102 may reflect the emotional state 112. In other instances, the skin application engine 102 may select the skin package 114 to alter the emotional state 112. Subsequently, the skin application engine 102 may apply the selected skin package 114 to the user interface of the application 108. In various embodiments, the skin application engine 102 may apply other skin packages from the skin package repository 116 to the user interface of the application 108 based on changes in the determined emotional state of the user.
Electronic Device Components
FIG. 2 is an illustrative diagram that shows the example components of a skin application engine 102 that provides sentiment aware user interface customization. The skin application engine 102 may be implemented by the electronic device 104. In various embodiments, the electronic device 104 may be a general purpose computer, such as a desktop computer, a tablet computer, a laptop computer, a server, and so forth. However, in other embodiments, the electronic device 104 may be one of a camera, a smart phone, a game console, a personal digital assistant (PDA), or any other electronic device that interacts with a user via a user interface.
The electronic device 104 may include one or more processors 202, memory 204, and/or user controls that enable a user to interact with the electronic device. The memory 204 may be implemented using computer readable media, such as computer storage media. Computer-readable media includes, at least, two types of computer-readable media, namely computer storage media and communication media. Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device. In contrast, communication media may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. As defined herein, computer storage media does not include communication media. The electronic device 104 may have network capabilities. For example, the electronic device 104 may exchange data with other electronic devices (e.g., laptop computers, servers, etc.) via one or more networks, such as the Internet.
The one or more processors 202 and the memory 204 of the electronic device 104 may implement components that include a context collection module 206, a context normalization module 208, a sentiment analysis module 210, an application classification module 212, a skin selection module 214, a skin renderer module 216, a skin design module 218, and a user interface module 220. The memory 204 may also implement a data store 222.
The context collection module 206 may collect the context data 106 from the application 108, the electronic device 104, and/or other sources. The context data 106 may include user inputs of content into the application 108 in a recent time period (e.g., a time period between 10 minutes ago and the current time). For example, in a scenario in which the application is an instant message client application, the user inputs may include a current message that a user is typing and/or previous messages that the user has transmitted through the instant message client application. In another example, the user inputs may be text that is inputted into a word processing application in the recent time period. In various embodiments, the context collection module 206 may extract emotion terms from the user inputs as context data. The emotion terms may be verbs, adjectives, or other descriptors that may explicitly or implicitly reflect the emotional state of the user. In such embodiments, the context collection module 206 may use machine learning techniques, such as natural language processing (NLP), computational linguistics, and/or text analytics to recognize and extract such emotion terms.
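To make the extraction step concrete, the following is a minimal sketch (not part of the patent) of emotion term extraction, assuming a simple keyword lexicon in place of the NLP, computational-linguistics, and text-analytics techniques named above. The lexicon contents and function name are illustrative.

```python
# Illustrative sketch only: the patent describes NLP/text-analytics based
# extraction; a flat keyword lexicon stands in for a trained model here.
EMOTION_LEXICON = {  # hypothetical lexicon, not from the patent
    "happy": "happiness", "thrilled": "happiness",
    "crying": "sadness", "sad": "sadness",
    "furious": "anger", "annoyed": "frustration",
}

def extract_emotion_terms(user_input: str) -> list[tuple[str, str]]:
    """Return (emotion term, implied emotional state) pairs from recent input."""
    tokens = user_input.lower().split()
    return [(t, EMOTION_LEXICON[t]) for t in tokens if t in EMOTION_LEXICON]

print(extract_emotion_terms("I am so happy I am crying"))
# [('happy', 'happiness'), ('crying', 'sadness')]
```

Note that a lexicon lookup alone would mislabel the example above as partly sad; the patent's approach avoids this by weighing these terms together with the other context features, as described for the sentiment analysis module 210 below.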
In some embodiments, the context collection module 206 may have the ability to extract emotion terms from user inputs that are in different languages. In such embodiments, the context collection module 206 may use one of the dictionaries 224 to recognize and translate non-English words or characters that are inputted by the user into standard English words, and then perform the emotion term extraction. However, the emotion term extraction may also be performed by using another language as the standard language in alternative embodiments. For example, the context collection module 206 may perform the translation of user inputs and emotion term extraction according to languages such as Spanish, French, Chinese, Japanese, etc.
The context data 106 that is collected by the context collection module 206 may further include application specific data. The application specific data may include the name and the type of the application. For example, the name of the application may be the designated or the trademark name of the application. The type of the application may be a general product category of the application, e.g., productivity, business communication, social networking, entertainment, etc. The application specific data may also include states of the application in a recent time period. In the example above, the application specific data may include an instant messaging status message (e.g., online, away, busy, etc.), a status of the application (e.g., application recently opened, updated, last used, etc.), and/or so forth.
The context data 106 may further include environmental data. The environmental data may include the time, the weather, and other environmental indices at each time the user inputs content. The context collection module 206 may obtain such environmental data from service applications (e.g., a clock application, a weather monitoring application, etc.) that are installed on the electronic device 104. The environmental data may also include system software and/or hardware status or events of the electronic device 104 in a recent time period. For example, the system status of the electronic device 104 may indicate how recently the electronic device 104 was turned on, the idle time of the electronic device 104 prior to a current user input of content, the current amount and type of system resource utilization, recent system error messages, and/or so forth.
The context data 106 may further include user status data from a recent time period. The context collection module 206 may acquire the user status data from personal web services used by the user. For example, the user status data may include social network service profile information, messages exchanged with other social network members, and/or postings on a blog page or a forum. Such user status data may provide explicit or implicit clues regarding the emotional state of the user at various times. Accordingly, the context collection module 206 may obtain the clues by performing emotion term extraction on the profiles, messages, and/or postings as described above, with the implementation of appropriate language translations.
In some embodiments in which the application 108 is a communication application, the context collection module 206 may be configured to obtain context data related to an interlocutor that is exchanging communications with the user, rather than collecting context data on the user. For example, the communication application may be an instant messaging client application. In such embodiments, an electronic device used by the interlocutor who is exchanging communications with the user via a corresponding communication application may have a counterpart skin application engine installed. The counterpart skin application engine may be similar to the skin application engine 102. Accordingly, the context collection module 206 may be configured to obtain context data, such as the content inputted by the interlocutor, user status, etc., from the counterpart skin application engine. In this way, a skin package that is selected based on the emotional state of the interlocutor may be eventually applied to the user interface of the application 108.
In various embodiments, the context collection module 206 is configured to collect the context data related to a user, such as the application specific data, the environmental data, and the user status data, from the user after obtaining permission from the user. For example, when a user elects to implement the sentiment aware user interface skinning for the application 108, the context collection module 206 may display a dialog box that indicates to the user that personal information is to be collected from the user, identifying each source of information. In this way, the user may be given the opportunity to terminate the implementation of the sentiment aware user interface skinning for the application 108. In some embodiments, after the user consents, the context collection module 206 may display one or more other dialog boxes that further enable the user to selectively allow the context collection module 206 to collect context data from designated sources. For example, the user may allow the context collection module 206 to collect user inputs of content to one or more specific applications, but not user inputs of content into other applications. In another example, the user may allow the context collection module 206 to collect the user inputs and the application specific data, but deny the context collection module 206 permission to collect the user status data. Accordingly, the user may be in control of safeguarding the privacy of the user while enjoying the benefits of the sentiment aware user interface skinning.
The context normalization module 208 may normalize the collected context data, such as the context data 106, into context features. Each of the context features may be expressed as a name value pair. In one instance, a name value pair may be “weather: 1”, in which the value “1” represents that the weather is sunny. In another instance, a name value pair may be “application type: 3”, in which the value “3” represents that the application 108 is an instant messaging client application. In a further instance, a name value pair may be “emotion term: 12434”, in which the emotion term is a word or phrase that the context collection module 206 extracted from a user input. In such an instance, the value “12434” may represent the word “happy”. Accordingly, the context normalization module 208 may continuously receive context data from the context collection module 206, and normalize the context data into context features for analysis by the sentiment analysis module 210.
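As a sketch of this normalization, the name value encodings below (“weather: 1” for sunny, “application type: 3” for an instant messaging client, and 12434 for “happy”) come from the patent's own examples, while the code structure and lookup tables are assumptions for illustration.

```python
# Hypothetical encodings; only the three example pairs are from the patent.
WEATHER_CODES = {"sunny": 1, "rain": 2, "snow": 3}
APP_TYPE_CODES = {"productivity": 1, "email": 2, "instant_messaging": 3}
TERM_IDS = {"happy": 12434}  # "12434" represents "happy" in the patent's example

def normalize_context(raw: dict) -> dict[str, int]:
    """Normalize raw context data into name/value context features."""
    features = {}
    if "weather" in raw:
        features["weather"] = WEATHER_CODES[raw["weather"]]
    if "app_type" in raw:
        features["application type"] = APP_TYPE_CODES[raw["app_type"]]
    for term in raw.get("emotion_terms", []):
        if term in TERM_IDS:
            features["emotion term"] = TERM_IDS[term]
    return features

print(normalize_context({"weather": "sunny", "app_type": "instant_messaging",
                         "emotion_terms": ["happy"]}))
# {'weather': 1, 'application type': 3, 'emotion term': 12434}
```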
The sentiment analysis module 210 may classify the normalized context data, in the form of context features, into one of the predefined emotional states 110. The context data may be the context data 106. The sentiment analysis module 210 may also generate a confidence value for the classification. The classification confidence value may be expressed as a percentage value or a numerical value in a predetermined value scale. For example, the sentiment analysis module 210 may classify a set of context features as corresponding to a predefined emotional state of “happy” with a classification confidence value of “80%”.
In various embodiments, the sentiment analysis module 210 may use one or more machine learning or classification algorithms to classify the context features into one of the predefined emotional states 110 and generate a corresponding classification confidence value. The machine learning algorithms may include supervised learning algorithms, unsupervised learning algorithms, semi-supervised learning algorithms, and/or so forth. The classification algorithms may include support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines, and/or so forth. In other embodiments, the sentiment analysis module 210 may employ one or more of directed and undirected model classification approaches, such as naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and/or other probabilistic classification models to achieve these goals.
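A full learner is out of scope here, but the toy linear scorer below shows the shape of the interface: context features go in, and a predefined emotional state comes out together with a classification confidence value. The weights and the normalization into a confidence value are invented stand-ins for the trained models named above.

```python
# Stand-in for the sentiment classifier: a real implementation would use one
# of the learners named in the text (SVM, naive Bayes, neural network, ...).
PREDEFINED_STATES = ["happy", "amused", "sad", "angry", "frustrated"]

def classify(features: dict, weights: dict) -> tuple[str, float]:
    """Score each predefined emotional state; return (state, confidence)."""
    scores = {
        state: sum(weights.get(state, {}).get(item, 0.0)
                   for item in features.items())
        for state in PREDEFINED_STATES
    }
    total = sum(scores.values()) or 1.0
    best = max(scores, key=scores.get)
    return best, scores[best] / total

# Toy weights keyed on (feature name, value) pairs; entirely illustrative.
weights = {"happy": {("emotion term", 12434): 4.0, ("weather", 1): 1.0}}
print(classify({"emotion term": 12434, "weather": 1}, weights))
# ('happy', 1.0) with these toy weights
```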
The application classification module 212 may determine an operation scenario type 118 of the application 108 using a heuristic engine. Each of the operation scenario types may have a corresponding level of usage formality. The heuristic engine may be periodically updated by an external application information service so that the heuristic engine may stay current on the latest updates and changes to the application 108. Accordingly, the heuristic engine may continuously or periodically poll the application 108 for application information during the usage of the application 108 by a user. The application information may include data such as application process names, field classes, an application object model, and screen pixel information of the output data that is generated by the application and presented on a display. Based on such application information, and using visual interpretation techniques such as optical character recognition (OCR), the heuristic engine may leverage heuristic rules and statistical information to determine that the application 108 is operating in one of multiple operation scenario types. For example, the multiple operation scenario types may include an “online chat” operation scenario type, a “document authoring” operation scenario type, an “email composition” operation scenario type, and so forth.
The heuristic engine of the application classification module 212 may also assign a type confidence value to the classification of the application into an operation scenario type. The type confidence value may be expressed as a percentage value or a numerical value in a predetermined value scale. For example, the application classification module 212 may classify the application 108 into the “online chat” operation scenario type with a type confidence value of “90%”.
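A minimal sketch of this heuristic classification follows, assuming a rule table keyed only on application process names. The process names and confidence numbers are invented; the patent's engine additionally weighs field classes, the application object model, and OCR'd screen pixel information, all omitted here.

```python
# Hypothetical rule table; process names and confidences are illustrative.
SCENARIO_RULES = {
    "chatclient.exe": ("online chat", 0.90),
    "wordprocessor.exe": ("document authoring", 0.85),
    "mailer.exe": ("email composition", 0.80),
}

def classify_scenario(process_name: str) -> tuple[str, float]:
    """Return (operation scenario type, type confidence value) for a process."""
    return SCENARIO_RULES.get(process_name, ("unknown", 0.0))

print(classify_scenario("chatclient.exe"))  # ('online chat', 0.9)
```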
The skin selection module 214 may select a skin package from the skin package repository 116 based on the determined emotional state of the user and the determined operation scenario type of the application 108, as well as their respective confidence values. In various embodiments, the skin selection module 214 may assess whether the classification confidence value of a classified emotional state meets a corresponding predefined confidence threshold. If the classification confidence value of the emotional state is below the predefined confidence threshold, the skin selection module 214 may consider the emotional state of the user as unknown. However, if the classification confidence value of the emotional state meets or is above the predefined confidence threshold, the skin selection module 214 may determine that the user is in the emotional state.
Likewise, the skin selection module 214 may assess whether the type confidence value of a classified operation scenario type of the application 108 meets a corresponding predefined confidence threshold. If the type confidence value of the operation scenario type is below the predefined confidence threshold, the skin selection module 214 may consider the operation scenario type of the application 108 as unknown. However, if the type confidence value of the operation scenario type meets or is above the predefined confidence threshold, the skin selection module 214 may determine that the application 108 has the operation scenario type.
Accordingly, once the skin selection module 214 has determined the emotional state, the skin selection module 214 may select a skin package that is mapped to the emotional state. In some embodiments, the skin package selected by the skin selection module 214 may correspond to the emotional state. For example, a “happy” skin package that shows cheerful images may be selected by the skin selection module 214 when the emotional state of the user of the application 108 is classified as “happy.” In other embodiments, the skin selection module 214 may be configured to select a skin package to alter the emotional state of the user. For example, when the emotional state of the user is classified as “sad”, the skin selection module 214 may select the “happy” skin package as a way to cheer up the user.
The selection of the skin package may be further based on the determined operation scenario type. Such selection of a skin package may be implemented when there are multiple skin packages with different levels of usage formality mapped to the same emotional state. For example, when the emotional state of the user is determined to be “happy”, the skin selection module 214 may select a more formal “happy” skin package when the determined operation scenario type of the application 108 is “document authoring.” In contrast, the skin selection module 214 may select a less formal “happy” skin package for the “happy” emotional state when the determined operation scenario type of the application 108 is “online chat”. The usage formality of a skin package may refer to the appropriateness of the skin package content (e.g., images, sounds, animation) in different social contexts. For instance, a more formal skin package is more likely to be acceptable in a professional environment but may be perceived as awkward or out of place in a casual social environment. In contrast, a less formal skin package is less likely to be acceptable in a professional social environment, but is more likely to be acceptable in a casual social environment. In some embodiments, when the operation scenario type of the application 108 is determined to be unknown, and there are multiple skin packages that correspond to the determined emotional state, the skin selection module 214 may select the most formal skin package that corresponds to the emotional state.
The mapping of skin packages in the skin package repository 116 to emotional states may enable the skin selection module 214 to select skin packages as described above. The mapping may be stored in the metadata of each skin package. In some instances, a single skin package may be mapped to multiple emotional states. For example, a “happy” skin package may be mapped to both the “happy” emotional state and the “amused” emotional state. Thus, such a “happy” skin package may be selected by the skin selection module 214 for either of the emotional states. In other instances, a single emotional state may be mapped to multiple skin packages. For example, as described above, two “happy” skin packages with different levels of usage formality may be mapped to the same “happy” emotional state. In additional instances, a combination of the above mappings of skin packages to emotional states may be present in the skin package repository 116.
In some embodiments, there may be a designated default skin package that is selected by the skin selection module 214 when the emotional state of the user is ascertained to be unknown, such as in a scenario in which a classified emotional state has a low confidence value. The skin selection module 214 may also select the default skin package when no skin package has been mapped to a particular determined emotional state of the user. The default skin package may include neutral content that may be suitable for various emotional states.
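Pulling the preceding paragraphs together, the following sketch shows one way the selection logic could be arranged: thresholding both confidence values, matching on emotional state and scenario, falling back to the most formal match when the scenario is unknown, and to a default package when the state is unknown or unmapped. The repository layout, threshold values, and field names are all assumptions.

```python
def select_skin(emotional_state, emotion_conf, scenario, scenario_conf,
                repository, emotion_threshold=0.7, scenario_threshold=0.7):
    """Pick a skin package from classified state/scenario and confidences."""
    if emotion_conf < emotion_threshold:
        emotional_state = "unknown"          # low confidence: state unknown
    if scenario_conf < scenario_threshold:
        scenario = "unknown"                 # low confidence: scenario unknown

    candidates = [p for p in repository if emotional_state in p["emotions"]]
    if not candidates:                       # unknown or unmapped state
        return next(p for p in repository if p["name"] == "default")
    if scenario != "unknown":
        for pkg in candidates:
            if pkg["scenario"] == scenario:
                return pkg
    # Unknown scenario: fall back to the most formal matching package.
    return max(candidates, key=lambda p: p["formality"])

repository = [  # illustrative repository contents
    {"name": "default", "emotions": [], "scenario": None, "formality": 1.0},
    {"name": "happy-formal", "emotions": ["happy", "amused"],
     "scenario": "document authoring", "formality": 0.9},
    {"name": "happy-casual", "emotions": ["happy"],
     "scenario": "online chat", "formality": 0.2},
]
print(select_skin("happy", 0.8, "online chat", 0.9, repository)["name"])
# happy-casual
```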
It will be appreciated that since the skin selection module 214 takes the classification confidence value of a classified emotional state into consideration when selecting a skin package, abrupt or unwarranted changes in skin selection may be reduced. Accordingly, the confidence threshold used by the skin selection module 214 may be adjusted to balance the timeliness of changes in user interface appearance in response to emotional state detection against the annoyance that frequent user interface appearance changes may bring to the user.
The skin renderer module 216 may apply the skin package selected by the skin selection module 214 to the user interface of the application 108. For example, the skin renderer module 216 may apply the skin package 114 to the user interface. The skin package 114 may include images, sounds, and/or animation that provide a rich multimedia emotional experience for the user. Thus, the application of the skin package 114 may change the user interface appearance of the application 108, as well as provide additional features that were previously unavailable in the user interface of the application 108. Such additional features may include the ability to play certain sounds or animate a particular portion of the user interface.
In some embodiments, the skin package 114 may include a sentiment engine that plays different sounds and/or displays different animations based on the emotion terms detected by the context collection module 206. For example, when the context collection module 206 informs the sentiment engine that the user has inputted the word “happy” into the application 108, the sentiment engine may cause the applied skin to play a laughter sound track and/or move an animated smiley face across the user interface of the application 108. In other words, the sentiment engine that is included in the skin package 114 may leverage functions of the skin application engine 102 (e.g., the context collection module 206) to enhance the features provided by the skin package 114.
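As a sketch of a skin package's sentiment engine, the snippet below reacts to emotion terms reported by the context collection module. The asset names, reaction table, and renderer callbacks are invented for illustration.

```python
# Hypothetical reaction table mapping emotion terms to skin assets.
REACTIONS = {
    "happy": {"sound": "laughter.wav", "animation": "smiley_walk"},
    "sad":   {"sound": "rain.wav", "animation": "teardrop"},
}

class SentimentEngine:
    """Per-skin engine that reacts to detected emotion terms."""

    def __init__(self, play_sound, run_animation):
        # Callbacks would be supplied by the skin renderer module.
        self.play_sound = play_sound
        self.run_animation = run_animation

    def on_emotion_term(self, term: str) -> None:
        reaction = REACTIONS.get(term)
        if reaction:
            self.play_sound(reaction["sound"])
            self.run_animation(reaction["animation"])

engine = SentimentEngine(print, print)
engine.on_emotion_term("happy")  # "plays" laughter.wav, "runs" smiley_walk
```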
In certain embodiments, the images and animations that are provided by the skin package 114 may be displayed outside of the user interface of the application 108. For example, when the user interface of the application 108 is a window that occupies a portion of a displayed desktop work area, an animation in the skin package 114 may dance across the entire width of the desktop work area, rather than just the portion occupied by the user interface. In another example, an image in the skin package 114 may protrude from the user interface of the application 108, or otherwise modify the boundaries of the user interface of the application 108.
The skin design module 218 may enable the user to design skin packages. In various embodiments, the skin design module 218 may include a skin design assistant functionality. The assistant functionality may present the user with a sequence of user interface dialog boxes and/or skin design templates that lead the user through a series of steps for designing a skin package. In various instances, the assistant functionality may enable the user to create images, animation, and/or sounds, and then integrate the created content into a particular skin package. Alternatively or concurrently, the assistant functionality may enable the user to associate images, animation, and/or sounds selected from a library of such content to create the particular skin package. In some instances, the assistant functionality may also enable the user to incorporate a sentiment engine in the skin package. The assistant functionality may further enable the user to input metadata regarding each created skin package.
The metadata inputted for a created skin package may map the skin package to a corresponding emotional state (e.g., happy, sad, etc.) and/or a corresponding operation scenario type (e.g., document authoring, online chat, etc.). In some instances, the metadata inputted for the created skin package may also include configuration data that enables a sentiment engine that is included in the skin package to play different sounds or display different animations based on the emotion terms detected by the context collection module 206. The inputted metadata may be saved as a part of the created skin package. For example, the metadata may be saved as an extensible markup language (XML) file that is included in the created skin package.
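The patent only says that the metadata may be saved as an XML file within the skin package; it does not specify a schema. The following is a purely hypothetical layout showing how such metadata could encode the emotional state mapping, the operation scenario type, and sentiment engine reactions, read back here with Python's standard XML parser.

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata schema; element and attribute names are invented.
SKIN_METADATA = """
<skin name="happy-casual">
  <emotions><emotion>happy</emotion><emotion>amused</emotion></emotions>
  <scenario>online chat</scenario>
  <reactions>
    <reaction term="happy" sound="laughter.wav" animation="smiley_walk"/>
  </reactions>
</skin>
"""

root = ET.fromstring(SKIN_METADATA)
emotions = [e.text for e in root.iter("emotion")]
print(root.get("name"), emotions)  # happy-casual ['happy', 'amused']
```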
The user interface module 220 may enable the user to interact with the modules of the skin application engine 102 using a user interface (not shown). The user interface may include a data output device (e.g., visual display, audio speakers), and one or more data input devices. The data input devices may include, but are not limited to, combinations of one or more of keypads, keyboards, mouse devices, touch screens, microphones, speech recognition packages, and any other suitable devices or other electronic/software selection methods.
In some embodiments, the user may adjust the threshold values used by the skin selection module 214 via the user interface module 220. Further, the user interface module 220 may provide a settings menu. The settings menu may be used to adjust whether the skin selection module 214 is to select a skin package that corresponds to the emotional state of the user or a skin package that alters the emotional state of the user. The user interface module 220 may also enable the user to specify through one or more dialog boxes the type of context data (e.g., user inputs, environmental data, application specific data, etc.) that the user allows the context collection module 206 to collect, and/or one or more applications from which user inputs may be collected. In other embodiments, the user interface module 220 may display the user interface of the skin design module 218.
In other embodiments, the user interface module 220 may enable the user to select skin packages from a skin package library 226 that resides on a server 228, and download the skin packages to the electronic device 104 via a network 230. For example, the skin package library 226 may be a part of an online store, and the user may purchase or otherwise acquire the skin packages from the online store. The downloaded skin packages may be stored in the skin package repository 116. The network 230 may be a local area network (“LAN”), a larger network such as a wide area network (“WAN”), and/or a collection of networks, such as the Internet. Protocols for network communication, such as TCP/IP, may be used to implement the network 230.
The data store 222 may store the dictionaries 224 that are used by the context collection module 206. Additionally, the data store 222 may also store applications 232 that may be skinned by the skin application engine 102. The applications 232 may include the application 108. Further, the skin package repository 116 may be stored in the data store 222. The data store 222 may further store additional data or other intermediate products that are generated or used by various components of the skin application engine 102, such as the context data 106, the operation scenario types 234, and the predefined emotional states 110.
While the context normalization module 208, the sentiment analysis module 210, and the skin selection module 214 are described above as being implemented on the electronic device 104, such modules may also be implemented on a server in other embodiments. For example, the server may be the networked server 228, or any server that is part of a computing cloud. In other words, the analysis of context data and the selection of an emotional skin package may be performed by a computing device that is separate from the electronic device 104. Likewise, while the skin design module 218 is described above as being part of the skin application engine 102, the skin design module 218 may be a standalone skin design application in other embodiments. The standalone skin design application may be implemented on another computing device.
FIG. 3 shows illustrative user interfaces that are customized by the skin application engine according to emotional sentiments of a user. Each of the user interfaces 302 and 304 may be a user interface for an instant messaging client application. The user interface 302 may include a message input portion 306 that displays messages entered by a user, and a response message portion 308 that displays messages entered by an interlocutor that is chatting with the user. The skin application engine 102 may determine, based on the context data in this scenario, to apply a “happy” skin package to the user interface 302. The context data may include the content the user inputted into the message input portion 306, among other context information. The “happy” skin package may include cheerful and upbeat images and animations. In some embodiments, the “happy” skin package may also include cheerful and upbeat sounds.
As shown by the user interface 302, because the sentiment analysis module 210 is capable of using the normalized context data 106 rather than relying solely on user inputted content to determine an emotional state of the user, the sentiment analysis module 210 may accurately detect the emotional state of the user in many scenarios. For example, the skin application engine 102 may classify the emotional state of the user as “happy” despite the user's input of the emotion term “crying” in the message input portion 306. In contrast, a conventional keyword-based sentiment analysis engine may have determined from the presence of the word “crying” that the emotional state of the user is “sad”.
Likewise, the user interface 304 may include a message input portion 310 that displays messages entered by a user, and a response message portion 312 that displays messages entered by an interlocutor that is chatting with the user. In contrast to the example above, the skin application engine 102 may determine, based on the context data in this scenario, to apply a “sad” skin package to the user interface 304. The context data may include the content the user inputted into the message input portion 310, among other context information. As shown, the “sad” skin package may include somber and sympathetic images and animations. In some embodiments, the “sad” skin package may also include somber and sympathetic sounds. Nevertheless, in other embodiments, the skin application engine 102 may apply a different skin package (e.g., a happy skin package) to the user interface 304 for the purpose of altering the emotional state of the user.
FIG. 4 shows an illustrative user interface of a helper application 402 that is customized by the skin application engine 102 according to an emotional sentiment of a user. The helper application 402 may be a language input method editor that runs cooperatively with another application, such as a principal application 404, to enable the input of non-Roman alphabet characters into the principal application 404. For example, the language input method editor may enable the input of Chinese, Japanese, Korean, or Indic characters into the principal application 404. The principal application 404 may be an instant messaging client application, a word processing application, an email application, etc. In some embodiments, the helper application may be installed on and executed from the electronic device 104. In other embodiments, the helper application 402 may be a cloud-based application that may interact with the principal application 404 without being installed on the electronic device 104.
The skin application engine 102 may customize the user interface 406 of the helper application 402 with a skin package 408 based on context data. The context data may include context information that is related to the helper application 402, the principal application 404, and/or a combination of both applications. For example, the context data may include content that the user inputted into the principal application 404, the helper application 402, or content that the user inputted into both applications.
In some embodiments, the skin package 408 that is applied to the user interface 406 may include an image 410 that protrudes from the user interface 406. Accordingly, the skin package 408 may modify the boundaries of the user interface 406. The skin package 408 may also include an animation 412 and a sound 414.
Example Processes
FIGS. 5-7 describe various example processes for implementing sentiment aware user interface customization. The order in which the operations are described in each example process is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement each process. Moreover, the operations in each of FIGS. 5-7 may be implemented in hardware, software, or a combination thereof. In the context of software, the operations represent computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and so forth that cause particular functions to be performed or particular abstract data types to be implemented.
FIG. 5 is a flow diagram that illustrates an example process 500 for selecting and applying a skin package to a user interface of the application based on an operation scenario type and an emotional state. At block 502, the sentiment analysis module 210 of the skin application engine 102 may determine an emotional state of a user based on received context data 106 associated with the user. The associated context data 106 may include content that the user inputs into the application 108 or communication that the user transmits through the application 108, in conjunction with other sources of data. As a part of the emotional state determination, the sentiment analysis module 210 may further assign a classification confidence value to the determined emotional state.
At block 504, the skin application engine 102 may ascertain an operation scenario type of the application 108. In various embodiments, the application classification module 212 of the skin application engine 102 may continuously or periodically poll the application 108 for application information during the usage of the application 108 by the user. The application information may include data such as application process names, field classes, an application object model, and screen pixel information. Based on such application information, the application classification module 212 may leverage heuristic rules and statistical information to determine that the application 108 is operating in one of the multiple operation scenario types. As a part of this determination, the application classification module 212 may further assign a type confidence value to the determined operation scenario type.
At block 506, the skin application engine 102 may select a skin package from the skin package repository 116 for a user interface of the application. In various embodiments, the skin selection module 214 of the skin application engine 102 may make the selection based on at least one of the determined emotional state of the user and the operation scenario type of the application, as well as their respective confidence values. In some instances, the skin package that is selected by the skin application engine 102 may reflect the determined emotional state. In other instances, the skin application engine 102 may select the skin package to alter the emotional state of the user. The selected skin package may include images, animation, and/or sound that provide a full multimedia experience to the user.
At block 508, the skin application engine 102 may apply the selected skin package to the application 108. Thus, the skin package may change the user interface appearance of the application 108, as well as provide additional features that were previously unavailable in the user interface of the application 108. Such additional features may include the ability to play certain sounds or animate a particular portion of the user interface. Subsequently, the process 500 may loop back to block 502 so that the skin application engine 102 may reassess and determine the emotional state of the user based on newly received context data, and apply a new skin package if the emotional state of the user changes.
In some embodiments, rather than determining the emotional state of the user, the skin application engine 102 may obtain an emotional state of an interlocutor that is engaged in an online interaction with the user. Accordingly, the skin package selected for the application 108 may be based on the received context data associated with the interlocutor.
FIG. 6 is a flow diagram that illustrates an example process 600 for classifying context data related to a user into one of multiple predefined emotional states. The process 600 may further describe block 502 of the process 500.
At block 602, the context collection module 206 may collect context data 106 associated with a user. The associated context data may include content that the user inputs into the application 108 or communication that the user transmits through the application 108, in conjunction with other sources of data in a recent time period. The other sources may include application specific data, environmental data, and/or user status data from a recent time period. Each of the recent time periods may have a predetermined duration. The types and/or sources of context data that the context collection module 206 collects may be configured by the user.
At block 604, the context normalization module 208 may normalize the context data 106 into context features. Each of the context features may be expressed as a name value pair. In one instance, a name value pair may be “weather: 1”, in which the value “1” represents that the weather is sunny. In another instance, the name value pair may be “application type: 3”, in which the value “3” represents that the application 108 is an instant messaging client application.
At block 606, the sentiment analysis module 210 may classify the context features into one of the multiple predefined emotional states. The sentiment analysis module 210 may also generate a classification confidence value for the classification. The classification confidence value may be expressed as a percentage value or a numerical value in a predetermined value scale. In various embodiments, the sentiment analysis module 210 may use one or more machine learning and/or classification algorithms to classify the context features into one of the predefined emotional states 110 and generate a corresponding classification confidence value.
FIG. 7 is a flow diagram that illustrates an example process 700 for selecting a skin package for the user interface of the application by considering the confidence values associated with the operation scenario type and the emotional state. The process 700 may further describe block 506 of the process 500.
At block 702, the skin selection module 214 may assess the classification confidence value of a classified emotional state. The classified emotional state may have been selected from multiple predefined emotional states 110 by the sentiment analysis module 210 based on the context data 106 related to a user. Accordingly, if the skin selection module 214 determines that the classification confidence value does not meet a corresponding predefined confidence threshold (“no” at decision block 704), the process 700 may continue to block 706. At block 706, the skin selection module 214 may determine that the emotional state of the user is unknown. Accordingly, the skin selection module 214 may select an emotionally neutral skin package for an application.
In some embodiments, the emotionally neutral skin package selected by the skin selection module 214 may be a skin package that corresponds to the operation scenario type of the application 108. In such embodiments, the emotionally neutral skin package may be selected from multiple emotionally neutral skin packages.
Returning to decision block 704, if the skin selection module 214 determines that the classification confidence value at least meets the corresponding predefined confidence threshold (“yes” at decision block 704), the process 700 may continue to block 708.
At block 708, the skin selection module 214 may assess the type confidence value of the determined operation scenario type of the application 108. The operation scenario type of the application 108 may be determined from application information such as application process names, field classes, an application object model, and screen pixel information. Accordingly, if the skin selection module 214 determines that the type confidence value at least meets a corresponding predefined confidence threshold (“yes” at decision block 710), the process 700 may continue to block 712.
At block 712, the skin selection module 214 may select a skin package for the emotional state and the operation scenario type of the application 108. In some instances, the skin package that is selected by the skin selection module 214 may reflect the emotional state. In other instances, the selected skin package may alter the emotional state.
Returning to decision block 710, if the skin selection module 214 determines that the type confidence value does not meet a corresponding predefined confidence threshold (“no” at decision block 710), the process 700 may continue to block 714. At block 714, the skin selection module 214 may select a default skin package for the emotional state. The default skin package may be a skin package that corresponds to or alters the emotional state, but which is one of the most formal skin packages.
The sentiment aware customization of the user interface of an application with a skin package, based on context data that includes an emotional state of a user, may strengthen the user's emotional attachment to the application. Accordingly, the user may become or remain a loyal user of the application despite being offered similar applications from other vendors. Further, the sentiment aware customization may be applied to a variety of software. Such software may include, but is not limited to, office productivity applications, email applications, instant messaging client applications, media center applications, media player applications, and language input method editor applications.
CONCLUSION
In closing, although the various embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended representations is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as exemplary forms of implementing the claimed subject matter.

Claims (19)

What is claimed is:
1. A computer storage medium storing computer executable instructions that, when executed, cause one or more processors to perform acts comprising:
receiving a user selection, the user selection identifying:
one or more sources from which context data is to be used for generating emotional state determinations, the one or more sources including an application, and
one or more other sources from which context data is not to be used for making emotional state determinations;
determining an emotional state of a user that is inputting content into the application, the content including inputted textual communication for transmission to another application through the application as part of an online chat session, and the emotional state being determined based at least partly on at least one word or phrase in the textual communication being associated with the emotional state;
determining a classification confidence value for the determination of the emotional state of the user;
selecting a skin package for a user interface of the application based at least on the classification confidence value for the emotional state meeting a confidence value threshold; and
applying the skin package to the user interface of the application to at least alter an appearance of the user interface.
2. The computer storage medium of claim 1, further comprising ascertaining an operation scenario type of the application, the operation scenario type corresponding to a usage formality of the application, wherein the selecting includes selecting a skin package for a user interface of the application based on the emotional state and the operation scenario type.
3. The computer storage medium of claim 2, wherein the ascertaining includes:
polling the application for application information that includes one or more of application process names, field classes, an application object model, and screen pixel information of output data generated by the application; and
classifying the application as operating in one of multiple operation scenario types and generating a type confidence value for the classifying based at least on the application information.
4. The computer storage medium of claim 3, wherein the skin package is one of a first skin package or a second skin package, and wherein selecting includes:
selecting the first skin package that corresponds to the emotional state and a usage formality of the application indicated by the operation scenario type of the application when the type confidence value at least meets a corresponding predefined confidence threshold; and selecting the second skin package that corresponds to the emotional state and a most formal usage formality when the type confidence value is below the corresponding predefined confidence threshold.
5. The computer storage medium of claim 2, wherein the operation scenario type is one of a plurality of operation scenario types that include an online chat scenario and a document authoring scenario.
6. The computer storage medium of claim 5, wherein the determining includes: normalizing the context data into context features that are expressed as name value pairs.
7. The computer storage medium of claim 6, wherein the skin package for a user interface is a first skin package for a user interface, and wherein the selecting includes:
selecting a second skin package that corresponds to a neutral emotional state when the classification confidence value is below the confidence value threshold.
8. The computer storage medium of claim 5, wherein the context data includes at least one of application specific data, environmental data, or user status data.
9. The computer storage medium of claim 8, wherein the environmental data includes at least one of data on a real-world environment, a software status or event of an electronic device that is executing the application, or a hardware status or event of the electronic device.
10. The computer storage medium of claim 8, wherein the user status data includes data collected from personal web services used by the user that provide implicit or explicit clues regarding the emotional state of the user.
11. The computer storage medium of claim 1, wherein the skin package includes an image and at least one of a sound or an animation.
12. The computer storage medium of claim 11, wherein the animation is displayed according to an emotion term inputted into the application by the user.
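Claim 12's term-triggered animation could look like the following sketch, where the term-to-asset mapping is purely illustrative:

```python
# Hypothetical sketch of claim 12: display an animation keyed to an emotion
# term the user types; the term-to-asset mapping is purely illustrative.
EMOTION_TERM_ANIMATIONS = {"haha": "laugh.gif", "sigh": "rain.gif"}

def animation_for_input(token: str) -> str | None:
    """Return the animation asset for an inputted emotion term, if any."""
    return EMOTION_TERM_ANIMATIONS.get(token.lower())

print(animation_for_input("Haha"))  # -> laugh.gif
```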
13. The computer storage medium of claim 1, wherein selecting a skin package for a user interface of the application is further based at least on an emotional state of a second user associated with the user.
14. A computer-implemented method, comprising: receiving a user selection, the user selection identifying:
one or more sources from which context data is to be acquired, the one or more sources including a principal application, and
one or more other sources from which context data is not to be used for making emotional state determinations;
determining, based at least in part on the context data, an emotional state of a user that is inputting content into the principal application;
ascertaining a level of formality associated with a current operation scenario;
selecting a skin package for a user interface of a language input method editor application that is executing cooperatively with the principal application, based on the emotional state and the level of formality associated with the current operation scenario; and
applying the skin package to the user interface of the language input method editor application to alter the user interface, the skin package including an image and at least one of a sound and an animation.
15. The computer-implemented method of claim 14, wherein the selecting includes selecting the skin package to reflect or alter the emotional state of the user.
16. The computer-implemented method of claim 14, wherein the principal application is a communication application, further comprising applying the skin package to an additional user interface of another communication application that is used to send communication to the communication application.
17. The computer-implemented method of claim 14, wherein the determining includes:
translating content that is inputted into the principal application from a first language into a second language; and
determining the emotional state of the user based on context data that includes the content that is in the second language.
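Claim 17's translate-then-classify flow, sketched under the assumption of a toy Chinese-to-English lexicon standing in for a real machine-translation service; classify_emotion refers back to the claim 1 sketch above:

```python
# Hypothetical sketch of claim 17: translate inputted content from a first
# language into a second language, then determine the emotional state from
# the translated text. A toy lexicon stands in for machine translation, and
# classify_emotion is the keyword classifier from the claim 1 sketch.
TOY_LEXICON = {"太好了": "awesome", "难过": "unhappy"}

def translate(text: str) -> str:
    return " ".join(TOY_LEXICON.get(token, token) for token in text.split())

def emotion_from_translated(text: str) -> tuple[str, float]:
    return classify_emotion(translate(text))  # context data in the 2nd language

print(emotion_from_translated("太好了"))  # -> ('happy', 1.0)
```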
18. A computing device, comprising:
one or more processors; and
a memory that includes a plurality of computer-executable components, the plurality of computer-executable components comprising:
a sentiment analysis component that:
receives a user selection, the user selection identifying:
one or more sources from which context data is to be used, the one or more sources including an application, and
one or more other sources from which context data is not to be used for making emotional state determinations;
determines an emotional state of a user that is inputting content into the application based on the context data, the context data including content inputted into the application by the user or communication that the user transmitted through the application; and
determines a classification confidence value for the determination of the emotional state of the user;
an application classification component that ascertains a social context for an operation of the application, the social context being selected from multiple social contexts that include a professional environment and a casual social environment;
a skin selection component that selects a skin package for a user interface of the application based on the emotional state and the social context for the operation of the application, the skin package including an image and at least one of a sound and an animation; and
a skin renderer component that applies the skin package to the user interface of the application.
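The four components recited in claim 18 map naturally onto four collaborating classes. The sketch below mirrors the component names from the claim, but every method body is an illustrative stub rather than the patented implementation:

```python
# Hypothetical sketch of the four components recited in claim 18; the class
# names mirror the claim, but the method bodies are illustrative stubs.
class SentimentAnalysisComponent:
    def determine(self, context_data: str) -> tuple[str, float]:
        # stand-in for the keyword classifier sketched under claim 1
        return ("happy", 0.8) if "yay" in context_data else ("neutral", 0.0)

class ApplicationClassificationComponent:
    def social_context(self, app_info: dict) -> str:
        process = app_info.get("process_name", "")
        return "professional" if "office" in process else "casual"

class SkinSelectionComponent:
    def select(self, emotion: str, social_context: str) -> str:
        return f"{emotion}_{social_context}_skin"  # image plus sound/animation

class SkinRendererComponent:
    def apply(self, skin_package: str) -> None:
        print(f"applying {skin_package} to the application user interface")

# Wiring the components together:
emotion, _conf = SentimentAnalysisComponent().determine("yay, shipped it!")
context = ApplicationClassificationComponent().social_context({"process_name": "chat"})
SkinRendererComponent().apply(SkinSelectionComponent().select(emotion, context))
```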
19. The computing device of claim 18, further comprising a skin design component that enables a designer to create the skin package using at least one of an assistant functionality or one or more skin templates.
US13/315,047, priority date 2011-12-08, filed 2011-12-08: Sentiment aware user interface customization. Status: Active, anticipated expiration 2032-08-08. Granted as US9348479B2 (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US13/315,047 | 2011-12-08 | 2011-12-08 | Sentiment aware user interface customization


Publications (2)

Publication Number | Publication Date
US20130152000A1 (en) | 2013-06-13
US9348479B2 (en) | 2016-05-24

Family

ID=48573236

Family Applications (1)

Application Number | Priority Date | Filing Date | Status
US13/315,047 (granted as US9348479B2 (en)) | 2011-12-08 | 2011-12-08 | Active, anticipated expiration 2032-08-08

Country Status (1)

Country | Link
US (1) | US9348479B2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20150263997A1 (en)*2014-03-142015-09-17Microsoft CorporationInstant Messaging
US10021044B2 (en)2014-03-142018-07-10Microsoft Technology Licensing, LlcInstant messaging
US20200050306A1 (en)*2016-11-302020-02-13Microsoft Technology Licensing, LlcSentiment-based interaction method and apparatus
US20220198531A1 (en)*2020-12-172022-06-23Kyndryl, Inc.Pre-packaging and pre-configuration of software products using chatbots
US11544452B2 (en)*2016-08-102023-01-03Airbnb, Inc.Generating templates for automated user interface components and validation rules based on context

Families Citing this family (44)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20130159919A1 (en)2011-12-192013-06-20Gabriel LeydonSystems and Methods for Identifying and Suggesting Emoticons
WO2013166588A1 (en)2012-05-082013-11-14Bitstrips Inc.System and method for adaptable avatars
US9746990B2 (en)2012-09-282017-08-29Intel CorporationSelectively augmenting communications transmitted by a communication device
US9196173B2 (en)*2012-10-052015-11-24International Business Machines CorporationVisualizing the mood of a group of individuals
US9692839B2 (en)*2013-03-132017-06-27Arris Enterprises, Inc.Context emotion determination system
US20150149939A1 (en)*2013-11-252015-05-28Cellco Partnership D/B/A Verizon WirelessVariable user interface theme customization
JP5702478B1 (en)*2014-01-152015-04-15ナレッジスイート株式会社 Personal information management system and personal information management program
US20170083865A1 (en)*2014-06-092017-03-23Hewlett Packard Enterprise Development LpContext-based experience
US9043196B1 (en)2014-07-072015-05-26Machine Zone, Inc.Systems and methods for identifying and suggesting emoticons
US10824440B2 (en)2014-08-222020-11-03Sensoriant, Inc.Deriving personalized experiences of smart environments
US11621932B2 (en)*2014-10-312023-04-04Avaya Inc.System and method for managing resources of an enterprise
US10387173B1 (en)2015-03-272019-08-20Intuit Inc.Method and system for using emotional state data to tailor the user experience of an interactive software system
US10169827B1 (en)2015-03-272019-01-01Intuit Inc.Method and system for adapting a user experience provided through an interactive software system to the content being delivered and the predicted emotional impact on the user of that content
US9930102B1 (en)*2015-03-272018-03-27Intuit Inc.Method and system for using emotional state data to tailor the user experience of an interactive software system
CN104834450B (en)*2015-05-292018-05-25魅族科技(中国)有限公司A kind of function activating method and terminal
US11073960B2 (en)*2015-07-092021-07-27Sensoriant, Inc.Method and system for creating adaptive user interfaces using user provided and controlled data
US10332122B1 (en)2015-07-272019-06-25Intuit Inc.Obtaining and analyzing user physiological data to determine whether a user would benefit from user support
JP6601069B2 (en)*2015-09-012019-11-06カシオ計算機株式会社 Dialog control apparatus, dialog control method, and program
CN106502712A (en)*2015-09-072017-03-15北京三星通信技术研究有限公司APP improved methods and system based on user operation
CN105159541B (en)*2015-09-212019-02-22无锡知谷网络科技有限公司 Multimedia terminal for airport service and display method thereof
EP3355789A1 (en)*2015-09-302018-08-08Koninklijke Philips N.V.Assistance system for cognitively impaired persons
US10394323B2 (en)*2015-12-042019-08-27International Business Machines CorporationTemplates associated with content items based on cognitive states
US20170213138A1 (en)*2016-01-272017-07-27Machine Zone, Inc.Determining user sentiment in chat data
AT518314A1 (en)*2016-03-142017-09-15Econob - Informationsdienstleistungs Gmbh Steering method for controlling a terminal and control device for its implementation
US10489509B2 (en)*2016-03-142019-11-26International Business Machines CorporationPersonality based sentiment analysis of textual information written in natural language
US10339365B2 (en)2016-03-312019-07-02Snap Inc.Automated avatar generation
US10686899B2 (en)*2016-04-062020-06-16Snap Inc.Messaging achievement pictograph display system
US10360708B2 (en)2016-06-302019-07-23Snap Inc.Avatar based ideogram generation
US10579742B1 (en)*2016-08-302020-03-03United Services Automobile Association (Usaa)Biometric signal analysis for communication enhancement and transformation
US10192551B2 (en)2016-08-302019-01-29Google LlcUsing textual input and user state information to generate reply content to present in response to the textual input
US10432559B2 (en)*2016-10-242019-10-01Snap Inc.Generating and displaying customized avatars in electronic messages
US10454857B1 (en)2017-01-232019-10-22Snap Inc.Customized digital avatar accessories
US11893647B2 (en)2017-04-272024-02-06Snap Inc.Location-based virtual avatars
CN110800018A (en)2017-04-272020-02-14斯纳普公司Friend location sharing mechanism for social media platform
US10212541B1 (en)2017-04-272019-02-19Snap Inc.Selective location-based identity communication
US10423724B2 (en)*2017-05-192019-09-24Bioz, Inc.Optimizations of search engines for merging search results
JP6639444B2 (en)2017-06-072020-02-05本田技研工業株式会社 Information providing apparatus and information providing method
US10838967B2 (en)*2017-06-082020-11-17Microsoft Technology Licensing, LlcEmotional intelligence for a conversational chatbot
US10922490B2 (en)*2017-06-222021-02-16Microsoft Technology Licensing, LlcSystem and method for authoring electronic messages
CN108984229B (en)*2018-07-242021-11-26广东小天才科技有限公司Application program starting control method and family education equipment
JP6816247B2 (en)*2019-12-242021-01-20本田技研工業株式会社 Information provider
US11700426B2 (en)2021-02-232023-07-11Firefly 14, LlcVirtual platform for recording and displaying responses and reactions to audiovisual contents
WO2024254153A2 (en)*2023-06-052024-12-12Synchron Australia Pty LimitedGenerative content for communication assistance
JP2025044232A (en)*2023-09-192025-04-01ソフトバンクグループ株式会社 system

Citations (173)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US4559604A (en)1980-09-191985-12-17Hitachi, Ltd.Pattern recognition method
US5796866A (en)1993-12-091998-08-18Matsushita Electric Industrial Co., Ltd.Apparatus and method for editing handwritten stroke
US5873107A (en)1996-03-291999-02-16Apple Computer, Inc.System for automatically retrieving information relevant to text being authored
US5987415A (en)*1998-03-231999-11-16Microsoft CorporationModeling a user's emotion and personality in a computer user interface
US5995928A (en)1996-10-021999-11-30Speechworks International, Inc.Method and apparatus for continuous spelling speech recognition with early identification
US6014638A (en)*1996-05-292000-01-11America Online, Inc.System for customizing computer displays in accordance with user preferences
US6076056A (en)1997-09-192000-06-13Microsoft CorporationSpeech recognition system for recognizing continuous and isolated speech
US6085160A (en)1998-07-102000-07-04Lernout & Hauspie Speech Products N.V.Language independent speech recognition
US6092044A (en)1997-03-282000-07-18Dragon Systems, Inc.Pronunciation generation in speech recognition
US6236964B1 (en)1990-02-012001-05-22Canon Kabushiki KaishaSpeech recognition apparatus and method for matching inputted speech and a word generated from stored referenced phoneme data
US6247043B1 (en)1998-06-112001-06-12International Business Machines CorporationApparatus, program products and methods utilizing intelligent contact management
US20020005784A1 (en)*1998-10-302002-01-17Balkin Thomas J.System and method for predicting human cognitive performance using data from an actigraph
US6363342B2 (en)1998-12-182002-03-26Matsushita Electric Industrial Co., Ltd.System for developing word-pronunciation pairs
US6377965B1 (en)1997-11-072002-04-23Microsoft CorporationAutomatic word completion system for partially entered data
US6408266B1 (en)1997-04-012002-06-18Yeong Kaung OonDidactic and content oriented word processing method with incrementally changed belief system
US6460015B1 (en)1998-12-152002-10-01International Business Machines CorporationMethod, system and computer program product for automatic character transliteration in a text string object
US20020188603A1 (en)2001-06-062002-12-12Baird Bruce R.Methods and systems for user activated automated searching
US20030041147A1 (en)2001-08-202003-02-27Van Den Oord Stefan M.System and method for asynchronous client server session communication
US20030160830A1 (en)2002-02-222003-08-28Degross Lee M.Pop-up edictionary
US20030179229A1 (en)*2002-03-252003-09-25Julian Van ErlachBiometrically-determined device interface and content
US6731307B1 (en)*2000-10-302004-05-04Koninklije Philips Electronics N.V.User interface/entertainment device that simulates personal interaction and responds to user's mental state and/or personality
US6732074B1 (en)1999-01-282004-05-04Ricoh Company, Ltd.Device for speech recognition with dictionary updating
US6801893B1 (en)1999-06-302004-10-05International Business Machines CorporationMethod and apparatus for expanding the vocabulary of a speech system
US20040220925A1 (en)2001-11-302004-11-04Microsoft CorporationMedia agent
US20040243415A1 (en)2003-06-022004-12-02International Business Machines CorporationArchitecture for a speech input method editor for handheld portable devices
US20050144162A1 (en)2003-12-292005-06-30Ping LiangAdvanced search, file system, and intelligent assistant agent
US6941267B2 (en)2001-03-022005-09-06Fujitsu LimitedSpeech data compression/expansion apparatus and method
US20050203738A1 (en)2004-03-102005-09-15Microsoft CorporationNew-word pronunciation learning using a pronunciation graph
US20050216253A1 (en)2004-03-252005-09-29Microsoft CorporationSystem and method for reverse transliteration using statistical alignment
US6963841B2 (en)2000-04-212005-11-08Lessac Technology, Inc.Speech training method with alternative proper pronunciation database
US20060026147A1 (en)2004-07-302006-02-02Cone Julian MAdaptive search engine
US7069254B2 (en)2000-04-182006-06-27Icplanet CorporationInteractive intelligent searching with executable suggestions
US20060167857A1 (en)2004-07-292006-07-27Yahoo! Inc.Systems and methods for contextual transaction proposals
US7089504B1 (en)*2000-05-022006-08-08Walt FroloffSystem and method for embedment of emotive content in modern text processing, publishing and communication
US20060190822A1 (en)*2005-02-222006-08-24International Business Machines CorporationPredictive user modeling in user interface design
US7099876B1 (en)1998-12-152006-08-29International Business Machines CorporationMethod, system and computer program product for storing transliteration and/or phonetic spelling information in a text string class
US7107204B1 (en)2000-04-242006-09-12Microsoft CorporationComputer-aided writing system and method with cross-language writing wizard
US20060204142A1 (en)2005-03-112006-09-14Alamy LimitedRanking of images in the results of a search
US20060206324A1 (en)2005-02-052006-09-14Aurix LimitedMethods and apparatus relating to searching of spoken audio data
CN1851617A (en)2005-04-222006-10-25英华达(上海)电子有限公司Converting device and method for mobile device making OCR convenient and input to existing editor
US20060242608A1 (en)2005-03-172006-10-26Microsoft CorporationRedistribution of space between text segments
US20060248074A1 (en)2005-04-282006-11-02International Business Machines CorporationTerm-statistics modification for category-based search
US7165032B2 (en)2002-09-132007-01-16Apple Computer, Inc.Unsupervised data-driven pronunciation modeling
US20070016422A1 (en)2005-07-122007-01-18Shinsuke MoriAnnotating phonemes and accents for text-to-speech system
US20070033269A1 (en)2005-07-292007-02-08Atkinson Gregory OComputer method and apparatus using embedded message window for displaying messages in a functional bar
US20070050339A1 (en)2005-08-242007-03-01Richard KasperskiBiasing queries to determine suggested queries
US20070052868A1 (en)2005-09-022007-03-08Charisma Communications, Inc.Multimedia accessible universal input device
US7194538B1 (en)2002-06-042007-03-20Veritas Operating CorporationStorage area network (SAN) management system for discovering SAN components using a SAN management server
US20070089125A1 (en)*2003-12-222007-04-19Koninklijke Philips Electronic, N.V.Content-processing system, method, and computer program product for monitoring the viewer's mood
US20070088686A1 (en)2005-10-142007-04-19Microsoft CorporationSearch results injected into client applications
US7224346B2 (en)2001-06-112007-05-29International Business Machines CorporationNon-native language writing aid method and tool
US20070124132A1 (en)2005-11-302007-05-31Mayo TakeuchiMethod, system and computer program product for composing a reply to a text message received in a messaging application
US20070150279A1 (en)2005-12-272007-06-28Oracle International CorporationWord matching with context sensitive character to sound correlating
US20070162281A1 (en)2006-01-102007-07-12Nissan Motor Co., Ltd.Recognition dictionary system and recognition dictionary system updating method
US20070167689A1 (en)*2005-04-012007-07-19Motorola, Inc.Method and system for enhancing a user experience using a user's physiological state
US20070192710A1 (en)*2006-02-152007-08-16Frank PlatzLean context driven user interface
US20070208738A1 (en)2006-03-032007-09-06Morgan Brian STechniques for providing suggestions for creating a search query
US20070214164A1 (en)2006-03-102007-09-13Microsoft CorporationUnstructured data in a mining model language
US20070213983A1 (en)2006-03-082007-09-13Microsoft CorporationSpell checking system including a phonetic speller
US7277029B2 (en)2005-06-232007-10-02Microsoft CorporationUsing language models to expand wildcards
US20070233692A1 (en)2006-04-032007-10-04Lisa Steven GSystem, methods and applications for embedded internet searching and result display
US20070255567A1 (en)2006-04-272007-11-01At&T Corp.System and method for generating a pronunciation dictionary
US20080046405A1 (en)2006-08-162008-02-21Microsoft CorporationQuery speller
US7353247B2 (en)*2001-10-192008-04-01Microsoft CorporationQuerying applications using online messenger service
US7360151B1 (en)*2003-05-272008-04-15Walt FroloffSystem and method for creating custom specific text and emotive content message response templates for textual communications
US7370275B2 (en)2003-10-242008-05-06Microsoft CorporationSystem and method for providing context to an input method by tagging existing applications
US7389223B2 (en)2003-09-182008-06-17International Business Machines CorporationMethod and apparatus for testing a software program using mock translation input method editor
US20080167858A1 (en)2007-01-052008-07-10Greg ChristieMethod and system for providing word recommendations for text input
US20080171555A1 (en)2007-01-112008-07-17Helio, LlcLocation-based text messaging
US20080189628A1 (en)*2006-08-022008-08-07Stefan LiescheAutomatically adapting a user interface
US20080195645A1 (en)2006-10-172008-08-14Silverbrook Research Pty LtdMethod of providing information via context searching of a printed graphic image
US20080195980A1 (en)*2007-02-092008-08-14Margaret MorrisSystem, apparatus and method for emotional experience time sampling via a mobile graphical user interface
US20080208567A1 (en)2007-02-282008-08-28Chris BrockettWeb-based proofing and usage guidance
US20080221893A1 (en)2007-03-012008-09-11Adapx, Inc.System and method for dynamic learning
US7447627B2 (en)2003-10-232008-11-04Microsoft CorporationCompound word breaker and spell checker
US20080288474A1 (en)2007-05-162008-11-20Google Inc.Cross-language information retrieval
US20080294982A1 (en)2007-05-212008-11-27Microsoft CorporationProviding relevant text auto-completions
US20080312910A1 (en)2007-06-142008-12-18Po ZhangDictionary word and phrase determination
US20090002178A1 (en)*2007-06-292009-01-01Microsoft CorporationDynamic mood sensing
US7490033B2 (en)2005-01-132009-02-10International Business Machines CorporationSystem for compiling word usage frequencies
US20090043584A1 (en)2007-08-062009-02-12Lawrence Brooke Frank PhilipsSystem and method for phonetic representation
US20090043741A1 (en)2007-08-092009-02-12Dohyung KimAutocompletion and Automatic Input Method Correction for Partially Entered Search Query
US7505954B2 (en)2004-08-182009-03-17International Business Machines CorporationSearch bar with intelligent parametric search statement generation
US20090077464A1 (en)2007-09-132009-03-19Apple Inc.Input methods for device having multi-language environment
US7512904B2 (en)2005-03-222009-03-31Microsoft CorporationOperating system launch menu program listing
US20090128567A1 (en)*2007-11-152009-05-21Brian Mark ShusterMulti-instance, multi-user animation with coordinated chat
US20090154795A1 (en)2007-12-122009-06-18Microsoft CorporationInteractive concept learning in image search
US7555713B2 (en)2005-02-222009-06-30George Liang YangWriting and reading aid system
US7562082B2 (en)2002-09-192009-07-14Microsoft CorporationMethod and system for detecting user intentions in retrieval of hint sentences
US7565157B1 (en)2005-11-182009-07-21A9.Com, Inc.System and method for providing search results based on location
US20090187824A1 (en)*2008-01-212009-07-23Microsoft CorporationSelf-revelation aids for interfaces
US20090210214A1 (en)2008-02-192009-08-20Jiang QianUniversal Language Input
US20090216690A1 (en)2008-02-262009-08-27Microsoft CorporationPredicting Candidates Using Input Scopes
US20090222437A1 (en)2008-03-032009-09-03Microsoft CorporationCross-lingual search re-ranking
US7599915B2 (en)2005-01-242009-10-06At&T Intellectual Property I, L.P.Portal linking tool
US20090313239A1 (en)2008-06-162009-12-17Microsoft CorporationAdaptive Visual Similarity for Text-Based Image Search Results Re-ranking
US20100005086A1 (en)2008-07-032010-01-07Google Inc.Resource locator suggestions from input character sequence
US7689412B2 (en)2003-12-052010-03-30Microsoft CorporationSynonymous collocation extraction using translation information
US20100122155A1 (en)2006-09-142010-05-13Stragent, LlcOnline marketplace for automatically extracted data
US7725318B2 (en)2004-07-302010-05-25Nice Systems Inc.System and method for improving the accuracy of audio searching
US7728735B2 (en)*2007-12-042010-06-01At&T Intellectual Property I, L.P.Methods, apparatus, and computer program products for estimating a mood of a user, using a mood of a user for network/service control, and presenting suggestions for interacting with a user based on the user's mood
US20100138210A1 (en)2008-12-022010-06-03Electronics And Telecommunications Research InstitutePost-editing apparatus and method for correcting translation errors
US20100146407A1 (en)*2008-01-092010-06-10Bokor Brian RAutomated avatar mood effects in a virtual world
US20100169770A1 (en)2007-04-112010-07-01Google Inc.Input method editor having a secondary language mode
US7752034B2 (en)2003-11-122010-07-06Microsoft CorporationWriting assistance using machine translation techniques
CN101276245B (en)2008-04-162010-07-07北京搜狗科技发展有限公司Reminding method and system for coding to correct error in input process
US20100180199A1 (en)2007-06-012010-07-15Google Inc.Detecting name entities and new words
US20100217795A1 (en)2007-04-092010-08-26Google Inc.Input method editor user profiles
US20100217581A1 (en)2007-04-102010-08-26Google Inc.Multi-Mode Input Method Editor
WO2010105440A1 (en)2009-03-202010-09-23Google Inc.Interaction with ime computing device
US20100245251A1 (en)2009-03-252010-09-30Hong Fu Jin Precision Industry (Shenzhen) Co., LtdMethod of switching input method editor
US20100251304A1 (en)2009-03-302010-09-30Donoghue Patrick JPersonal media channel apparatus and methods
US20100306139A1 (en)2007-12-062010-12-02Google Inc.Cjk name detection
US20100306248A1 (en)2009-05-272010-12-02International Business Machines CorporationDocument processing method and system
US20100309137A1 (en)2009-06-052010-12-09Yahoo! Inc.All-in-one chinese character input method
US20110014952A1 (en)*2009-07-152011-01-20Sony Ericsson Mobile Communications AbAudio recognition during voice sessions to provide enhanced user interface functionality
US7881934B2 (en)*2003-09-122011-02-01Toyota Infotechnology Center Co., Ltd.Method and system for adjusting the voice prompt of an interactive system based upon the user's state
US20110041077A1 (en)*2006-06-052011-02-17Bruce ReinerMethod and apparatus for adapting computer-based systems to end-user profiles
US20110060761A1 (en)2009-09-082011-03-10Kenneth Peyton FoutsInteractive writing aid to assist a user in finding information and incorporating information correctly into a written work
US20110066431A1 (en)2009-09-152011-03-17Mediatek Inc.Hand-held input apparatus and input method for inputting data to a remote receiving device
US7917355B2 (en)2007-08-232011-03-29Google Inc.Word detection
US20110087483A1 (en)*2009-10-092011-04-14Institute For Information IndustryEmotion analyzing method, emotion analyzing system, computer readable and writable recording medium and emotion analyzing device
US7930676B1 (en)*2007-04-272011-04-19Intuit Inc.System and method for adapting software elements based on mood state profiling
US20110107265A1 (en)*2008-10-162011-05-05Bank Of America CorporationCustomizable graphical user interface
US7953730B1 (en)2006-03-022011-05-31A9.Com, Inc.System and method for presenting a search history
US20110131642A1 (en)2009-11-272011-06-02Google Inc.Client-server input method editor architecture
US7957969B2 (en)2008-03-312011-06-07Nuance Communications, Inc.Systems and methods for building a native language phoneme lexicon having native pronunciations of non-native words derived from non-native pronunciatons
US20110137635A1 (en)2009-12-082011-06-09Microsoft CorporationTransliterating semitic languages including diacritics
US20110161080A1 (en)2009-12-232011-06-30Google Inc.Speech to Text Conversion
US20110161311A1 (en)2009-12-282011-06-30Yahoo! Inc.Search suggestion clustering and presentation
US20110173172A1 (en)2007-04-112011-07-14Google Inc.Input method editor integration
US7983910B2 (en)*2006-03-032011-07-19International Business Machines CorporationCommunicating across voice and text channels with emotion preservation
US20110178981A1 (en)2010-01-212011-07-21International Business Machines CorporationCollecting community feedback for collaborative document development
US20110184723A1 (en)2010-01-252011-07-28Microsoft CorporationPhonetic suggestion engine
US20110188756A1 (en)2010-02-032011-08-04Samsung Electronics Co., Ltd.E-dictionary search apparatus and method for document in which korean characters and chinese characters are mixed
US20110191321A1 (en)2010-02-012011-08-04Microsoft CorporationContextual display advertisements for a webpage
US20110202876A1 (en)2010-02-122011-08-18Microsoft CorporationUser-centric soft keyboard predictive technologies
US20110258535A1 (en)2010-04-202011-10-20Scribd, Inc.Integrated document viewer with automatic sharing of reading-related activities across external social networks
US20110289105A1 (en)2010-05-182011-11-24Tabulaw, Inc.Framework for conducting legal research and writing based on accumulated legal knowledge
US20110296324A1 (en)*2010-06-012011-12-01Apple Inc.Avatars Reflecting User States
CN102314441A (en)2010-06-302012-01-11百度在线网络技术(北京)有限公司Method for user to input individualized primitive data and equipment and system
US20120016678A1 (en)2010-01-182012-01-19Apple Inc.Intelligent Automated Assistant
US20120023103A1 (en)2009-01-212012-01-26Telefonaktiebolaget Lm Ericsson (Publ)Generation of Annotation Tags Based on Multimodal Metadata and Structured Semantic Descriptors
US20120029902A1 (en)2010-07-272012-02-02Fang LuMode supporting multiple language input for entering text
US20120036468A1 (en)2010-08-032012-02-09Nokia CorporationUser input remapping
US20120035932A1 (en)2010-08-062012-02-09Google Inc.Disambiguating Input Based on Context
US20120041752A1 (en)2010-04-122012-02-16Yong-Gang WangExtension framework for input method editor
US20120060113A1 (en)2010-09-082012-03-08Nuance Communications, Inc.Methods and apparatus for displaying content
US20120060147A1 (en)2007-04-092012-03-08Google Inc.Client input method
US20120078611A1 (en)*2010-09-272012-03-29Sap AgContext-aware conversational user interface
US8161073B2 (en)2010-05-052012-04-17Holovisions, LLCContext-driven search
US20120117506A1 (en)2010-11-052012-05-10Jonathan KochDevice, Method, and Graphical User Interface for Manipulating Soft Keyboards
US20120143897A1 (en)2010-12-032012-06-07Microsoft CorporationWild Card Auto Completion
CN101661474B (en)2008-08-262012-07-04华为技术有限公司Search method and system
US20120173222A1 (en)2011-01-052012-07-05Google Inc.Method and system for facilitating text input
US8230336B2 (en)2009-04-222012-07-24Microsoft CorporationEfficient discovery, display, and autocompletion of links to wiki resources
US20120297294A1 (en)2011-05-172012-11-22Microsoft CorporationNetwork search for writing assistance
US20120297332A1 (en)2011-05-202012-11-22Microsoft CorporationAdvanced prediction
US20130016113A1 (en)*2011-07-122013-01-17Sony CorporationContext aware user interface system
US20130054617A1 (en)2011-08-302013-02-28Alison Williams ColmanLinking Browser Content to Social Network Data
US20130091409A1 (en)2011-10-072013-04-11Agile Insights, LlcMethod and system for dynamic assembly of multimedia presentation threads
US20130132359A1 (en)2011-11-212013-05-23Michelle I. LeeGrouped search query refinements
US20130159920A1 (en)2011-12-202013-06-20Microsoft CorporationScenario-adaptive input method editor
US8539359B2 (en)*2009-02-112013-09-17Jeffrey A. RapaportSocial network driven indexing system for instantly clustering people with concurrent focus on same topic into on-topic chat rooms and/or for generating on-topic search results tailored to user preferences regarding topic
US8564684B2 (en)*2011-08-172013-10-22Digimarc CorporationEmotional illumination, and related arrangements
US20130298030A1 (en)2011-11-032013-11-07Aaron NahumiSystem, methods and computer readable medium for Augmented Personalized Social Network
US8597031B2 (en)*2008-07-282013-12-03Breakthrough Performancetech, LlcSystems and methods for computerized interactive skill training
US20130346872A1 (en)2012-06-252013-12-26Microsoft CorporationInput method editor application platform
US20140040238A1 (en)2012-08-062014-02-06Microsoft CorporationBusiness Intelligent In-Document Suggestions
US8762356B1 (en)2011-07-152014-06-24Google Inc.Detecting change in rate of input reception
US20150106702A1 (en)2012-06-292015-04-16Microsoft CorporationCross-Lingual Input Method Editor
US20150121291A1 (en)2012-06-292015-04-30Microsoft CorporationInput Method Editor
US20150161126A1 (en)2012-08-302015-06-11Microsoft CorporationFeature-Based Candidate Selection

Patent Citations (182)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US4559604A (en)1980-09-191985-12-17Hitachi, Ltd.Pattern recognition method
US6236964B1 (en)1990-02-012001-05-22Canon Kabushiki KaishaSpeech recognition apparatus and method for matching inputted speech and a word generated from stored referenced phoneme data
US5796866A (en)1993-12-091998-08-18Matsushita Electric Industrial Co., Ltd.Apparatus and method for editing handwritten stroke
US5873107A (en)1996-03-291999-02-16Apple Computer, Inc.System for automatically retrieving information relevant to text being authored
US6014638A (en)*1996-05-292000-01-11America Online, Inc.System for customizing computer displays in accordance with user preferences
US5995928A (en)1996-10-021999-11-30Speechworks International, Inc.Method and apparatus for continuous spelling speech recognition with early identification
US6092044A (en)1997-03-282000-07-18Dragon Systems, Inc.Pronunciation generation in speech recognition
US6408266B1 (en)1997-04-012002-06-18Yeong Kaung OonDidactic and content oriented word processing method with incrementally changed belief system
US6076056A (en)1997-09-192000-06-13Microsoft CorporationSpeech recognition system for recognizing continuous and isolated speech
US6377965B1 (en)1997-11-072002-04-23Microsoft CorporationAutomatic word completion system for partially entered data
US5987415A (en)*1998-03-231999-11-16Microsoft CorporationModeling a user's emotion and personality in a computer user interface
US6247043B1 (en)1998-06-112001-06-12International Business Machines CorporationApparatus, program products and methods utilizing intelligent contact management
US6085160A (en)1998-07-102000-07-04Lernout & Hauspie Speech Products N.V.Language independent speech recognition
US20020005784A1 (en)*1998-10-302002-01-17Balkin Thomas J.System and method for predicting human cognitive performance using data from an actigraph
US7099876B1 (en)1998-12-152006-08-29International Business Machines CorporationMethod, system and computer program product for storing transliteration and/or phonetic spelling information in a text string class
US6460015B1 (en)1998-12-152002-10-01International Business Machines CorporationMethod, system and computer program product for automatic character transliteration in a text string object
US6363342B2 (en)1998-12-182002-03-26Matsushita Electric Industrial Co., Ltd.System for developing word-pronunciation pairs
US6732074B1 (en)1999-01-282004-05-04Ricoh Company, Ltd.Device for speech recognition with dictionary updating
US6801893B1 (en)1999-06-302004-10-05International Business Machines CorporationMethod and apparatus for expanding the vocabulary of a speech system
US7069254B2 (en)2000-04-182006-06-27Icplanet CorporationInteractive intelligent searching with executable suggestions
US6963841B2 (en)2000-04-212005-11-08Lessac Technology, Inc.Speech training method with alternative proper pronunciation database
US7107204B1 (en)2000-04-242006-09-12Microsoft CorporationComputer-aided writing system and method with cross-language writing wizard
US7089504B1 (en)*2000-05-022006-08-08Walt FroloffSystem and method for embedment of emotive content in modern text processing, publishing and communication
US6731307B1 (en)*2000-10-302004-05-04Koninklije Philips Electronics N.V.User interface/entertainment device that simulates personal interaction and responds to user's mental state and/or personality
US6941267B2 (en)2001-03-022005-09-06Fujitsu LimitedSpeech data compression/expansion apparatus and method
US20020188603A1 (en)2001-06-062002-12-12Baird Bruce R.Methods and systems for user activated automated searching
US7308439B2 (en)2001-06-062007-12-11Hyperthink LlcMethods and systems for user activated automated searching
US7224346B2 (en)2001-06-112007-05-29International Business Machines CorporationNon-native language writing aid method and tool
US20030041147A1 (en)2001-08-202003-02-27Van Den Oord Stefan M.System and method for asynchronous client server session communication
US7353247B2 (en)*2001-10-192008-04-01Microsoft CorporationQuerying applications using online messenger service
US20040220925A1 (en)2001-11-302004-11-04Microsoft CorporationMedia agent
US20030160830A1 (en)2002-02-222003-08-28Degross Lee M.Pop-up edictionary
US20030179229A1 (en)*2002-03-252003-09-25Julian Van ErlachBiometrically-determined device interface and content
US7194538B1 (en)2002-06-042007-03-20Veritas Operating CorporationStorage area network (SAN) management system for discovering SAN components using a SAN management server
US7165032B2 (en)2002-09-132007-01-16Apple Computer, Inc.Unsupervised data-driven pronunciation modeling
US7562082B2 (en)2002-09-192009-07-14Microsoft CorporationMethod and system for detecting user intentions in retrieval of hint sentences
US7360151B1 (en)*2003-05-272008-04-15Walt FroloffSystem and method for creating custom specific text and emotive content message response templates for textual communications
US20040243415A1 (en)2003-06-022004-12-02International Business Machines CorporationArchitecture for a speech input method editor for handheld portable devices
US7881934B2 (en)*2003-09-122011-02-01Toyota Infotechnology Center Co., Ltd.Method and system for adjusting the voice prompt of an interactive system based upon the user's state
US7389223B2 (en)2003-09-182008-06-17International Business Machines CorporationMethod and apparatus for testing a software program using mock translation input method editor
US7447627B2 (en)2003-10-232008-11-04Microsoft CorporationCompound word breaker and spell checker
US7370275B2 (en)2003-10-242008-05-06Microsoft CorporationSystem and method for providing context to an input method by tagging existing applications
US7752034B2 (en)2003-11-122010-07-06Microsoft CorporationWriting assistance using machine translation techniques
US7689412B2 (en)2003-12-052010-03-30Microsoft CorporationSynonymous collocation extraction using translation information
US20070089125A1 (en)*2003-12-222007-04-19Koninklijke Philips Electronic, N.V.Content-processing system, method, and computer program product for monitoring the viewer's mood
US20050144162A1 (en)2003-12-292005-06-30Ping LiangAdvanced search, file system, and intelligent assistant agent
US20050203738A1 (en)2004-03-102005-09-15Microsoft CorporationNew-word pronunciation learning using a pronunciation graph
US20050216253A1 (en)2004-03-252005-09-29Microsoft CorporationSystem and method for reverse transliteration using statistical alignment
US7451152B2 (en)2004-07-292008-11-11Yahoo! Inc.Systems and methods for contextual transaction proposals
US20060167857A1 (en)2004-07-292006-07-27Yahoo! Inc.Systems and methods for contextual transaction proposals
US20060026147A1 (en)2004-07-302006-02-02Cone Julian MAdaptive search engine
US7725318B2 (en)2004-07-302010-05-25Nice Systems Inc.System and method for improving the accuracy of audio searching
US7505954B2 (en)2004-08-182009-03-17International Business Machines CorporationSearch bar with intelligent parametric search statement generation
US7490033B2 (en)2005-01-132009-02-10International Business Machines CorporationSystem for compiling word usage frequencies
US7599915B2 (en)2005-01-242009-10-06At&T Intellectual Property I, L.P.Portal linking tool
US20060206324A1 (en)2005-02-052006-09-14Aurix LimitedMethods and apparatus relating to searching of spoken audio data
US7555713B2 (en)2005-02-222009-06-30George Liang YangWriting and reading aid system
US20060190822A1 (en)*2005-02-222006-08-24International Business Machines CorporationPredictive user modeling in user interface design
US20060204142A1 (en)2005-03-112006-09-14Alamy LimitedRanking of images in the results of a search
US20060242608A1 (en)2005-03-172006-10-26Microsoft CorporationRedistribution of space between text segments
US7512904B2 (en)2005-03-222009-03-31Microsoft CorporationOperating system launch menu program listing
US20070167689A1 (en)*2005-04-012007-07-19Motorola, Inc.Method and system for enhancing a user experience using a user's physiological state
CN1851617A (en)2005-04-222006-10-25英华达(上海)电子有限公司Converting device and method for mobile device making OCR convenient and input to existing editor
US20060248074A1 (en)2005-04-282006-11-02International Business Machines CorporationTerm-statistics modification for category-based search
US7277029B2 (en)2005-06-232007-10-02Microsoft CorporationUsing language models to expand wildcards
US20070016422A1 (en)2005-07-122007-01-18Shinsuke MoriAnnotating phonemes and accents for text-to-speech system
US20070033269A1 (en)2005-07-292007-02-08Atkinson Gregory OComputer method and apparatus using embedded message window for displaying messages in a functional bar
US20070050339A1 (en)2005-08-242007-03-01Richard KasperskiBiasing queries to determine suggested queries
US7844599B2 (en)2005-08-242010-11-30Yahoo! Inc.Biasing queries to determine suggested queries
US20070052868A1 (en)2005-09-022007-03-08Charisma Communications, Inc.Multimedia accessible universal input device
US7676517B2 (en)2005-10-142010-03-09Microsoft CorporationSearch results injected into client applications
US20070088686A1 (en)2005-10-142007-04-19Microsoft CorporationSearch results injected into client applications
US7565157B1 (en)2005-11-182009-07-21A9.Com, Inc.System and method for providing search results based on location
US20070124132A1 (en)2005-11-302007-05-31Mayo TakeuchiMethod, system and computer program product for composing a reply to a text message received in a messaging application
US20070150279A1 (en)2005-12-272007-06-28Oracle International CorporationWord matching with context sensitive character to sound correlating
US20070162281A1 (en)2006-01-102007-07-12Nissan Motor Co., Ltd.Recognition dictionary system and recognition dictionary system updating method
US20070192710A1 (en)*2006-02-152007-08-16Frank PlatzLean context driven user interface
US7953730B1 (en)2006-03-022011-05-31A9.Com, Inc.System and method for presenting a search history
US20070208738A1 (en)2006-03-032007-09-06Morgan Brian STechniques for providing suggestions for creating a search query
US7983910B2 (en)*2006-03-032011-07-19International Business Machines CorporationCommunicating across voice and text channels with emotion preservation
US20070213983A1 (en)2006-03-082007-09-13Microsoft CorporationSpell checking system including a phonetic speller
US20070214164A1 (en)2006-03-102007-09-13Microsoft CorporationUnstructured data in a mining model language
US20070233692A1 (en)2006-04-032007-10-04Lisa Steven GSystem, methods and applications for embedded internet searching and result display
US20070255567A1 (en)2006-04-272007-11-01At&T Corp.System and method for generating a pronunciation dictionary
US20110041077A1 (en)*2006-06-052011-02-17Bruce ReinerMethod and apparatus for adapting computer-based systems to end-user profiles
US20080189628A1 (en)*2006-08-022008-08-07Stefan LiescheAutomatically adapting a user interface
US20080046405A1 (en)2006-08-162008-02-21Microsoft CorporationQuery speller
US20100122155A1 (en)2006-09-142010-05-13Stragent, LlcOnline marketplace for automatically extracted data
US20080195645A1 (en)2006-10-172008-08-14Silverbrook Research Pty LtdMethod of providing information via context searching of a printed graphic image
US7957955B2 (en)2007-01-052011-06-07Apple Inc.Method and system for providing word recommendations for text input
US20080167858A1 (en)2007-01-052008-07-10Greg ChristieMethod and system for providing word recommendations for text input
US20080171555A1 (en)2007-01-112008-07-17Helio, LlcLocation-based text messaging
US20080195980A1 (en)*2007-02-092008-08-14Margaret MorrisSystem, apparatus and method for emotional experience time sampling via a mobile graphical user interface
US20080208567A1 (en)2007-02-282008-08-28Chris BrockettWeb-based proofing and usage guidance
US20080221893A1 (en)2007-03-012008-09-11Adapx, Inc.System and method for dynamic learning
US20100217795A1 (en)2007-04-092010-08-26Google Inc.Input method editor user profiles
US20120060147A1 (en)2007-04-092012-03-08Google Inc.Client input method
US20100217581A1 (en)2007-04-102010-08-26Google Inc.Multi-Mode Input Method Editor
US20110173172A1 (en)2007-04-112011-07-14Google Inc.Input method editor integration
US20100169770A1 (en)2007-04-112010-07-01Google Inc.Input method editor having a secondary language mode
US7930676B1 (en)*2007-04-272011-04-19Intuit Inc.System and method for adapting software elements based on mood state profiling
US20080288474A1 (en)2007-05-162008-11-20Google Inc.Cross-language information retrieval
US20080294982A1 (en)2007-05-212008-11-27Microsoft CorporationProviding relevant text auto-completions
US20100180199A1 (en)2007-06-012010-07-15Google Inc.Detecting name entities and new words
US20080312910A1 (en)2007-06-142008-12-18Po ZhangDictionary word and phrase determination
US20090002178A1 (en)*2007-06-292009-01-01Microsoft CorporationDynamic mood sensing
US20090043584A1 (en)2007-08-062009-02-12Lawrence Brooke Frank PhilipsSystem and method for phonetic representation
US20090043741A1 (en)2007-08-092009-02-12Dohyung KimAutocompletion and Automatic Input Method Correction for Partially Entered Search Query
US7917355B2 (en)2007-08-232011-03-29Google Inc.Word detection
US20090077464A1 (en)2007-09-132009-03-19Apple Inc.Input methods for device having multi-language environment
US20090128567A1 (en)*2007-11-152009-05-21Brian Mark ShusterMulti-instance, multi-user animation with coordinated chat
US7728735B2 (en)*2007-12-042010-06-01At&T Intellectual Property I, L.P.Methods, apparatus, and computer program products for estimating a mood of a user, using a mood of a user for network/service control, and presenting suggestions for interacting with a user based on the user's mood
US20100306139A1 (en)2007-12-062010-12-02Google Inc.Cjk name detection
US20090154795A1 (en)2007-12-122009-06-18Microsoft CorporationInteractive concept learning in image search
US20100146407A1 (en)*2008-01-092010-06-10Bokor Brian RAutomated avatar mood effects in a virtual world
US20090187824A1 (en)*2008-01-212009-07-23Microsoft CorporationSelf-revelation aids for interfaces
US20090210214A1 (en)2008-02-192009-08-20Jiang QianUniversal Language Input
US20090216690A1 (en)2008-02-262009-08-27Microsoft CorporationPredicting Candidates Using Input Scopes
US7917488B2 (en)2008-03-032011-03-29Microsoft CorporationCross-lingual search re-ranking
US20090222437A1 (en)2008-03-032009-09-03Microsoft CorporationCross-lingual search re-ranking
US7957969B2 (en)2008-03-312011-06-07Nuance Communications, Inc.Systems and methods for building a native language phoneme lexicon having native pronunciations of non-native words derived from non-native pronunciatons
CN101276245B (en)2008-04-162010-07-07北京搜狗科技发展有限公司Reminding method and system for coding to correct error in input process
US20090313239A1 (en)2008-06-162009-12-17Microsoft CorporationAdaptive Visual Similarity for Text-Based Image Search Results Re-ranking
US20100005086A1 (en)2008-07-032010-01-07Google Inc.Resource locator suggestions from input character sequence
US8597031B2 (en)*2008-07-282013-12-03Breakthrough Performancetech, LlcSystems and methods for computerized interactive skill training
CN101661474B (en)2008-08-262012-07-04华为技术有限公司Search method and system
US20110107265A1 (en)*2008-10-162011-05-05Bank Of America CorporationCustomizable graphical user interface
US20100138210A1 (en)2008-12-022010-06-03Electronics And Telecommunications Research InstitutePost-editing apparatus and method for correcting translation errors
US20120023103A1 (en)2009-01-212012-01-26Telefonaktiebolaget Lm Ericsson (Publ)Generation of Annotation Tags Based on Multimodal Metadata and Structured Semantic Descriptors
US8539359B2 (en)*2009-02-112013-09-17Jeffrey A. RapaportSocial network driven indexing system for instantly clustering people with concurrent focus on same topic into on-topic chat rooms and/or for generating on-topic search results tailored to user preferences regarding topic
US20120113011A1 (en)2009-03-202012-05-10Genqing WuIme text entry assistance
WO2010105440A1 (en)2009-03-202010-09-23Google Inc.Interaction with ime computing device
US20120019446A1 (en)2009-03-202012-01-26Google Inc.Interaction with ime computing device
US20100245251A1 (en)2009-03-252010-09-30Hong Fu Jin Precision Industry (Shenzhen) Co., LtdMethod of switching input method editor
US20100251304A1 (en)2009-03-302010-09-30Donoghue Patrick JPersonal media channel apparatus and methods
US20120222056A1 (en)2009-03-302012-08-30Donoghue Patrick JPersonal media channel apparatus and methods
US8230336B2 (en)2009-04-222012-07-24Microsoft CorporationEfficient discovery, display, and autocompletion of links to wiki resources
US20100306248A1 (en)2009-05-272010-12-02International Business Machines CorporationDocument processing method and system
US20100309137A1 (en)2009-06-052010-12-09Yahoo! Inc.All-in-one chinese character input method
US20110014952A1 (en)*2009-07-152011-01-20Sony Ericsson Mobile Communications AbAudio recognition during voice sessions to provide enhanced user interface functionality
US20110060761A1 (en)2009-09-082011-03-10Kenneth Peyton FoutsInteractive writing aid to assist a user in finding information and incorporating information correctly into a written work
US20110066431A1 (en)2009-09-152011-03-17Mediatek Inc.Hand-held input apparatus and input method for inputting data to a remote receiving device
US20110087483A1 (en)*2009-10-092011-04-14Institute For Information IndustryEmotion analyzing method, emotion analyzing system, computer readable and writable recording medium and emotion analyzing device
US20110131642A1 (en)2009-11-272011-06-02Google Inc.Client-server input method editor architecture
US20110137635A1 (en)2009-12-082011-06-09Microsoft CorporationTransliterating semitic languages including diacritics
US20110161080A1 (en)2009-12-232011-06-30Google Inc.Speech to Text Conversion
US20110161311A1 (en)2009-12-282011-06-30Yahoo! Inc.Search suggestion clustering and presentation
US20120016678A1 (en)2010-01-182012-01-19Apple Inc.Intelligent Automated Assistant
US20110178981A1 (en)2010-01-212011-07-21International Business Machines CorporationCollecting community feedback for collaborative document development
US20110184723A1 (en)2010-01-252011-07-28Microsoft CorporationPhonetic suggestion engine
US20110191321A1 (en)2010-02-012011-08-04Microsoft CorporationContextual display advertisements for a webpage
US20110188756A1 (en)2010-02-032011-08-04Samsung Electronics Co., Ltd.E-dictionary search apparatus and method for document in which korean characters and chinese characters are mixed
US20110202876A1 (en)2010-02-122011-08-18Microsoft CorporationUser-centric soft keyboard predictive technologies
US20120041752A1 (en)2010-04-122012-02-16Yong-Gang WangExtension framework for input method editor
US20110258535A1 (en)2010-04-202011-10-20Scribd, Inc.Integrated document viewer with automatic sharing of reading-related activities across external social networks
US8161073B2 (en)2010-05-052012-04-17Holovisions, LLCContext-driven search
US20110289105A1 (en)2010-05-182011-11-24Tabulaw, Inc.Framework for conducting legal research and writing based on accumulated legal knowledge
US20110296324A1 (en)*2010-06-012011-12-01Apple Inc.Avatars Reflecting User States
CN102314441A (en)2010-06-302012-01-11百度在线网络技术(北京)有限公司Method for user to input individualized primitive data and equipment and system
US20120029902A1 (en)2010-07-272012-02-02Fang LuMode supporting multiple language input for entering text
US20120036468A1 (en)2010-08-032012-02-09Nokia CorporationUser input remapping
US20120035932A1 (en)2010-08-062012-02-09Google Inc.Disambiguating Input Based on Context
US20120060113A1 (en)2010-09-082012-03-08Nuance Communications, Inc.Methods and apparatus for displaying content
US20120078611A1 (en)*2010-09-272012-03-29Sap AgContext-aware conversational user interface
US20120117506A1 (en)2010-11-052012-05-10Jonathan KochDevice, Method, and Graphical User Interface for Manipulating Soft Keyboards
US20120143897A1 (en)2010-12-032012-06-07Microsoft CorporationWild Card Auto Completion
US20120173222A1 (en)2011-01-052012-07-05Google Inc.Method and system for facilitating text input
US20120297294A1 (en)2011-05-172012-11-22Microsoft CorporationNetwork search for writing assistance
US20120297332A1 (en)2011-05-202012-11-22Microsoft CorporationAdvanced prediction
US20130016113A1 (en)*2011-07-122013-01-17Sony CorporationContext aware user interface system
US8762356B1 (en)2011-07-152014-06-24Google Inc.Detecting change in rate of input reception
US8564684B2 (en)*2011-08-172013-10-22Digimarc CorporationEmotional illumination, and related arrangements
US20130054617A1 (en)2011-08-302013-02-28Alison Williams ColmanLinking Browser Content to Social Network Data
US20130091409A1 (en)2011-10-072013-04-11Agile Insights, LlcMethod and system for dynamic assembly of multimedia presentation threads
US20130298030A1 (en)2011-11-032013-11-07Aaron NahumiSystem, methods and computer readable medium for Augmented Personalized Social Network
US20130132359A1 (en)2011-11-212013-05-23Michelle I. LeeGrouped search query refinements
US20130159920A1 (en)2011-12-202013-06-20Microsoft CorporationScenario-adaptive input method editor
US20130346872A1 (en)2012-06-252013-12-26Microsoft CorporationInput method editor application platform
US20150106702A1 (en)2012-06-292015-04-16Microsoft CorporationCross-Lingual Input Method Editor
US20150121291A1 (en)2012-06-292015-04-30Microsoft CorporationInput Method Editor
US20140040238A1 (en)2012-08-062014-02-06Microsoft CorporationBusiness Intelligent In-Document Suggestions
US20150161126A1 (en)2012-08-302015-06-11Microsoft CorporationFeature-Based Candidate Selection

Non-Patent Citations (104)

* Cited by examiner, † Cited by third party
Title
"Database", Microsoft Computer Dictionary, Fifth Edition, retrieved on May 13, 2011, at <<http://academic.safaribooksonline.com/book/communications/0735614954>>, Microsoft Press, May 1, 2002, 2 pages.
"Database", Microsoft Computer Dictionary, Fifth Edition, retrieved on May 13, 2011, at >, Microsoft Press, May 1, 2002, 2 pages.
"English Assistant", Published on: Apr. 19, 2013, available at: <<http://bing.msn.cn/pinyin/>>, 2 pages.
"English Assistant", Published on: Apr. 19, 2013, available at: >, 2 pages.
"Google Releases Pinyin Converter", retrieved from <<http://blogoscoped.com/archive/2007-04-04-n49.html>>, Apr. 2007, 3 pages.
"Google Releases Pinyin Converter", retrieved from >, Apr. 2007, 3 pages.
"Google Scribe," retrieved at <<http://www.scribe.googlelabs.com/>>, retrieved date: Feb. 3, 2011, 1 page.
"Google Scribe," retrieved at >, retrieved date: Feb. 3, 2011, 1 page.
"Google Scribe," retrieved on Feb. 3, 2011 at <<http://www.scribe.googlelabs.com/>>, 1 page.
"Google Scribe," retrieved on Feb. 3, 2011 at >, 1 page.
"Google Transliteration Input Method (IME) Configuration", retrieved at <<http://www.technicstoday.com/2010/02/google-transliteration-input-method-ime-configuration/>>, Feb. 2010, 13 pages.
"Google Transliteration Input Method (IME) Configuration", retrieved at >, Feb. 2010, 13 pages.
"Innovative Chinese Engine", Published on: May 2, 2013, available at: <<http://bing.msn.cn/pinyin/help.shtml>>, 6 pages.
"Innovative Chinese Engine", Published on: May 2, 2013, available at: >, 6 pages.
"Input Method (IME)", Retrieved on: Jul. 3, 2013, available at: <<http://www.google.co.in/inputtools/cloud/features/input-method.html>>, 6 pages.
"Input Method (IME)", Retrieved on: Jul. 3, 2013, available at: >, 6 pages.
"Microsoft Computer Dictionary", Fifth Edition, retrieved on May 13, 2011, at <<http://academic.safaribooksonline.com/book/communications/0735614954>>, Microsoft Press, May 1, 2002, 2 pages.
"Microsoft Computer Dictionary", Fifth Edition, retrieved on May 13, 2011, at >, Microsoft Press, May 1, 2002, 2 pages.
"Microsoft Research ESL Assistant," retrieved at <<http://www.eslassistant.com/>>, retrieved date Feb. 3, 2011, 1 page.
"Microsoft Research ESL Assistant," retrieved at >, retrieved date Feb. 3, 2011, 1 page.
"Microsoft Research ESL Assistant," retrieved on Feb. 3, 2011 at <<http://www.eslassistant.com/>>, 1 page.
"Microsoft Research ESL Assistant," retrieved on Feb. 3, 2011 at >, 1 page.
"Prose", Dictionary.com, Jun. 19, 2014, 2 pgs.
"Search Engine", Microsoft Computer Dictionary, Mar. 2002 , Fifth Edition, p. 589.
Ben-Haim, et al., "Improving Web-based Image Search via Content Based Clustering", Proceedings of the 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW '06), IEEE, Jun. 17, 2006, 6 pages.
Berg, et al., "Animals on the Web", Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), vol. 2, IEEE, Jun. 17, 2006, pp. 1463-1470.
Ciccolini, "Baidu IME More Literate in Chinese Input," Published Sep. 15, 2011, http://www.itnews-blog.com/it/81630.html, 4 pgs.
Ciccolini, Ramiro, "Baidu IME More literate in Chinese input," Published Sep. 15, 2011, retrieved at << http://www.itnews-blog.com/it/81630.html>>, 4 pages.
Ciccolini, Ramiro, "Baidu IME More literate in Chinese input," Published Sep. 15, 2011, retrieved at >, 4 pages.
Damper, "Self-Learning and Connectionist Approaches to Text-Phoneme Conversion", UCL Press, Connectionist Models of Memory and Language, Dec. 1995, 30 pages.
Dinamik-Bot, et al., "Input method", retrieved on May 6, 2015 at <<http://en.wikipedia.org/w/index.php?title=Input-method&oldid=496631911>>, Wikipedia, the free encyclopedia, Jun. 8, 2012, 4 pages.
Engkoo Pinyin Redefines Chinese Input, Published on: May 13, 2013, available at: <<http://research.microsoft.com/en-us/news/features/engkoopinyinime-051313.aspx>>, 3 pages.
Final Office Action for U.S. Appl. No. 13/109,021, mailed on Jan. 11, 2013, Scott et al., "Network Search for Writing Assistance", 16 pages.
Gamon et al., "Using Statistical Techniques and Web Search to Correct ESL Errors," CALICO Journal, vol. 26, No. 3, May 2009, retrieved from <<http://research.microsoft.com/pubs/81312/Calico-published.pdf>>, 21 pages.
Guo et al., "NaXi Pictographs Input Method and WEFT", Journal of Computers, vol. 5, No. 1, Jan. 2010, pp. 117-124.
International Search Report & Written Opinion for PCT Patent Application No. PCT/CN2013/081156, mailed May 5, 2014; filed Aug. 9, 2013, 14 pages.
International Search Report & Written Opinion for PCT Patent Application No. PCT/US2013/053321, Mailed Date: Oct. 1, 2013, Filed Date: Aug. 2, 2013, 9 pages.
Komasu et al., "Corpus-based Predictive Text Input", Proceedings of the Third International Conference on Active Media Technology, May 2005, 6 pages.
Kumar, "Google launched Input Method editor-type anywhere in your language", retrieved at <<http://shoutingwords.com/google-launched-input-method-editor.html>>, Mar. 2010, 12 pages.
Kumar, "Google launched Input Method editor-type anywhere in your language", retrieved at >, Mar. 2010, 12 pages.
Kumar, "Google launched Input Method editor-type anywhere in your language", retrieved at on Apr. 2, 2012 from <<http://shoutingwords.com/ google-launched-input-method-editor.htm>>, 12 pages.
Kumar, "Google launched Input Method editor-type anywhere in your language", retrieved at on Apr. 2, 2012 from >, 12 pages.
Lo, et al., "Cross platform CJK input Method Engine", IEEE International Conference on Systems, Man and Cybernetics, Oct. 6, 2002, 6 pages.
Miessler, "7 Essential Firefox Quicksearches", Retrieved from <<https:danielmiessler.com/blog/7-essential-firefox-quicksearches/>>, Published Aug. 19, 2007, 2 pages.
Miessler, "7 Essential Firefox Quicksearches", Retrieved from >, Published Aug. 19, 2007, 2 pages.
Millward, "Baidu Japan Acquires Simeji Mobile App Team, for Added Japanese Typing Fun", retrieved from <<http://www.techinasia.com/baidu-japan-simeiji/>>, Dec. 13, 2011, 3 pages.
Millward, "Baidu Japan Acquires Simeji Mobile App Team, for Added Japanese Typing Fun", retrieved from >, Dec. 13, 2011, 3 pages.
Mohan et al., "Input Method Configuration Overview," Jun. 3, 2011, retrieved at <<http://gameware.autodesk.com/documents/gfx-4.0-ime.pdf>>, 71 pages.
Non-Final Office Action for U.S. Appl. No. 13/331,023, mailed Aug. 4, 2014, Matthew Robert Scott et al., "Scenario-Adaptive Input Method Editor", 20 pages.
Office action for U.S. Appl. No. 12/693,316, mailed on Jun. 19, 2013, Huang et al., "Phonetic Suggestion Engine", 20 pages.
Office action for U.S. Appl. No. 12/693,316, mailed on May 19, 2014, Huang et al., "Phonetic Suggestion Engine", 22 pages.
Office action for U.S. Appl. No. 12/693,316, mailed on Oct. 16, 2014, Huang, et al., "Phonetic Suggestion Engine", 24 pages.
Office action for U.S. Appl. No. 12/693,316, mailed on Oct. 30, 2013, Huang, et al., "Phonetic Suggestion Engine", 24 pages.
Office Action for U.S. Appl. No. 13/109,021, mailed on Aug. 21, 2012, Scott et al., "Network Search for Writing Assistance", 19 pages.
Office Action for U.S. Appl. No. 13/109,021, mailed on Jun. 19, 2014, Dyer, A.R., "Network Search for Writing Assistance," 42 pages.
Office Action for U.S. Appl. No. 13/109,021, mailed on Mar. 11, 2014, Dyer, A.R., "Network Search for Writing Assistance," 18 pages.
Office Action for U.S. Appl. No. 13/109,021, mailed on Sep. 25, 2013, Scott et al., "Network Search for Writing Assistance", 18 pages.
Office Action for U.S. Appl. No. 13/109,021, mailed on Sep. 30, 2014, Dyer, A.R., "Network Search for Writing Assistance," 17 pages.
Office action for U.S. Appl. No. 13/331,023, mailed on Nov. 20, 2015, Scott et al., "Scenario-Adaptive Input Method Editor", 25 pages.
Office action for U.S. Appl. No. 13/331,023, mailed on Feb. 12, 2015, Scott et al., "Scenario-Adaptive Input Method Editor", 20 pages.
Office action for U.S. Appl. No. 13/331,023, mailed on Jun. 26, 2015, Scott et al., "Scenario-Adaptive Input Method Editor", 23 pages.
Office action for U.S. Appl. No. 13/567,305, mailed on Jan. 30, 2014, Scott, et al., "Business Intelligent In-Document Suggestions", 14 pages.
Office action for U.S. Appl. No. 13/586,267, mailed on Nov. 6, 2015, Scott et al., "Input Method Editor Application Platform", 22 pages.
Office action for U.S. Appl. No. 13/586,267, mailed on Jan. 2, 2015, Scott et al., "Input Method Editor Application Platform", 19 pages.
Office action for U.S. Appl. No. 13/586,267, mailed on Jul. 31, 2014, Scott et al., "Input Method Editor Application Platform", 20 pages.
Office action for U.S. Appl. No. 13/586,267, mailed on May 8, 2015, Scott et al., "Input Method Editor Application Platform", 18 pages.
Office action for U.S. Appl. No. 13/635,219, mailed on Mar. 13, 2015, Scott et al., "Cross-Lingual Input Method Editor", 21 pages.
Office action for U.S. Appl. No. 13/635,219, mailed on Sep. 29, 2015, Scott et al., "Cross-Lingual Input Method Editor", 14 pages.
Office action for U.S. Appl. No. 13/635,306, mailed on Aug. 14, 2015, Scott et al., "Input Method Editor", 26 pages.
Office action for U.S. Appl. No. 13/635,306, mailed on Mar. 27, 2015, Scott et al., "Input Method Editor", 18 pages.
Office action for U.S. Appl. No. 13/701,008, mailed on Jun. 15, 2015, Wang et al., "Feature-Based Candidate Selection", 17 pages.
Office action for U.S. Appl. No. 13/701,008, mailed on May 12, 2015, Wang et al., "Feature-Based Candidate Selection", 12 pages.
PCT International Preliminary Report on Patentability mailed Mar. 12, 2015 for PCT Application No. PCT/CN2012/080749, 8 pages.
Scott, et al., "Engkoo: Mining the Web for Language Learning", In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Systems Demonstrations, Jun. 21, 2011, 6 pages.
Sowmya, et al., "Transliteration Based Text Input Methods for Telugu", Proceedings of 22nd International Conference on Computer Processing of Oriental Languages. Language Technology for the Knowledge-based Economy (ICCPOL), Mar. 2009, pp. 122-132.
Suematsu et al., "Network-Based Context-Aware Input Method Editor," Proceedings: Sixth International Conference on Networking and Services (ICNS), Mar. 7, 2010, 6 pages.
Suzuki et al., "A Comparative Study on Language Model Adaptation Techniques using New Evaluation Metrics," Proceedings of HLT/EMNLP, Vancouver, Oct. 6-8, 2005, retrieved at <<http://www.aclweb.org/anthology/H/H05/H05-1034.pdf>>, 8 pages.
The European Office Action mailed Jun. 18, 2015 for European patent application No. 12879676.0, a counterpart foreign application of U.S. Appl. No. 13/635,306, 5 pages.
The European Office Action mailed Nov. 27, 2015 for European patent application No. 12880149.5, a counterpart foreign application of U.S. Appl. No. 13/635,219, 10 pages.
The European Office Action mailed Oct. 8, 2015 for European patent application No. 12879804.8, a counterpart foreign application of U.S. Appl. No. 13/586,267, 9 pages.
The Partial Supplementary European Search Report mailed Oct. 26, 2015 for European patent application No. 12883902.4, 7 pages.
The Supplementary European Search Report mailed Jul. 16, 2015 for European patent application No. 12880149.5, 5 pages.
The Supplementary European Search Report mailed Sep. 14, 2015 for European patent application No. 12879804.8, 5 pages.
The Supplementary European Search Report mailed May 20, 2015 for European Patent Application No. 12879676.0, 3 pages.
The Supplementary European Search Report mailed Nov. 12, 2015 for European patent application No. 12880149.5, 7 pages.
U.S. Appl. No. 12/960,258, filed Dec. 3, 2010, Wei et al., "Wild Card Auto Completion," 74 pages.
U.S. Appl. No. 13/109,021, filed May 17, 2011, Scott et al., "Network Search for Writing Assistance," 43 pages.
U.S. Appl. No. 13/331,023, filed Dec. 20, 2011, Scott et al., "Scenario-Adaptive Input Method Editor".
U.S. Appl. No. 13/635,219, filed Sep. 14, 2011, Scott et al., "Cross-Lingual Input Method Editor", 43 pages.
Wikipedia, "Selection Based Search", retrieved on Mar. 23, 2012 at <<http://en.wikipedia.org/wiki/Selection based search>>, 3 pgs.
Wikipedia, "Selection Based Search", retrieved on Mar. 23, 2012 at >, 3 pgs.
Wikipedia, "Soundex", retrieved on Jan. 20, 2010 at <<http://en.wikipedia.org/wiki/soundex>>, 3 pgs.
Wikipedia, "Soundex", retrieved on Jan. 20, 2010 at >, 3 pgs.
Windows XP Chinese Pinyin Setup, published Apr. 15, 2006, retrieved at <<http://www.pinyinjoe.com/pinyin/pinyin-setup.htm>>, pp. 1-10.

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20150263997A1 (en) * | 2014-03-14 | 2015-09-17 | Microsoft Corporation | Instant Messaging
US10021044B2 (en) | 2014-03-14 | 2018-07-10 | Microsoft Technology Licensing, Llc | Instant messaging
US11544452B2 (en) * | 2016-08-10 | 2023-01-03 | Airbnb, Inc. | Generating templates for automated user interface components and validation rules based on context
US20200050306A1 (en) * | 2016-11-30 | 2020-02-13 | Microsoft Technology Licensing, Llc | Sentiment-based interaction method and apparatus
US20220198531A1 (en) * | 2020-12-17 | 2022-06-23 | Kyndryl, Inc. | Pre-packaging and pre-configuration of software products using chatbots
US11562410B2 (en) * | 2020-12-17 | 2023-01-24 | Kyndryl, Inc. | Pre-packaging and pre-configuration of software products using chatbots

Also Published As

Publication number | Publication date
US20130152000A1 (en) | 2013-06-13

Similar Documents

Publication | Title
US9348479B2 (en) | Sentiment aware user interface customization
US11809829B2 (en) | Virtual assistant for generating personalized responses within a communication session
US11106870B2 (en) | Intelligent text enhancement in a computing environment
CN101840414B (en) | Apparatus and method for making cartoons from network text
US8825533B2 (en) | Intelligent dialogue amongst competitive user applications
US8332752B2 (en) | Techniques to dynamically modify themes based on messaging
US20190325012A1 (en) | Phased collaborative editing
US11733823B2 (en) | Synthetic media detection and management of trust notifications thereof
US12153893B2 (en) | Automatic tone detection and suggestion
US20240295953A1 (en) | Prompt modification for automated image generation
US11169667B2 (en) | Profile picture management tool on social media platform
US10678855B2 (en) | Generating descriptive text contemporaneous to visual media
US20250045526A1 (en) | Methods for Emotion Classification in Text
CN116304007A (en) | Information recommendation method, device, storage medium and electronic equipment
US10623346B2 (en) | Communication fingerprint for identifying and tailoring customized messaging
US20250077783A1 (en) | Methods and systems for avoiding offensive language based on personas
US12321831B1 (en) | Automated detection of content generated by artificial intelligence
KR20210039618A (en) | Apparatus for processing a message that analyzes and provides feedback expression items
WO2024035416A1 (en) | Machine-learned models for multimodal searching and retrieval of images
CN112507684A (en) | Method and device for detecting original text, electronic equipment and storage medium
Song et al. | Large language models for subjective language understanding: A survey
KR20210039626A (en) | Program for processing a message that analyzes and provides feedback expression items
KR20210039608A (en) | Recording Medium
KR20210039621A (en) | Program for processing a message that analyzes and provides feedback expression items
CN117520489B (en) | AIGC-based interactive method, device, equipment and storage medium

Legal Events

Date | Code | Title | Description
AS | Assignment

Owner name:MICROSOFT CORPORATION, WASHINGTON

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIU, WEIPENG;SCOTT, MATTHEW ROBERT;HOU, HUIHUA;AND OTHERS;SIGNING DATES FROM 20111201 TO 20111207;REEL/FRAME:027350/0682

AS | Assignment

Owner name:MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034544/0541

Effective date:20141014

STCF | Information on status: patent grant

Free format text:PATENTED CASE

MAFP | Maintenance fee payment

Free format text:PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment:4

MAFP | Maintenance fee payment

Free format text:PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment:8

