CROSS-REFERENCE TO RELATED APPLICATIONS
This is a continuation of application Ser. No. 14/815,420, filed Jul. 31, 2015, which claims priority from Korean Patent Application Nos. 10-2015-0026750, filed on Feb. 25, 2015, and 10-2014-0098634, filed on Jul. 31, 2014, in the Korean Intellectual Property Office, the disclosures of which are incorporated herein in their entireties by reference.
BACKGROUND
1. Field
Apparatuses and methods consistent with exemplary embodiments relate to providing content to a user during a conversation by using an electronic device.
2. Description of the Related Art
As the distribution rate of portable terminals has rapidly increased in recent years, portable terminals have become a daily necessity for contemporary people. Portable terminals provide various functions, including not only a voice call service, which is their primary function, but also various kinds of additional services, such as data transfer.
In particular, users have come to communicate actively by using smart terminals. Following this trend, various technologies have been developed to provide more convenient environments for such communication.
SUMMARY
One or more exemplary embodiments provide a method of providing convenient communication environments by using various electronic devices.
According to an aspect of an exemplary embodiment, there is provided an electronic device providing a message service including: a communication unit configured to exchange a message with another device; a display configured to display a message received from the other device, a message input by the electronic device, and at least one response message generated by the electronic device, all together on a message service screen of the display; and a processor configured to determine whether the received message includes an inquiry, provide at least one piece of recommended content based on the inquiry in response to the received message being determined as including the inquiry, and transmit the at least one response message comprising the at least one piece of recommended content to the other device.
The processor may be further configured to determine whether the electronic device stores data used to generate the at least one piece of recommended content, and obtain the data based on a result of the determination.
The processor may be further configured to determine a keyword associated with the at least one piece of recommended content, and obtain content corresponding to the keyword.
The processor may be further configured to obtain the at least one piece of recommended content based on relational data between a user of the other device and a user of the electronic device.
The relational data may include at least one of data stored in the electronic device, data stored in a server communicating with the electronic device, and data stored in the other device.
The display may be further configured to display the at least one piece of recommended content based on a user input requesting the at least one piece of recommended content in response to a settings menu being set to manually recommend the at least one piece of recommended content, display the at least one piece of recommended content based on a user input indicated in the received message in response to the settings menu being set to semi-automatically recommend the at least one piece of recommended content, and display the at least one piece of recommended content without a user input when the electronic device obtains the at least one piece of content by recognizing the received message in response to the settings menu being set to automatically recommend the at least one piece of recommended content.
The processor may be further configured to obtain the at least one piece of recommended content based on types of words included in the message, relations among the words, and meanings of the words.
The processor may be further configured to obtain the at least one piece of recommended content based on a relation between a user of the electronic device and a user of the other device, which is set based on a user input.
The at least one piece of recommended content may be obtained from an application installed in the electronic device.
The processor may be further configured to extract at least one keyword from the received message to determine whether the received message includes an inquiry.
According to an aspect of another exemplary embodiment, there is provided a method of providing content to another device, via an electronic device providing a message service including: receiving a message from the other device; displaying the received message on a message service screen; determining whether the received message includes an inquiry; providing at least one piece of recommended content based on the received message; and transmitting a response message comprising the at least one piece of recommended content to the other device.
The providing the at least one piece of recommended content may include determining whether the electronic device stores data used to generate the at least one piece of recommended content, and obtaining the data based on a result of the determination.
The providing the at least one piece of recommended content may include determining a keyword associated with the at least one piece of recommended content and obtaining content corresponding to the keyword.
The providing the at least one piece of recommended content may include obtaining the at least one piece of recommended content based on relational data between a user of the electronic device and a user of the other device.
The relational data may include at least one of data stored in the electronic device, data stored in a server communicating with the electronic device, and data stored in the other device.
The providing the at least one piece of recommended content may include: displaying the at least one piece of recommended content based on a user input requesting the at least one piece of content in response to a settings menu being set to manually recommend the at least one piece of recommended content; displaying the at least one piece of recommended content based on a user input indicated in the received message in response to the settings menu being set to semi-automatically recommend the at least one piece of recommended content; and displaying the at least one piece of recommended content without a user input when the electronic device obtains the at least one piece of content by recognizing the received message in response to the settings menu being set to automatically recommend the at least one piece of recommended content.
The providing the at least one piece of recommended content may include obtaining the at least one piece of recommended content based on types of words included in the message, relations among the words, and meanings of the words.
The providing the at least one piece of recommended content may include obtaining the at least one piece of recommended content based on a relation between a user of the electronic device and a user of the other device, which is set based on a user input.
The at least one piece of recommended content may be obtained from an application installed in the electronic device.
The providing the at least one piece of recommended content may include extracting at least one keyword from the received message to determine whether the received message includes an inquiry.
According to an aspect of an exemplary embodiment, there is provided a method of providing a social networking service by a server including: displaying a message posted by a first device through an application or a website; recognizing a user input from a second device that activates an input field displayed on the application or the website; determining whether the posted message includes an inquiry; generating a recommended response based on information of relationship between a user of the first device and a user of the second device in response to the posted message including the inquiry; and providing the recommended response to the second device.
The determining whether the posted message includes the inquiry may include determining whether a sentence of the message starts with an interrogative adverb, determining whether a subject and a verb of the sentence are inverted, and determining whether the sentence includes a question mark.
The recommended response may include a plurality of contents that belong to at least two different categories, and the at least two different categories include an application and a picture.
The method may further include determining user preference of the at least two different categories based on the number of times that each of the categories is selected by the second device, and displaying the plurality of contents in the order of the user preference.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and/or other aspects will be more apparent by describing certain exemplary embodiments, with reference to the accompanying drawings, in which:
FIG. 1 is a diagram for describing a content providing system according to an exemplary embodiment;
FIG. 2 is a view of a user interface (UI) of an electronic device, according to an exemplary embodiment;
FIG. 3 is a flowchart for describing a method of providing content via an electronic device, according to an exemplary embodiment;
FIGS. 4 through 7 are views for describing a method of providing content according to an exemplary embodiment;
FIG. 8 is a view for describing various types of electronic devices and various chatting methods, which apply a method of providing content according to exemplary embodiments;
FIG. 9A is a view for describing a knowledge framework configured to provide response related content, which is included in an electronic device;
FIG. 9B is a view for describing an operation of each of components of an electronic device according to an exemplary embodiment;
FIGS. 10A through 10D are views of an environment setting user interface which is provided by an electronic device according to an exemplary embodiment;
FIGS. 11A through 11D are views for describing a method of providing response related content via an electronic device;
FIG. 12A is a flowchart for describing a method of providing response related content via an electronic device;
FIG. 12B is a view of a module included in each of an electronic device and a server;
FIG. 13A is a flowchart for describing a method of providing response related content via an electronic device;
FIG. 13B is a view of a module included in each of an electronic device and a server;
FIG. 14A is a flowchart for describing a method of providing response related content via an electronic device;
FIG. 14B is a view of a module included in each of an electronic device and a server;
FIG. 15 is a view for describing a method of utilizing content stored in a database of a server to understand a relationship between users;
FIG. 16 is a view for describing a method of utilizing content stored in a database of an electronic device to understand a relationship between users;
FIG. 17 is a view for describing a method of utilizing content stored in a database of an electronic device and content stored in a database of a server, in order to understand a relationship between a user of the electronic device and a user of a device of another user;
FIGS. 18, 19A through 19D, 20A through 20D, 21, 22A, 22B, 23A through 23C, and 24A through 24E are views for describing a method of providing content according to an exemplary embodiment;
FIGS. 25A through 25E are views of content providing screens, when content which may be used for a response is provided, according to an exemplary embodiment;
FIG. 26 is a view of an electronic device providing an interface via which a content providing layout is changed;
FIGS. 27 through 37 are views for describing a method of providing content according to an exemplary embodiment;
FIGS. 38 through 42, 43A, 43B, 44, 45, 46A, 46B, 47, and 48 are views for describing a method of providing content according to an exemplary embodiment;
FIGS. 49A through 49D are block diagrams of components of a user's terminal according to an exemplary embodiment, wherein the user's terminal may be the electronic device of FIG. 1;
FIGS. 50A through 50D, 51A through 51D, 52A through 52D, and 53A through 53C, 53D1, and 53D2 are views for describing an overall method of providing content according to an exemplary embodiment;
FIGS. 54A through 54D are views for describing a method of providing content according to an exemplary embodiment;
FIGS. 55A through 55D are views for describing a method of providing content according to an exemplary embodiment;
FIG. 56 is a block diagram of software components of a user's terminal according to an exemplary embodiment;
FIG. 57 is a view of a user interface (UI) of an electronic device according to an exemplary embodiment; and
FIG. 58 is a view of a user interface (UI) of an electronic device according to an exemplary embodiment.
DETAILED DESCRIPTION
Exemplary embodiments are described in greater detail below with reference to the accompanying drawings.
In the following description, like drawing reference numerals are used for like elements, even in different drawings. The matters defined in the description, such as detailed construction and elements, are provided to assist in a comprehensive understanding of the exemplary embodiments. However, it is apparent that the exemplary embodiments can be practiced without those specifically defined matters. Also, well-known functions or constructions are not described in detail since they would obscure the description with unnecessary detail.
When a part “includes” or “comprises” an element, unless there is a particular description contrary thereto, the part can further include other elements, not excluding the other elements. Throughout the specification, it will be understood that when an element is referred to as being “connected” to another element, it may be “directly connected” to the other element or “electrically connected” to the other element with intervening elements therebetween. In addition, terms such as “ . . . unit,” “ . . . module,” or the like refer to units that perform at least one function or operation, and the units may be implemented as hardware or software or as a combination of hardware and software.
Throughout the specification, a term such as “user” may refer to a user of an electronic device. Throughout the specification, a term such as “message service” may refer to a one to one, one to many, or many to many service via which users may easily exchange messages, such as a conversation of the users.
Throughout the specification, a term “application” refers to a set of a series of computer programs designed to execute specific tasks. The application in this specification may vary. The application may include a game application, a video playing application, a map application, a memo application, a calendar application, a phone book application, a broadcasting application, a sport support application, a payment application, a picture folder application, etc. However, the application is not limited thereto. The application may also be referred to as an App.
Expressions such as “at least one of,” when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list.
In this specification, a term “message” may refer to a group of text units or a group of sound units including at least one selected from one or more words, one or more phrases, and one or more clauses, as part of a conversation exchanged between users.
In this specification, a term “keyword” may refer to a word, a phrase, and a clause related to a meaning of a message, the meaning being obtained by performing a natural language analysis with respect to the message. The keyword may refer to not only the word, the phrase, and the clause included in the message, but also a word, a phrase, and a clause which are not included in the message.
In this specification, a term “content” may refer to data, files, software, and information, including a video, a sound, and a text, which transfer information via an electronic device. For example, image content may refer to image data transferred via the electronic device. For example, the content may include a two-dimensional image, a three-dimensional image, a two-dimensional video, a three-dimensional video, text responses in various languages, and content related to various application services.
FIG. 1 is a diagram for describing a content providing system 10 according to an exemplary embodiment.
As illustrated in FIG. 1, the content providing system 10 may include an electronic device 100, a device of another user 200, and a server 300.
The electronic device 100 according to an exemplary embodiment may exchange a text message or a voice message with the device of another user 200. Also, the electronic device 100 may exchange the text message or the voice message with the device of another user 200 via the server 300.
In addition, the electronic device 100 may request and obtain various types of data from the server 300 and may transmit various types of data to the server 300. For example, the data that the electronic device 100 obtains from the server 300 may be data that is exchanged between the device of another user 200 and the server 300.
The content providing system 10 may be embodied by more or fewer components than the illustrated components. For example, according to another exemplary embodiment, the server 300 may not be included in the content providing system 10.
Hereinafter, the electronic device 100, the device of another user 200, and the server 300 will be described in greater detail.
The electronic device 100 according to an exemplary embodiment may exchange a message with the device of another user 200.
In this specification, the electronic device 100 may be realized as smart phones, tablets, personal computers (PCs), wearable devices, personal digital assistants (PDAs), laptop computers, cell phones, mobile phones, enterprise digital assistants (EDAs), portable multimedia players (PMPs), personal navigation devices or portable navigation devices (PNDs), handheld game consoles, mobile internet devices (MIDs), or electronic books (e-books).
The wearable device may include a head mounted display (HMD) which may be worn on the head. For example, the HMD may include glasses, helmets, hats, etc., but is not limited thereto. The wearable device may also be realized as a ring, a necklace, a bracelet, shoes, earrings, a hair band, clothes, gloves, a thimble, etc.
The device of another user 200 may be a device which receives a first message generated in the electronic device 100 and displays the first message in an output interface. Also, the device of another user 200 may generate a second message according to a user input and transmit the second message to the electronic device 100.
According to an exemplary embodiment, the device of another user 200 may receive the first message from the electronic device 100 via the server 300, and may transmit the second message to the electronic device 100 via the server 300. According to another exemplary embodiment, the device of another user 200 may directly receive the first message from the electronic device 100 without the server 300, and may directly transmit the second message to the electronic device 100 without the server 300. According to another exemplary embodiment, the device of another user 200 may be a single device or a plurality of devices.
The device of another user 200 according to exemplary embodiments may be realized as various types. For example, the device of another user 200 may be realized as smart phones, tablet PCs, wearable devices, personal digital assistants (PDAs), laptop computers, cell phones, mobile phones, enterprise digital assistants (EDAs), portable multimedia players (PMPs), personal navigation devices or portable navigation devices (PNDs), handheld game consoles, mobile internet devices (MIDs), or electronic books (e-books).
The server 300 may perform communication with the electronic device 100 or the device of another user 200. For example, the server 300 may receive the first message generated in the electronic device 100 from the electronic device 100, and may receive the second message generated in the device of another user 200 from the device of another user 200. Also, the server 300 may transmit the first message to the device of another user 200 and transmit the second message to the electronic device 100.
The server 300 may manage the message received from the electronic device 100 or the device of another user 200. For example, the server 300 may store the exchanged message in a message database (DB) of each device. Also, the server 300 may renew the message DB. The server 300 may renew the message DB at regular intervals, or may renew the message DB whenever a new message is received from the electronic device 100 or the device of another user 200.
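For illustration only, the following minimal Python sketch shows one way such a per-device message DB might be kept and renewed whenever a new message arrives; the MessageDB class and its method names are hypothetical and are not part of this disclosure.

    import time
    from collections import defaultdict

    class MessageDB:
        """Hypothetical per-device message store (illustrative only)."""

        def __init__(self):
            # One message list per device identifier.
            self._messages = defaultdict(list)

        def renew(self, device_id, text):
            # Renew the DB whenever a new message is received from a device;
            # a server might instead renew at regular intervals.
            self._messages[device_id].append({"text": text, "ts": time.time()})

        def history(self, device_id):
            # Return the stored conversation for one device.
            return list(self._messages[device_id])

    db = MessageDB()
    db.renew("electronic_device_100", "where are you?")
    print(len(db.history("electronic_device_100")))  # 1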
FIG. 2 is a view illustrating a user interface (UI) of the electronic device 100 according to an exemplary embodiment.
Referring to FIG. 2, the electronic device 100 according to an exemplary embodiment may extract a keyword from the message received from the device of another user 200. Also, based on the keyword, the electronic device 100 may obtain at least one piece of content which may be used when responding to the message. Also, the electronic device 100 may display the at least one piece of content that is obtained.
The electronic device 100 may provide an interface, such as a conversation window including a back button 20, a call button 30, a user name box 40, and messages 50, 51, 52, and 53. The back button 20 and the call button 30 may be displayed on the user name box 40.
The user may touch or double tap the back button 20 to go back to a previous menu. The user may touch or double tap the call button 30 to start voice chatting or a voice call. The user name box 40 may display a name or a nickname of a conversation partner. Also, the user name box 40 may display a name or a nickname of the user of the electronic device 100. The conversation window may display the messages 50, 51, 52, and 53 exchanged between the user of the electronic device 100 and the conversation partner.
The electronic device 100 may determine whether each message 50, 51, 52, or 53 is a question or not. The electronic device 100 may extract a keyword from the message 50, 51, 52, or 53, when any of the messages 50, 51, 52, and 53 is a question. Based on the extracted keyword, the electronic device 100 may obtain the content which may be used when responding to the message.
Via a pop-up window 60, the electronic device 100 may receive a user input regarding whether to display the content obtained with respect to each keyword. The electronic device 100 may display the obtained content when the user touches or taps a first response button 61. The electronic device 100 may not display the obtained content when the user touches or taps a second response button 62.
When any of the messages 50, 51, 52, and 53 is a question, the electronic device 100 may obtain the content which may be used in responding to the question, before activating the pop-up window 60. Alternatively, when any of the messages 50, 51, 52, and 53 is a question, the electronic device 100 may obtain the content which may be used in responding to the question, after activating the pop-up window 60. Alternatively, when any of the messages 50, 51, 52, and 53 is a question, the electronic device 100 may obtain the content which may be used in responding to the message after the user touches the first response button 61, after the pop-up window 60 is activated.
FIG. 3 is a flowchart for describing a method of providing content via the electronic device 100 according to an exemplary embodiment.
1.1. Message Reception
Referring to FIG. 3, the electronic device 100 may receive a message from the device of another user 200 in operation S110.
The electronic device 100 may receive the messages 50, 51, 52, and 53 from the device of another user 200. Alternatively, the electronic device 100 may receive the messages 50, 51, 52, and 53 from the device of another user 200 via the server 300.
The electronic device 100 may determine whether each of the messages 50, 51, 52, and 53 includes a question. The electronic device 100 may determine whether each of the messages 50, 51, 52, and 53 includes a question by using a semantic analysis method and a statistical analysis method, which will be described later.
1.2. Keyword Extraction
The electronic device 100 may extract a keyword from the message received from the device of another user 200 in operation S130.
The electronic device 100 may extract the keyword from the message by using the semantic analysis method and the statistical analysis method.
The electronic device 100 may extract the keyword by using the semantic analysis. The electronic device 100 may determine whether a given sentence is demanding an answer or is providing some information. The electronic device 100 may analyze the sentence to determine whether the sentence starts with an interrogative adverb (e.g., who, what, when, where, why, which, how) or a helping verb (e.g., is, are, can, could, did, does, do, have, has, may, might, shall, should, will, would), whether the subject and the verb of the sentence are inverted, and whether the sentence includes a question mark, to determine whether the sentence is an interrogative sentence.
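As a rough, non-limiting illustration, the question test described above might be approximated with surface checks as in the Python sketch below; a real implementation would rely on full natural language analysis, and the function and constant names here are hypothetical.

    INTERROGATIVES = {"who", "what", "when", "where", "why", "which", "how"}
    HELPING_VERBS = {"is", "are", "can", "could", "did", "does", "do",
                     "have", "has", "may", "might", "shall", "should",
                     "will", "would"}

    def is_question(sentence):
        """Heuristic question test following the criteria described above."""
        text = sentence.strip().lower()
        if text.endswith("?"):               # explicit question mark
            return True
        words = text.split()
        first_word = words[0] if words else ""
        if first_word in INTERROGATIVES:     # starts with an interrogative adverb
            return True
        if first_word in HELPING_VERBS:      # a leading helping verb suggests
            return True                      # subject-verb inversion
        return False

    print(is_question("where are you?"))  # True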
For example, the electronic device 100 may receive the message 53 “where are you?” from the device of another user 200. The electronic device 100 may determine the meaning of the message 53 and extract “where” as the keyword. Also, the electronic device 100 may extract “where” as the keyword by further considering conversation content between the users and user-related data. The user-related data may include at least one of content that is input by the user and recorded data regarding the user. The user-related data may denote content related to only one user or content related to two or more users.
The electronic device 100 may extract “where” as the keyword based on at least one selected from the meaning of the message, the content that is input by the user, and the recorded data regarding the user.
The electronic device 100 may perform a natural language analysis with respect to the meaning of the message to extract the keyword. For example, the electronic device 100 may extract “where” as the keyword. The electronic device 100 may analyze and determine the meaning of the message included in the conversation between the users, and may predict content for a response message, based on the meaning of the message. For example, the electronic device 100 may analyze the meaning of the message as a question that is put by the user of the device of another user 200 to ask about a location of the user of the electronic device 100. Based on this meaning, the electronic device 100 may predict that the user of the electronic device 100 may need to provide a response regarding his/her own location to the user of the device of another user 200, and may extract “where” as the keyword.
Also, when the electronic device 100 extracts the keyword based on the meaning of the message, the electronic device 100 may consider the content that is input by the user. For example, the content that is input by the user may include a home address of the user, a company address of the user, a movement path record of the user, and a place according to a schedule of the user. For example, when the user records a business trip plan via a scheduler application, and the content of the message exchanged with the device of another user 200 is related to a business trip location, the electronic device 100 may extract “where” as the keyword.
Also, when the electronic device 100 extracts the keyword based on the meaning of the message, the electronic device 100 may consider the recorded content regarding the user. For example, the recorded content regarding the user may denote a record about the user which is written in the electronic device 100 and the server 300. Also, the record about the user may include a record that is written by the user while the user uses the electronic device 100 and a record that is written in the server while the user uses an App service. Also, the record about the user may include a record that is written indirectly rather than directly by the user while the user uses the electronic device 100 or the server 300.
For example, the record about the user may include content of a call of the user, content of a payment of the user via a credit card, and content written by the user via a social network service (SNS).
The electronic device 100 may extract the keyword by determining the meaning of the message through the statistical analysis.
For example, the electronic device 100 may determine a priority order among predicted situations of the user. For example, the electronic device 100 may determine that one of the predicted situations is more likely to happen than the other situations. For example, the electronic device 100 may determine that one of the candidate keywords is more likely than the other keywords.
The electronic device 100 may extract “where” as the keyword by combining the statistical analysis and the semantic analysis in the above-described manner.
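A minimal sketch of the statistical side, assuming keyword likelihoods are estimated from counts over past conversations, is shown below; the counts and names are placeholders, not data from this disclosure.

    from collections import Counter

    # Hypothetical counts of how often each candidate keyword led to a
    # useful response in past conversations; real estimates would come
    # from the user's conversation logs.
    keyword_counts = Counter({"where": 42, "when": 17, "what": 9})

    def rank_keywords(candidates):
        total = sum(keyword_counts.values())
        # Score each candidate by its empirical likelihood.
        scored = [(kw, keyword_counts[kw] / total) for kw in candidates]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    print(rank_keywords(["when", "where"]))  # "where" ranks first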
1.3. Content Obtainment
The electronic device 100 may obtain content which may be used in responding to the message, based on the extracted keyword, in operation S170.
The electronic device 100 may obtain the content by performing various searches based on the extracted keyword. The content that the electronic device 100 obtains may include a two-dimensional image, a three-dimensional image, a two-dimensional video, a three-dimensional video, text replies in various languages, content of various fields, and content related to applications providing various services.
The electronic device 100 according to an exemplary embodiment may obtain the content related to the keyword from an external search server.
For example, the electronic device 100 may obtain the content related to the application service related to the keyword.
For example, when the received message is a question type message, the electronic device 100 may analyze the message and obtain the content related to a weather application service.
For example, even when the received message is not a question type, if it is recognized as a result of the semantic analysis that the message requires a response, the electronic device 100 may obtain the content by using a matching table or a predetermined rule.
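A matching table of this kind might, purely as an illustration, map keyword categories to content sources as in the sketch below; the table entries are hypothetical.

    # Hypothetical matching table from keyword categories to content sources.
    MATCHING_TABLE = {
        "weather": "weather_application_service",
        "location": "map_application_service",
        "restaurant": "restaurant_application_service",
    }

    def content_source_for(keyword):
        # Fall back to a generic web search when no rule matches.
        return MATCHING_TABLE.get(keyword, "web_search")

    print(content_source_for("location"))  # map_application_service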
For example, the electronic device 100 may determine an application service which the user may use in responding to the message according to the keyword, by considering the conversation content and the user-related data.
For example, when the user is located in Korea, the electronic device 100 may select an application service related to maps which is provided by a service provider of Korean nationality. When the user is located in Japan, the electronic device 100 may select an application service related to maps which is provided by a service provider of Japanese nationality.
Also, for example, the electronic device 100 may select a map application service which is frequently used by the user of the electronic device 100, by referring to an application use frequency of the user.
Also, for example, the electronic device 100 may select an application service suitable to a current situation from among application services which are pre-selected by the user.
For example, the electronic device 100 may select a first application service suitable to the current situation of the user, from among first through fifth application services which are pre-selected by the user. For example, when there are first through fifth restaurant application services which are pre-selected by the user, the electronic device 100 may select at least one restaurant application service suitable to the current situation of the user, from among the first through fifth restaurant application services.
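The selection logic described in the preceding paragraphs could be sketched as follows, assuming each pre-selected application service is a record with a provider country and that per-application usage counts are available; all names and values are illustrative.

    def select_app(candidates, user_country, usage_counts):
        """Pick an application service from the user's pre-selected candidates."""
        # Prefer services from a provider matching the user's current country.
        local = [a for a in candidates if a["country"] == user_country]
        pool = local or candidates
        # Among those, prefer the service the user uses most frequently.
        return max(pool, key=lambda a: usage_counts.get(a["name"], 0))

    apps = [{"name": "kr_maps", "country": "KR"},
            {"name": "jp_maps", "country": "JP"}]
    print(select_app(apps, "KR", {"kr_maps": 12, "jp_maps": 3})["name"])  # kr_maps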
Also, the electronic device 100 may obtain the content by using an internet search service. For example, the electronic device 100 may obtain the content by performing various searches based on the keyword, after determining the internet search service. For example, the internet search service may be a search service which is capable of being accessed only by authorized persons, such as a university library, a thesis search site, or a database of a research institute.
For example, the electronic device 100 may obtain a two-dimensional image corresponding to the keyword, through the search service. For example, the electronic device 100 may obtain the content of various fields by inputting the keyword in the search service as an input value.
According to another exemplary embodiment, the electronic device 100 may obtain the related content stored in the electronic device 100.
For example, the electronic device 100 may obtain a two-dimensional image, a three-dimensional image, a two-dimensional video, a three-dimensional video, text replies in various languages, and data about the content of various fields, which are stored in the electronic device 100.
1.4. Content Provision
The electronic device 100 may provide at least one piece of obtained content in operation S190.
The electronic device 100 may provide the content obtained based on the keyword to the user by various methods.
For example, the electronic device 100 may provide the obtained content to the user by using at least one of a sound, an image, and a text. For example, the method by which the electronic device 100 provides the obtained content may vary according to the type of the electronic device 100. For example, the electronic device 100 may display the obtained content by a screen division method or a screen transformation method, and when the electronic device 100 is a wearable device such as a smart watch, the electronic device 100 may display a summary of the content. For example, the electronic device 100 may present the content as a sound and an image via an avatar, and when the electronic device 100 is a wearable device such as a smart watch, the electronic device 100 may display the summary of the content.
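A loose illustration of this device-dependent presentation, assuming a hypothetical device_type flag and a simple length cut-off for the summary, is given below.

    def present(content, device_type):
        """Choose a presentation form by device type (illustrative only)."""
        if device_type == "smart_watch":
            # A wearable with a small screen receives a summary.
            return content[:40] + "..." if len(content) > 40 else content
        # Other devices may show the full content, e.g. via screen division.
        return content

    print(present("A long description of the user's current location in Seoul.",
                  "smart_watch"))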
2. Scenario 1
FIGS. 4 through 7 are views for describing a method of providing content according to an exemplary embodiment. FIGS. 4 through 7 are views of a user interface according to steps of a scenario in which the electronic device 100 provides a text chatting service and provides an image stored in the electronic device 100 to a user as response related content.
Referring to FIG. 4, Chris, the user of the electronic device 100, is performing text chatting with Hyunjin, the user of the device of another user 200.
The electronic device 100 displays the name of the user of the electronic device 100 in the user name box 40. According to another exemplary embodiment, the electronic device 100 may display the name of the user of the device of another user 200 in the user name box 40.
The electronic device 100 receives the message 50 “Hi” from the device of another user 200 at 8:26 on Aug. 6, 2013. The electronic device 100 sends the message 51 to the device of another user 200 and receives the message 52 from the device of another user 200. The user of the electronic device 100 receives the message 53 “where are you?” from Hyunjin.
The electronic device 100 may determine whether each of the messages 50, 51, 52, and 53 includes a question. For example, the electronic device 100 may determine that each of the messages 50, 51, and 52 is not a question, and that the message 53 is a question.
For example, the electronic device 100 may mark the message which is determined as a question to distinguish the question message from the other messages. For example, the electronic device 100 may mark the message which the electronic device 100 determines as the question in a different color (for example, a fluorescent yellow color) so as to be distinguished from the other messages.
The electronic device 100 may extract a keyword with respect to the message 53. For example, the electronic device 100 may extract “a current location” as the keyword with respect to the message 53. Alternatively, for example, the electronic device 100 may extract “where are you?” and convert it to “where am I?” as the keyword with respect to the message 53. Alternatively, for example, the electronic device 100 may extract “England” or “Seoul” as the keyword with respect to the message 53 by further considering user-related data.
Referring to FIG. 5, the electronic device 100 may display images 71, 72, and 73 related to the message 53. The electronic device 100 may display an image stored in the electronic device 100 or an image stored in a server connected to the electronic device 100. When the electronic device 100 determines the message 53 as a question, the electronic device 100 may display the related images 71, 72, and 73 without an additional input of the user. Alternatively, when the electronic device 100 determines the message 53 as a question, the electronic device 100 may display the related images 71, 72, and 73 as the content which may be used in responding, by considering an input value of the user in the settings.
Referring to FIG. 6, the electronic device 100 may select one of the displayed images by a user's touch input or tap input.
Referring to FIG. 7, the electronic device 100 may send the image 72 selected by the user's touch or tap input to the device of another user 200 as a response, via the message 54.
2.1. Device and Application Capable of Applying the Method
FIG. 8 is a view for describing various types of electronic devices and various chatting methods which are capable of applying a method of providing content according to exemplary embodiments.
Referring to FIG. 8, the method of providing content according to exemplary embodiments may be performed during a communication process via voice chatting or text chatting between the electronic device 100 and the device of another user 200. Also, the method of providing content according to the exemplary embodiments may be performed during a communication process via voice chatting accompanied by a speech to text (STT) function between the electronic device 100 and the device of another user 200. Also, the electronic device 100 may be realized as a smart phone, a tablet PC, a wearable device, etc. Also, any electronic device that may realize a message service, even if it is not one of the types above, may apply the method of providing content according to the exemplary embodiments.
2.2. Specific Components of the Device
FIG. 9A is a view for describing a knowledge framework 120 configured to provide response related content, which is included in the electronic device 100.
The electronic device 100 may realize the knowledge framework 120, a first application 141, and a second application 143, and may store a web content list 145 and a device list 147.
The knowledge framework 120 may include a natural language processing unit (NLU) 121, a session manager 122, a profiler 123, a context analyzer 124, a response generator 125, a content metadata storage 126, and an App register 127.
The NLU 121 may analyze a meaning of each message received by the electronic device 100. The NLU 121 may analyze the meaning of each message by a statistical method. The NLU 121 may analyze each message by the statistical method and the semantic method and may transmit the analyzed message to the context analyzer 124.
The session manager 122 may check a conversation partner, conversation date and time, conversation content, and a conversation environment with respect to the messages exchanged by the electronic device 100 and may analyze the exchanged messages in a group unit. The session manager 122 may analyze the one or more messages exchanged by the electronic device 100 to define the messages as a session.
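For illustration, messages might be grouped into sessions by conversation partner and time gap as in the sketch below; the 30-minute gap and all names are assumptions, not values from this disclosure.

    from datetime import datetime, timedelta

    SESSION_GAP = timedelta(minutes=30)  # hypothetical silence threshold

    def split_sessions(messages):
        """Group time-ordered messages into sessions by partner and time gap."""
        sessions = []
        for msg in messages:
            last = sessions[-1][-1] if sessions else None
            if (last and last["partner"] == msg["partner"]
                    and msg["time"] - last["time"] <= SESSION_GAP):
                sessions[-1].append(msg)   # same conversation, same session
            else:
                sessions.append([msg])     # new partner or a long silence
        return sessions

    msgs = [
        {"partner": "Hyunjin", "time": datetime(2013, 8, 6, 8, 26), "text": "Hi"},
        {"partner": "Hyunjin", "time": datetime(2013, 8, 6, 8, 27), "text": "where are you?"},
        {"partner": "Alex", "time": datetime(2013, 8, 6, 9, 40), "text": "lunch?"},
    ]
    print(len(split_sessions(msgs)))  # 2 sessions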
The profiler 123 may collect and store content about the user of the electronic device or a user who exchanged messages with the user of the electronic device. The profiler 123 may keep, process, and store content written by the user of the electronic device or the user who exchanged messages with the user of the electronic device.
The context analyzer 124 may analyze and determine a meaning of each of the messages exchanged by the electronic device, based on meanings of the messages exchanged by the electronic device. The context analyzer 124 may analyze and determine the meaning of each of the messages exchanged by the electronic device in a unit of one or more messages, the unit being defined by the session manager 122.
The response generator 125 may generate a response with respect to the message. The response generator 125 may generate the content which may be used in responding to the message. The response generator 125 may generate various possible responses and provide the generated responses to the user via an interface screen.
The content metadata storage 126 may include metadata about content. For example, the content metadata storage 126 may include content about an application. For example, the content metadata storage 126 may include the metadata regarding whether the application is a sport-related application or a movie-related application. Also, for example, the content metadata storage 126 may include the metadata about an image. The content metadata storage 126 may include content, such as names of persons appearing in the image, and a relation between the persons and the user of the electronic device.
For example, the content metadata storage 126 may continuously collect content about a web content list 145 on the internet. For example, the content metadata storage 126 may continuously collect data about a device content list 147 of the electronic device.
The App register 127 may include content about various applications. For example, the App register 127 may include data about applications included in the electronic device 100. For example, the App register 127 may include data about applications not included in the electronic device 100. The App register 127 may frequently update the data about the applications. The response generator 125 may generate a response related to the applications by referring to the data about the applications of the App register 127.
For example, the App register 127 may store data about at least one of the first application 141 and the second application 143. For example, the first application 141 may be an application installed in the electronic device 100. For example, the second application 143 may be an application that is not installed in the electronic device 100.
FIG. 9B is a view for describing an operation of each of the components of the electronic device 100, according to an exemplary embodiment.
Referring to FIG. 9B, the electronic device 100 may include the session manager 122, the profiler 123, the context analyzer 124, the content metadata storage 126, the App register 127, a response recommender 128, and a web search engine 129.
The response recommender 128 may include the natural language processing unit (NLU) 121 and the response generator 125.
The session manager 122 may transmit a message um[n] to the response recommender 128. The response recommender 128 may receive the message um[n] from the session manager 122, and the natural language processing unit 121 may generate a keyword kw[m] from the message um[n]. For example, the session manager 122 may transmit the message um[n] “where are you?” to the response recommender 128, and the natural language processing unit 121 may generate “location” or “information” as the keyword kw[m].
The response recommender 128 may receive user information usr.info from the profiler 123. For example, with respect to the message um[n] “where are you?”, the response recommender 128 may receive, as the user information usr.info, user information in an external service account and information about the user's home or company or about a place that the user frequently visits. When the response recommender 128 generates the keyword kw[m], the response recommender 128 may refer to the user information usr.info received from the profiler 123.
The response recommender 128 may receive context information cnt.info from the context analyzer 124. For example, the context information cnt.info about the message um[n] “where are you?” may include time information, global positioning system (GPS) information, weather information, analyzed user activity information, and recent logs.
For example, the user activity information may include information such as a fact that the user stayed in a restaurant for one hour, and a fact that the user continuously ran for one minute. For example, the recent logs may include networking information such as information of a base station accessed by the electronic device 100. The response recommender 128 may consider the context information cnt.info when generating a recommended response rcm.ans by using the keyword kw[m].
The response recommender 128 may receive application related information ap.info by transmitting the keyword kw[m] to the App register 127. For example, the application related information ap.info may include metadata of applications or information about a map-related application.
The response recommender 128 may receive content information cm.info corresponding to the keyword kw[m] from the content metadata storage 126. For example, the content information cm.info may include specific information of an image. For example, the specific image information may include a location (latitude and longitude) in which the image is generated or tag information.
The response recommender 128 may receive a search result value srch.rst by transmitting the keyword kw[m] to the web search engine 129. For example, the search result value srch.rst may include an image of Boston, an address of a company, and a map application.
The response recommender 128 may receive at least one of the application related information ap.info, the content information cm.info, and the search result value srch.rst, and may generate the recommended response rcm.ans corresponding to the message um[n] by referring to the user information usr.info and the context information cnt.info.
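Taken together, the flow of FIG. 9B might be sketched as below; the callables stand in for the App register 127, the content metadata storage 126, and the web search engine 129, and every name and value here is a hypothetical placeholder.

    def recommend_response(message, usr_info, cnt_info, app_register,
                           content_metadata, web_search):
        """Illustrative flow of the response recommender 128."""
        # 1. Derive a keyword kw[m] from the message um[n].
        keyword = "location" if "where" in message.lower() else "information"
        # 2. Gather candidates ap.info, cm.info, and srch.rst from each source.
        candidates = (app_register(keyword)
                      + content_metadata(keyword)
                      + web_search(keyword))
        # 3. As one example of weighting by the context information cnt.info,
        #    rank candidates near the user's current GPS position first.
        here = cnt_info.get("gps")
        candidates.sort(key=lambda c: c.get("gps") != here)
        return {"keyword": keyword, "recommended": candidates, "user": usr_info}

    rcm = recommend_response(
        "where are you?",
        usr_info={"home": "Seoul"},
        cnt_info={"gps": "Seoul"},
        app_register=lambda kw: [{"name": "map_app", "gps": None}],
        content_metadata=lambda kw: [{"name": "photo.jpg", "gps": "Seoul"}],
        web_search=lambda kw: [{"name": "Boston image", "gps": "Boston"}],
    )
    print(rcm["recommended"][0]["name"])  # photo.jpg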
2.3. Environment Settings
FIGS. 10A through 10D are views of an environment setting user interface which is provided by the electronic device 100 according to an exemplary embodiment.
A term “setting” may denote pre-determining an operation method of the electronic device 100 by a user's setting or a user's input to set a use environment of the electronic device 100.
Referring to FIG. 10A, the electronic device 100 may not provide the response related content, or may provide the response related content to the user visually or audibly, according to the settings. When the electronic device 100 provides the response related content according to the settings, the electronic device 100 may provide the response related content automatically, semi-automatically, or manually.
That the electronic device 100 provides the response related content automatically according to the settings may denote that, when the electronic device 100 determines that an exchanged message includes a question, the electronic device 100 provides the response related content with respect to the message without receiving an additional user input.
That the electronic device 100 provides the response related content semi-automatically according to the settings may denote that, when the electronic device 100 determines that an exchanged message includes a question, the electronic device 100 provides the response related content with respect to the message if the electronic device 100 receives a user's simple input (a touch or tap input) with respect to the message.
For example, when the electronic device 100 provides the response related content semi-automatically, the electronic device 100 may display the message corresponding to the question so as to be distinguished from other messages. For example, the electronic device 100 may display a background color of the message corresponding to the question differently from other messages. For example, the electronic device 100 may display the message corresponding to the question by using an underline and/or a conspicuous color so as to be distinguished from other messages.
The user of the electronic device 100 may receive the content by touching or tapping the message marked by using one of the above methods. The electronic device 100 may obtain the response related content by receiving the touch or tap input of the user. The electronic device 100 may obtain and store the content related to a predicted response before the electronic device 100 receives the touch or tap input of the user, and then may provide the response related content when receiving the touch or tap input of the user.
When the electronic device 100 is set to manually provide the response related content, the electronic device 100 provides the response related content with respect to the message only after the electronic device 100 determines an exchanged message as a question and confirms, via a user input, that the user wants to receive the response related content.
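The three modes might be dispatched as in the following sketch; the mode strings, the simple question test, and the helper functions are all placeholders assumed for illustration.

    def handle_incoming(message, mode, user_confirms):
        """Dispatch on the recommendation mode set in the settings."""
        if not message.strip().endswith("?"):    # stand-in question test
            return None
        if mode == "automatic":
            return obtain_content(message)        # no additional user input
        if mode == "semi-automatic":
            highlight(message)                    # mark the question message,
            return None                           # then wait for a touch or tap
        if mode == "manual" and user_confirms(message):
            return obtain_content(message)
        return None

    def obtain_content(message):
        return ["recommended content for: " + message]  # placeholder

    def highlight(message):
        print("highlighting:", message)                 # placeholder

    print(handle_incoming("where are you?", "automatic", lambda m: True))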
Referring to FIG. 10B, the electronic device 100 may provide the response related content by considering user-related conditions which are set in the settings.
For example, the electronic device 100 may provide the response related content by considering content related to a first user User1 set in the settings. Also, for example, the electronic device 100 may provide the response related content with respect to a message exchanged by a second user User2. Also, for example, the electronic device 100 may provide the response related content only with respect to a message input by a third user User3.
Referring to FIG. 10C, the electronic device 100 may provide the response related content by considering time related conditions which are set in the settings.
For example, when it is set in the settings to semi-automatically provide the response related content as illustrated in FIG. 10A, the electronic device 100 may restrict the time period in which the message corresponding to a question is displayed so as to be distinguished from other messages. For example, the electronic device 100 may restrict this time period to 10 seconds, 20 seconds, 30 seconds, or 1 minute. When the time period elapses, the electronic device 100 may no longer display the message corresponding to a question so as to be distinguished from other messages.
Referring to FIG. 10D, the electronic device 100 may provide the response related content by considering data conditions in the settings.
For example, the electronic device 100 may determine content of the question by referring to only data stored in the electronic device 100. For example, the electronic device 100 may determine the content of the question by referring to only data stored in the electronic device 100 and the server 300.
For example, the electronic device 100 may provide the response related content by referring to only the data stored in the electronic device 100. For example, the electronic device 100 may provide the response related content by referring to only the data stored in the electronic device 100 and the server 300.
In addition to the manners illustrated in FIGS. 10A through 10D, the electronic device 100 may operate in various manners, according to the settings. For example, the user of the electronic device 100 may set the settings such that the response related content is provided as a text, a sound, an image, etc. For example, the user of the electronic device 100 may set an interest area for which content needs to be provided. For example, the user of the electronic device 100 may set the interest area as items, such as sports, weather, politics, movies, economics, life, etc.
2.4. Service Provision Based on the Terminal
FIG. 11A illustrates a method of providing the response related content via the electronic device 100.
Referring to FIG. 11A, the electronic device 100 may receive a message from the device of another user 200 in operation S210. The electronic device 100 may continuously receive the message from the device of another user 200. The electronic device 100 may receive the message from the device of another user 200 via the server 300.
The electronic device 100 may determine whether the message received from the device of another user 200 includes a question in operation S220. The electronic device 100 may divide a sentence used in the message into grammatical units and extract the relationship among the grammatical units. The electronic device 100 may determine whether the received message is a question based on a result of this operation.
The electronic device 100 may extract a keyword from the message received from the device of another user 200 in operation S230. The electronic device 100 may extract the keyword from the received message by performing a natural language analysis. When the electronic device 100 extracts the keyword, the electronic device 100 may extract the keyword by referring to data that is input by the user or recorded data regarding the user.
The electronic device 100 may obtain the content which may be used in responding to the message based on the keyword in operation S250. The electronic device 100 may obtain the content by performing various searches based on the extracted keyword. The content that the electronic device 100 obtains may include a two-dimensional image, a three-dimensional image, a two-dimensional video, a three-dimensional video, text replies in various languages, content of various fields, and content regarding application services providing various services.
The electronic device 100 may provide at least one piece of obtained content in operation S270. For example, the electronic device 100 may provide the at least one piece of obtained content to the user via at least one of a sound, an image, and a text.
FIG. 11B illustrates a method of providing the response related content via the electronic device 100.
Referring to FIG. 11B, the electronic device 100 may receive a message from the device of another user 200 in operation S310.
The electronic device 100 may determine whether to receive the content which may be used in responding to the message received from the device of another user 200, in operation S320. For example, the electronic device 100 may determine whether to receive the content which may be used in responding to the received message, based on user configurations which are set via the settings.
The electronic device 100 may extract a keyword from the message received from the device of another user 200 in operation S330. The electronic device 100 may extract the keyword by performing a natural language analysis to determine a meaning of the message.
The electronic device 100 may obtain the content which may be used in responding to the message, based on the keyword, in operation S350. The electronic device 100 may obtain the content related to the keyword from an external search server. The electronic device 100 may obtain the content via an internet search service. The electronic device 100 may obtain the related content stored in the electronic device 100.
The electronic device 100 may provide a list of the obtained content in operation S370. The electronic device 100 may provide the content obtained based on the keyword to the user via various methods.
FIG. 11C illustrates a method of providing the response related content via the electronic device 100.
Referring to FIG. 11C, the electronic device 100 may receive a message from the device of another user 200 in operation S410. The electronic device 100 may extract one or more keywords from the message received from the device of another user 200 in operation S430.
The electronic device 100 may set a priority order among the one or more keywords based on internal data of the electronic device 100 or an input by the user, in operation S440. The electronic device 100 may set the priority order among the keywords, which may be referred to for obtaining the content. For example, when the electronic device 100 obtains the content based on a plurality of keywords, the electronic device 100 may first obtain the content based on the keyword which has a high priority order, and may later obtain the content based on the keyword which has a low priority order. For example, when the electronic device 100 obtains the content based on a plurality of keywords, the electronic device 100 may first display the content based on the keyword which has a high priority order, and may later display the content based on the keyword which has a low priority order.
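As an illustration of this priority handling, assuming per-keyword priority scores derived from internal data or a user input, content might be obtained and listed high-priority-first as below; all names are hypothetical.

    def contents_by_priority(keywords, priorities, fetch):
        """Obtain and order content following the keyword priority order."""
        ordered = sorted(keywords, key=lambda kw: priorities.get(kw, 0),
                         reverse=True)
        results = []
        for kw in ordered:          # high-priority keywords first
            results.extend(fetch(kw))
        return results

    items = contents_by_priority(
        ["restaurant", "location"],
        priorities={"location": 2, "restaurant": 1},
        fetch=lambda kw: [kw + "_content"],
    )
    print(items)  # ['location_content', 'restaurant_content']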
The electronic device 100 may obtain the content which may be used in responding to the message, based on the keyword, in operation S450, and may provide at least one piece of content via one or more methods, in operation S470.
FIG. 11D illustrates a method of providing the response related content via the electronic device 100.
Referring to FIG. 11D, the electronic device 100 may receive a message from the device of another user 200 in operation S510.
The electronic device 100 may extract a keyword from the message received from the device of another user 200 in operation S530.
The electronic device 100 may analyze relational data between a user of the device of another user 200 and a user of the electronic device 100, in operation S540. The relational data between the user of the other device 200 and the user of the electronic device 100 may refer to data stored in the electronic device 100 or the server 300.
The electronic device 100 may obtain the content which may be used in responding to the message based on the analyzed relational data between the users or the keyword, in operation S550. Also, the electronic device 100 may provide at least one piece of content via one or more methods in operation S570.
FIG. 12A is a flowchart for describing a method of providing the response related content via the electronic device 100.
The electronic device 100 may receive a first message from the device of another user 200 in operation S1005. The electronic device 100 may receive the first message from the device of another user 200 via the server 300.
The electronic device 100 may determine whether the first message received from the device of another user 200 includes a question in operation S1010. The electronic device 100 may determine whether the received first message includes a question by performing a natural language analysis.
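A minimal heuristic for operation S1010 is sketched below in Python; a production system would use full natural language analysis, so the punctuation-and-interrogative check here is an illustrative assumption only.

    INTERROGATIVES = ("who", "what", "when", "where", "why", "how", "is", "are", "do", "does")

    def includes_question(message):
        """Rough stand-in for the question detection of operation S1010."""
        text = message.strip().lower()
        if text.endswith("?"):
            return True
        words = text.split()
        return bool(words) and words[0] in INTERROGATIVES

    print(includes_question("Where are you?"))  # True
    print(includes_question("I am home."))      # False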
When the received first message includes a question, the electronic device 100 may extract a keyword from the message in operation S1020.
The electronic device 100 may request the server 300 to obtain the content based on the keyword in operation S1020. The server 300 may obtain the content which may be used in responding to the message based on the keyword in operation S1030.
The server 300 may provide the content obtained based on the keyword to the electronic device 100 in operation S1035. The electronic device 100 may provide the obtained content in operation S1040.
The electronic device 100 may transmit a second message to the device of another user 200 in operation S1045. For example, the electronic device 100 may transmit the second message to the device of another user 200 by including at least one piece of the obtained content in the second message.
FIG. 12B is a view of modules included in each of the electronic device 100 and the server 300.
FIG. 12B is an exemplary view for describing a method of obtaining the response related content via the electronic device 100. Specific components included in the electronic device 100 and specific components included in the server 300 may be adjusted in one or more ways.
Referring to FIG. 12B, the electronic device 100 may include the session manager 122, the profiler 123, the context analyzer 124, and the response recommender 128. The response recommender 128 may include the natural language processing unit (NLU) 121 and the response generator 125.
The server 300 may include a content metadata storage 326 and an App register 327.
The NLU 121 may analyze a meaning of the message received by the electronic device 100. The NLU 121 may analyze the message via a statistical analysis method and a semantic analysis method, and transmit the analyzed message to the context analyzer 124.
The session manager 122 may analyze the messages exchanged by the electronic device 100 in a group unit by checking and using a conversation partner, the date and time at which the conversation takes place, conversation content, and a conversation environment with respect to the exchanged messages. The session manager 122 may group one or more messages exchanged by the electronic device 100 and define the group as a session. For example, the session manager 122 may transmit a message um[n] "where are you going in this afternoon?" to the response recommender 128.
The response recommender 128 may receive the message um[n] from the session manager 122, and the NLU 121 may generate a keyword kw[m] from the message um[n]. For example, the session manager 122 may transmit the message "where are you going in this afternoon?" to the response recommender 128, and the NLU 121 may generate a schedule, an event, a location, a destination, or time as the keyword kw[m].
The profiler 123 may continuously collect, process, and store content regarding the user of the electronic device 100 and persons related to the user of the electronic device 100. When the keyword kw[m] is generated, the response recommender 128 may receive user information usr.info related to the keyword kw[m] from the profiler 123 and refer to the user information usr.info. For example, the user information usr.info with respect to the message um[n] "where are you going in this afternoon?" may include the user information in an external service account and information about the user's home or company or places frequently visited by the user.
The context analyzer 124 may determine a meaning of each of the messages exchanged by the electronic device 100, based on meanings of the messages exchanged by the electronic device 100. The context analyzer 124 may determine the meaning of each message in a unit of one or more messages, the unit being defined by the session manager 122. The response recommender 128 may receive the context information cnt.info from the context analyzer 124.
For example, the context information cnt.info regarding the message um[n] "where are you going in this afternoon?" may include time information, GPS information, analyzed user schedule information, and recent logs. For example, the response recommender 128 may consider the context information cnt.info when the response recommender 128 generates a recommended response rcm.ans by using the keyword kw[m].
The electronic device 100 may receive application related information ap.info by transmitting the keyword kw[m] to the App register 327. For example, the application related information ap.info with respect to the message um[n] "where are you going in this afternoon?" may include scheduler application related information, social network service application related information, and chatting application related information. For example, a first application may be a weather related application installed in the electronic device 100. For example, a second application may be a news related application which is not installed in the electronic device 100.
The electronic device 100 may receive content information cm.info by transmitting the keyword kw[m] to the content metadata storage 326. For example, the response recommender 128 may receive data about the weather related application and the news related application from the content metadata storage 326. For example, the content information cm.info regarding the message um[n] "where are you going in this afternoon?" may include image specific information or schedule related information. For example, the image specific information may include a location (latitude, longitude) at which an image is generated.
The response recommender 128 of the electronic device 100 may receive at least one of the application related information ap.info and the content information cm.info from the server 300, and may refer to the user information usr.info and the context information cnt.info, to generate the recommended response rcm.ans corresponding to the message um[n].
The response generator 125 may generate the response related content with respect to the message. The response generator 125 may generate the content which may be used in responding to the message. The response generator 125 may provide the generated content to the user via an interface screen.
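To make the data flow concrete, the following minimal Python sketch shows one way the response recommender 128 could combine the keyword kw[m] with the user information usr.info, context information cnt.info, application related information ap.info, and content information cm.info to assemble a recommended response rcm.ans. The field names and the combination logic are illustrative assumptions, not the claimed implementation.

    from dataclasses import dataclass, field

    @dataclass
    class RecommendationInputs:
        keywords: list                                     # kw[m] from the NLU 121
        user_info: dict = field(default_factory=dict)      # usr.info from the profiler 123
        context_info: dict = field(default_factory=dict)   # cnt.info from the context analyzer 124
        app_info: list = field(default_factory=list)       # ap.info from the App register 327
        content_info: list = field(default_factory=list)   # cm.info from the content metadata storage 326

    def recommend_response(inputs):
        """Assemble a recommended response rcm.ans from the gathered signals."""
        return {
            "direct_answers": [inputs.context_info.get("schedule", "No plans yet.")],
            "apps": [a for a in inputs.app_info if a.get("category") in inputs.keywords],
            "content": inputs.content_info,
        }

    rcm_ans = recommend_response(RecommendationInputs(
        keywords=["schedule", "location"],
        context_info={"schedule": "Meeting in the afternoon"},
        app_info=[{"name": "MapApp", "category": "location"}],
    ))
    print(rcm_ans["direct_answers"][0])  # Meeting in the afternoon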
2.5. Service Provision Based on the Server
FIG. 13A is a flowchart for describing a method of providing the response related content via the electronic device 100.
The server 300 may receive a first message from the device of another user 200 in operation S2010. The server 300 may transmit the first message received from the device of another user 200 to the electronic device 100 in operation S2020.
The server 300 may determine whether the first message received from the device of another user 200 includes a question in operation S2030.
When the received first message includes a question, the server 300 may extract a keyword from the message in operation S2040.
The server 300 may obtain the content which may be used in responding to the message, based on the keyword, in operation S2050.
The server 300 may provide the content obtained based on the keyword to the electronic device 100. The electronic device 100 may provide the obtained content in operation S2060.
The electronic device 100 may transmit a second message to the server 300 in operation S2070. For example, the electronic device 100 may transmit the second message to the server 300 by including at least one piece of the obtained content in the second message. The server 300 may transmit the second message to the device of another user 200 in operation S2080.
In FIG. 13A, operations S2030, S2040, and S2050 are illustrated as being performed after the first message is transmitted to the electronic device 100 through operation S2020. However, the present embodiment is not limited thereto, and operations S2030, S2040, and S2050 may be performed before the server 300 transmits the first message to the electronic device 100. For example, the first message may be transmitted together with the content obtained in operation S2050.
Alternatively, the device of another user 200 may directly transmit the first message to the electronic device 100 before, at, or after the time the device of another user 200 transmits the first message to the server 300. In that case, the server 300 may omit operation S2020.
According to another exemplary embodiment, the server 300 may provide online social networking services through a website or a mobile application. In that case, the server 300 may display the first message received from the device of another user 200 on the website or the mobile application rather than directly forwarding the first message to the electronic device 100 (operation S2020). The first message may be one of threaded comments posted by the device of another user 200 and/or other users. When the user of the electronic device 100 activates a comment input field by placing a cursor in the input field or touching the input field with his/her finger, the server 300 may perform operations S2030, S2040, and S2050 with one of the comments posted on the website or the mobile application. For example, the server 300 may perform operations S2030, S2040, and S2050 on comments which are posted most recently or selected by a user input.
FIG. 13B is a view of modules included in each of the electronic device 100 and the server 300.
FIG. 13B is an exemplary view for describing a method of obtaining the response related content based on the server 300. Specific components included in the electronic device 100 and specific components included in the server 300 may be adjusted via one or more methods.
Referring to FIG. 13B, the electronic device 100 may include the profiler 123 and the context analyzer 124.
The profiler 123 may continuously retain, process, and store content written by the user of the electronic device 100 or other users. The profiler 123 may transmit user information usr.info to the server 300 upon receiving a request from the server 300.
The context analyzer 124 may determine a meaning of each of the messages exchanged by the electronic device 100, based on meanings of the messages exchanged by the electronic device 100. The context analyzer 124 may transmit context information cnt.info to the server 300 upon receiving a request from the server 300.
The server 300 may include a session manager 322, the content metadata storage 326, the App register 327, and a response recommender 328. The response recommender 328 may include the natural language processing unit (NLU) 321 and a response generator 325.
The session manager 322 may determine a conversation partner, the date and time at which the conversation takes place, conversation content, and conversation environments with respect to the messages exchanged by the electronic device 100, and may analyze the messages exchanged by the electronic device 100 via the server 300 in a group unit. The session manager 322 may gather one or more messages exchanged by the electronic device 100 and define the gathered messages as a session. For example, the session manager 322 may transmit the received message um[n] to the response recommender 328.
The response recommender 328 may receive the message um[n] from the session manager 322, and the NLU 321 may generate a keyword kw[m] from the message um[n]. For example, the session manager 322 may transmit a message "Is there a hot issue in Boston?" to the response recommender 328, and the NLU 321 may generate news, issues, topic, and location as the keyword kw[m].
The response recommender 328 may receive user information usr.info from the profiler 123. For example, when the keyword kw[m] is generated, the response recommender 328 may refer to the user information usr.info received from the profiler 123. For example, the user information usr.info may include an age and a preference of the user. For example, the user information usr.info may include user information in an external service account.
The response recommender 328 may receive context information cnt.info from the context analyzer 124. For example, the response recommender 328 may consider the context information cnt.info when the response recommender 328 generates a recommended response rcm.ans by using the keyword kw[m]. The context information cnt.info may include time, an analyzed user schedule, and recent logs.
The response recommender 328 may receive at least one of application related information ap.info and content information cm.info, and may refer to the user information usr.info and the context information cnt.info received from the electronic device 100, to generate the recommended response rcm.ans corresponding to the message um[n]. For example, the application related information ap.info may include information of applications providing news, magazines, and articles. For example, the content information cm.info may include articles and images of Boston.
The response generator 325 may generate the response related content with respect to the message. The response generator 325 may generate the content which may be used in responding to the message. The response generator 325 may transmit the generated content to the electronic device 100.
2.6. Service Provision Based on the Terminal and the Server
FIG. 14A is a flowchart for describing a method of providing the response related content via the electronic device 100.
The server 300 may receive a first message from the device of another user 200 in operation S3010. The server 300 may transmit the first message received from the device of another user 200 to the electronic device 100 in operation S3020.
The server 300 may determine whether the first message received from the device of another user 200 includes a question in operation S3030. The server 300 may determine whether the first message includes a question either before or after transmitting the first message to the electronic device 100.
When the received first message includes a question, the server 300 may extract a keyword from the message in operation S3040. The server 300 may transmit the keyword to the electronic device 100 in operation S3045.
The electronic device 100 may obtain the content which may be used in responding to the message based on the keyword, in operation S3050. The electronic device 100 may display the content obtained based on the keyword.
The electronic device 100 may transmit a second message to the server 300 in operation S3060. For example, the electronic device 100 may transmit the second message to the server 300 by including at least one piece of the obtained content in the second message. The server 300 may transmit the second message to the device of another user 200 in operation S3070.
The server 300 may perform operations S3030 and S3040 before transmitting the first message, and may transmit the first message and the keyword to the electronic device 100 at substantially the same time. Alternatively, the device of another user 200 may directly transmit the first message to the electronic device 100 before, at, or after the time the device of another user 200 transmits the first message to the server 300. In that case, the server 300 may omit operation S3020.
FIG. 14B is a view of modules included in each of the electronic device 100 and the server 300.
FIG. 14B is an exemplary view for describing a method of obtaining the response related content based on the server 300. Specific components included in the electronic device 100 and specific components included in the server 300 may be adjusted in one or more ways.
Referring to FIG. 14B, the electronic device 100 may include the first profiler 123, the content metadata storage 126, and the App register 127.
The first profiler 123 may collect and store data regarding a user of the electronic device 100 or another user. The data regarding the user may include a name, a job, a telephone number, an interest field, a friend relationship, and a movement record of the user. The first profiler 123 may keep, process, and store content written by the user of the electronic device 100. The content written by the user of the electronic device 100 may include a message, a memo, a reply, and a comment written by the user.
The content metadata storage 126 may include various types of data with respect to the content. For example, the content metadata storage 126 may include data regarding types of an application. Also, for example, the content metadata storage 126 may include metadata, such as a date of image generation and a place of image generation.
Also, for example, the content metadata storage 126 may continuously collect and store data with respect to various types of web content on the internet and data with respect to various types of content of the electronic device 100.
The App register 127 may include data regarding various applications. For example, the App register 127 may store data regarding at least one of a first application installed in the electronic device 100 and a second application not installed in the electronic device 100.
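As an illustration of the kind of data the App register 127 might hold, the following minimal Python sketch records a first (installed) application and a second (not installed) application and looks them up by keyword. The schema and field names are assumptions for illustration only.

    APP_REGISTER = [
        {"name": "WeatherNow", "category": "weather", "installed": True},   # first application
        {"name": "DailyNews",  "category": "news",    "installed": False},  # second application
    ]

    def apps_for_keyword(keyword):
        """Return registered applications whose category matches the keyword."""
        return [app for app in APP_REGISTER if app["category"] == keyword]

    print(apps_for_keyword("news"))  # [{'name': 'DailyNews', 'category': 'news', 'installed': False}]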
The server 300 may include the NLU 321, the session manager 322, a second profiler 323, the context analyzer 324, and the response generator 325.
The session manager 322 may analyze the messages exchanged by the server 300 in a group unit by checking and using conversation participants, the date and time of the conversation, conversation content, and conversation environments of the messages exchanged by the server 300. The session manager 322 may gather one or more messages exchanged by the server 300 and define the gathered messages as a session.
The NLU 321 may analyze a meaning of a first message received by the server 300. The NLU 321 may analyze the first message by a statistical analysis method and a semantic analysis method and transmit the analyzed meaning to the context analyzer 324.
The context analyzer 324 may determine a meaning of each of the messages exchanged by the electronic device 100, based on meanings of the messages exchanged by the electronic device 100. The context analyzer 324 may determine the meaning of each message in a unit of one or more messages, the unit being defined by the session manager 322.
The response recommender 328 may receive at least one of user information usr.info[p] and context information cnt.info, and may refer to the user information usr.info[p], application related information ap.info, and content information cm.info received from the electronic device 100, to generate a recommended response rcm.ans corresponding to the message um[n]. The server 300 may transmit the recommended response rcm.ans to the electronic device 100.
2.7. Analysis of the Relationship of Conversation Participants
FIG. 15 is a view for describing a method of utilizing information stored in a database 370 of the server 300 to determine a relationship between users.
Referring to FIG. 15, the server 300 may collect, manage, and analyze information of the users via the database 370 and a database manager 360.
The database 370 may include data with respect to various users. For example, the database 370 may store information regarding a user related to the user of the electronic device 100. For example, the database 370 may include data regarding a level of intimacy between a user of the device of another user 200 and the user of the electronic device 100.
The database manager 360 may manage the database 370. The database manager 360 may manage various types of data recorded in the database 370, and may provide the data managed by the database manager 360 to the electronic device 100 in response to a request of the electronic device 100.
The database manager 360 may manage records for each user written in the database 370. Also, the database manager 360 may manage records for each group written in the database 370. The database manager 360 may transmit the data requested by the electronic device 100 to the electronic device 100.
The electronic device 100 may edit and manage information stored in the server 300 via a relationship finder 140, an intimacy analyzer 150, and a friend list database 190.
The relationship finder 140 may determine a relationship between the user of the device of another user 200 and the user of the electronic device 100. The relationship finder 140 may receive the information about the user of the device of another user 200 and the user of the electronic device 100 by requesting the information from the database manager 360 included in the server 300, and may determine the relationship between the users. The relationship finder 140 may determine the relationship between the users by analyzing previous call records and conversation content between the users.
The intimacy analyzer 150 may analyze the level of intimacy between the user of the device of another user 200 and the user of the electronic device 100. For example, the intimacy analyzer 150 may calculate the level of intimacy between the user of the device of another user 200 and the user of the electronic device 100 as quantified data (for example, data such as 1, 2, 3, and 10, or data such as a first level, a second level, and a third level). For example, the intimacy analyzer 150 may determine which of one or more predetermined categories the intimacy level between the user of the device of another user 200 and the user of the electronic device 100 belongs to. For example, the intimacy analyzer 150 may select the intimacy level from among categories such as "family," "friend," "colleague," or "school mate."
The friend list database 190 may store data analyzed by the relationship finder 140 and the intimacy analyzer 150, based on a relationship that a user has with respect to the user of the electronic device 100. Here, information included in the friend list database 190 may correspond to not only information about friends of the user of the electronic device 100 in a social context, but also information about all persons who may be accessed by the electronic device 100, such as family members, alumni, colleagues, and relatives.
For example, the friend list database 190 may store information that the user of the device of another user 200 and the user of the electronic device 100 are friends in a social network service (SNS). For example, the friend list database 190 may store information that the user of the device of another user 200 and the user of the electronic device 100 are family members or relatives. For example, the friend list database 190 may store information that the user of the device of another user 200 and the user of the electronic device 100 are an employee and an employer of a company. For example, the friend list database 190 may store information that the user of the device of another user 200 and the user of the electronic device 100 are school alumni.
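The following minimal Python sketch shows one plausible layout for the friend list database 190, storing the relationship category and quantified intimacy level produced by the relationship finder 140 and the intimacy analyzer 150. The dictionary-based storage is an assumption for illustration.

    FRIEND_LIST_DB = {}

    def store_relationship(user_id, category, intimacy_level):
        """Record a relationship category (e.g., 'family', 'friend', 'colleague',
        'school mate') together with a quantified intimacy level (e.g., 1 to 10)."""
        FRIEND_LIST_DB[user_id] = {"category": category, "intimacy": intimacy_level}

    store_relationship("hyunjin", category="friend", intimacy_level=8)
    store_relationship("john", category="colleague", intimacy_level=3)
    print(FRIEND_LIST_DB["john"])  # {'category': 'colleague', 'intimacy': 3}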
FIG. 16 is a view for describing a method of utilizing information stored in the database 170 of the electronic device 100 to determine a relationship between users.
Referring to FIG. 16, the electronic device 100 may collect, manage, and analyze information about the users via the database 170, a database analyzer 160, the relationship finder 140, the intimacy analyzer 150, and the friend list database 190.
The database 170 may include data corresponding to phone records between the user of the electronic device 100 and various users. For example, the database 170 may store information about phone records between the user of the electronic device 100 and other users in real time. For example, the phone records may include calling time, calling places, calling start time, calling end time, and calling methods (voice or message).
Also, the database 170 may include data written in an address book or a contact list of the user of the electronic device 100. For example, the database 170 may include information such as a company name, a job, and a family relation of a friend.
The database analyzer 160 may analyze and manage data, such as the phone records or the address list, included in the database 170. The database analyzer 160 may analyze various types of data written in the database 170, and may provide a result of the analysis of the data in response to a request of the relationship finder 140 or the intimacy analyzer 150.
The database analyzer 160 may manage the records written in the database 170 for each individual. Also, the database analyzer 160 may manage the records written in the database 170 for each group. The database analyzer 160 may transmit the data requested by the relationship finder 140 or the intimacy analyzer 150 to the electronic device 100.
The relationship finder 140 may determine a relationship between the user of the device of another user 200 and the user of the electronic device 100. The relationship finder 140 may determine the relationship between the user of the device of another user 200 and the user of the electronic device 100 by requesting the database analyzer 160. The relationship finder 140 may determine the relationship between the user of the device of another user 200 and the user of the electronic device 100 by analyzing previous call records between the user of the device of another user 200 and the user of the electronic device 100.
The intimacy analyzer 150 may analyze a level of intimacy between the user of the device of another user 200 and the user of the electronic device 100. For example, the intimacy analyzer 150 may calculate the level of intimacy between the user of the device of another user 200 and the user of the electronic device 100 as quantified data (for example, data such as 1, 2, 3, and 10, or data such as a first level, a second level, and a third level). For example, the intimacy analyzer 150 may select the level of intimacy from among categories such as "family," "friend," "colleague," or "school mate."
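One way the intimacy analyzer 150 could quantify intimacy from the phone records in the database 170 is sketched below in Python. The scoring rule (call count plus total duration) and the clamping to levels 1 through 10 are illustrative assumptions, not the claimed method.

    def intimacy_level(call_records):
        """Map call history with one user to a quantified level from 1 to 10."""
        total_calls = len(call_records)
        total_minutes = sum(rec["duration_min"] for rec in call_records)
        score = total_calls * 2 + total_minutes / 30
        return max(1, min(10, round(score)))

    records = [{"duration_min": 12}, {"duration_min": 45}, {"duration_min": 3}]
    print(intimacy_level(records))  # 8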
The friend list database 190 may store the data analyzed by the relationship finder 140 and the intimacy analyzer 150, based on a relationship that a user has with respect to the user of the electronic device 100. The friend list database 190 of FIG. 16 is similar to the friend list database 190 of FIG. 15.
FIG. 17 is a view for describing a method of utilizing information stored in the database 170 of the electronic device 100 and information stored in a database 370 of the server 300, to determine a relationship between a user of the electronic device 100 and a user of the device of another user 200.
Referring to FIG. 17, the electronic device 100 may collect, manage, and analyze information related to the users via the database 170, the database analyzer 160, the relationship finder 140, the intimacy analyzer 150, and the friend list database 190. The database 170 and the database analyzer 160 of FIG. 17 are similar to the database 170 and the database analyzer 160 of FIG. 16. Hereinafter, repeated descriptions will be omitted.
The server 300 may include the database 370 and the database manager 360. The database 370 and the database manager 360 of FIG. 17 are similar to the database 370 and the database manager 360 of FIG. 15. Hereinafter, repeated descriptions will be omitted.
The relationship finder 140 may determine the relationship between the user of the device of another user 200 and the user of the electronic device 100. The relationship finder 140 may determine the relationship by requesting at least one of the database analyzer 160 and the database manager 360. The relationship finder 140 may determine the relationship by analyzing previous phone records between the user of the device of another user 200 and the user of the electronic device 100. The relationship finder 140 may also determine the relationship by analyzing conversation content between the user of the device of another user 200 and the user of the electronic device 100.
The intimacy analyzer 150 may analyze a level of intimacy between the user of the device of another user 200 and the user of the electronic device 100. The intimacy analyzer 150 may analyze the level of intimacy by requesting at least one of the database analyzer 160 and the database manager 360.
For example, the intimacy analyzer 150 may calculate the level of intimacy between the user of the device of another user 200 and the user of the electronic device 100 as quantified data (for example, data such as 1, 2, 3, and 10, or data such as a first level, a second level, and a third level). For example, the intimacy analyzer 150 may determine which of one or more predetermined categories the level of intimacy belongs to. For example, the intimacy analyzer 150 may select the level of intimacy from among categories such as "family," "friend," "colleague," or "school mate."
The friend list database 190 may store the data analyzed by the relationship finder 140 and the intimacy analyzer 150, based on a relationship that a user has with respect to the user of the electronic device 100.
Also, according to another exemplary embodiment, at least one of the relationship finder 140, the intimacy analyzer 150, and the friend list database 190 may be included in the server 300.
3. Scenario 2
3.1. Identification with Respect to Content Obtainment
FIGS. 18, 19A through 19D, 20A through 20D, 21, 22A, 22B, 23A through 23C, and 24A through 24E are views for describing a method of providing content according to an exemplary embodiment. These figures illustrate a user interface according to steps of a scenario in which the electronic device 100 provides a text chatting service, and directly provides to a user an image and a recommended application as response related content.
Referring to FIG. 18, Chris, the user of the electronic device 100, is performing text chatting with Hyunjin, the user of the device of another user 200.
The electronic device 100 displays the name of the user of the electronic device 100 via the user name box 40. According to another exemplary embodiment, the electronic device 100 may display the name of the user of the device of another user 200 via the user name box 40.
The electronic device 100 receives the message 50 "Hi" from the device of another user 200 at 8:26 on Aug. 6, 2013. The electronic device 100 transmits the message 51 to the device of another user 200 and receives the message 52 from the device of another user 200. The user of the electronic device 100 receives the message "where are you?" from Hyunjin.
The electronic device 100 may determine whether each of the messages 50, 51, 52, and 53 includes a question. For example, the electronic device 100 may determine that the messages 50, 51, and 52 do not include questions, and that the message 53 includes a question.
For example, the electronic device 100 may mark the message which is determined as a question such that it is distinguished from other messages, as illustrated in FIGS. 19A through 19C. For example, the electronic device 100 may mark the message corresponding to the question by using an underline, as illustrated in FIG. 19A. For example, the electronic device 100 may mark the message corresponding to the question by using a shadow for the text, as illustrated in FIG. 19B. For example, the electronic device 100 may mark the message corresponding to the question by using a shadow for a speech bubble, as illustrated in FIG. 19C.
When the electronic device 100 receives a user input corresponding to the portion marked so as to be distinguished from other messages, the electronic device 100 may display content which may be used in responding to the message.
For example, when the message 53 is determined as the question, the electronic device 100 may receive a user input indicating whether a recommended response is required, via the pop-up window 60, as illustrated in FIG. 19D.
3.2. Proposal of Content
Referring to FIGS. 20A through 20D, the electronic device 100 may display the content which may be used in responding to the message 53, in response to the user input.
Referring to FIG. 20A, the electronic device 100 may recommend an application related to the message corresponding to the question when the electronic device 100 receives a user input (for example, a touch) corresponding to the mark illustrated in FIG. 19C. For example, the electronic device 100 may recommend a map application or a navigation application when the message corresponding to the question is related to a location.
Referring to FIG. 20B, the electronic device 100 may recommend an application related to the message corresponding to the question when the electronic device 100 receives a user input (for example, a user input touching a YES button) corresponding to the mark illustrated in FIG. 19D. For example, when the message corresponding to the question is related to a location, the electronic device 100 may recommend a map application or a navigation application.
Referring to FIG. 20C, the electronic device 100 may recommend at least one of an application and an image related to the message corresponding to the question when the electronic device 100 receives a user input (for example, a user input touching a YES button) corresponding to the mark illustrated in FIG. 19D.
The electronic device 100 may recommend at least one of the application and the image which may be used in responding to the message, by considering the relationship between the user of the electronic device 100 and the user of the device of another user 200, according to the exemplary method described with reference to FIGS. 15 through 17.
For example, the electronic device 100 may recommend at least one of the application and the image, excluding a personal picture of the user of the electronic device 100, when the electronic device 100 determines that the user of the electronic device 100 and the user of the device of another user 200 have an employer-employee relationship. For example, if the user of the electronic device 100 is currently on vacation in London, the electronic device 100 may not recommend a picture taken during the vacation as the response, by considering that the relationship between the conversation participants is business-related. The electronic device 100 may classify the vacation picture as personal data and store the classification information in connection with the vacation picture. The electronic device 100 may not suggest any data classified as personal data when the level of intimacy (e.g., level 2) of the user of another device 200 is lower than a predetermined level (e.g., level 7).
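The personal-data filtering just described can be sketched as follows in Python: content classified as personal is withheld when the conversation partner's intimacy level falls below a threshold. The threshold value and the content schema are illustrative assumptions.

    PERSONAL_THRESHOLD = 7  # e.g., suppress personal data below level 7

    def filter_recommendations(candidates, partner_intimacy):
        """Drop content tagged as personal unless the partner is intimate enough."""
        return [
            c for c in candidates
            if not c.get("personal") or partner_intimacy >= PERSONAL_THRESHOLD
        ]

    candidates = [
        {"name": "vacation_london.jpg", "personal": True},
        {"name": "map_app", "personal": False},
    ]
    print(filter_recommendations(candidates, partner_intimacy=2))
    # [{'name': 'map_app', 'personal': False}]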
Referring to FIG. 20D, the electronic device 100 may recommend at least one of an application, a direct response, and an image, which are related to the message corresponding to the question, when the electronic device 100 receives a user input (for example, a user input touching a YES button) corresponding to the mark illustrated in FIG. 19D.
The electronic device 100 may recommend at least one of the application, the direct response, and the image which may be used in responding to the message, by considering the relationship between the user of the electronic device 100 and the user of the device of another user 200, according to the exemplary method described with reference to FIGS. 15 through 17. In this specification, the direct response may denote a sentence, a phrase, a word, etc. which may serve as a response with respect to the message, without an additional search.
For example, when the electronic device 100 determines that the user of the electronic device 100 and the user of the device of another user 200 have a relationship with a high level of intimacy, such as family members or friends, the electronic device 100 may provide the content which may be used in responding to the message, the content including a personal picture of the user of the electronic device 100. For example, if the user of the electronic device 100 is currently on vacation in London, the electronic device 100 may recommend a picture taken during the vacation as the response, by considering that the conversation participants have a personal relationship.
3.3. Specific Response Situation
FIGS. 21, 22A, 22B, 23A through 23C, and 24A through 24E are views illustrating a process in which a user selects response related content proposed by the electronic device 100 and provides the response related content to the device of another user 200.
Referring to FIG. 21, the user of the electronic device 100 may select a map application from among the recommended response related content illustrated in FIG. 20D.
Referring to FIG. 22A, the electronic device 100 may request the user to identify whether to edit the content, via a pop-up window 70. For example, when the user wants to use the content provided by the application without additionally editing it, the user may select a response indicating no edit of the content.
Referring to FIG. 22B, when the electronic device 100 identifies that the user does not want to edit the selected content, the electronic device 100 may provide a screen indicating a current location of the user in the map application selected in FIG. 21 to the device of another user 200 via a message, without an additional edit of the screen.
Referring to FIG. 23A, the electronic device 100 may request the user to identify whether to edit the content, via the pop-up window 70. For example, when the user wants to make a response by editing the content provided by the application, the user may select a response indicating an edit of the content.
Referring to FIG. 23B, when the electronic device 100 identifies that the user wants to edit the selected content, the electronic device 100 may search for the current location of the user in the map application selected in FIG. 21 so that the user may edit a screen indicating the current location of the user, and may display a screen corresponding to a search result.
Referring to FIG. 23C, the electronic device 100 may transmit a screen, which is the result of the user editing the search result provided by the map application, to the device of another user 200 via a message.
3.4. Feedback Situation
FIGS. 24A through 24E are views of a user interface via which a user feedback is received with respect to the provision of content which may be used in responses, according to an exemplary embodiment.
Referring to FIG. 24A, the electronic device 100 may ask the user whether the user is satisfied with the providing of the content which may be used in responses, via a pop-up window 80.
Referring to FIGS. 24B through 24D, when the user has answered via the pop-up window 80 that the user was not satisfied with the providing of the content, the electronic device 100 may provide pop-up windows 81, 82, 83, 84, and 85 via which the user may select categories which the user wants to add or exclude for the later providing of the content which may be used in responses.
Referring to FIG. 24E, when the user has answered via the pop-up window 80 that the user was not satisfied with the providing of the content, the electronic device 100 may provide pop-up windows 86 and 87 via which the user may select an environment setting which the user wants to adjust with respect to the later providing of the content which may be used in responses.
According to another exemplary embodiment, the electronic device 100 may determine the user's category preference without using the survey pop-up windows 81, 82, 83, 84, and 85. To this end, the electronic device 100 may automatically store category information of content selected by the user every time a selection is made, and count the number of times that each category is selected. Recommended responses may be displayed in the order of category preference.
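A minimal Python sketch of this survey-free preference learning follows: the category of every piece of content the user selects is counted, and recommendations are then ordered by those counts. The Counter-based store is an assumption for illustration.

    from collections import Counter

    category_counts = Counter()

    def record_selection(category):
        """Called each time the user selects recommended content."""
        category_counts[category] += 1

    def order_by_preference(recommendations):
        """Display recommendations in the order of category preference."""
        return sorted(recommendations,
                      key=lambda r: category_counts[r["category"]],
                      reverse=True)

    record_selection("image")
    record_selection("image")
    record_selection("app")
    recs = [{"name": "MapApp", "category": "app"},
            {"name": "photo.jpg", "category": "image"}]
    print([r["name"] for r in order_by_preference(recs)])  # ['photo.jpg', 'MapApp']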
3.5. Layout of the Proposed Response
FIGS. 25A through 25E are views of a content providing screen when content which may be used in responses is provided, according to an exemplary embodiment.
Referring to FIG. 25A, the electronic device 100 may provide the content by a screen division method. The electronic device 100 may display a recommended content list 90, a recommended content list 91, and a recommended application list 92. For example, when a user touches the recommended content list 90 in a first scene Scene #1, the screen is transformed into a second scene Scene #2 to transmit the recommended content list 90 to a device of another user. For example, the electronic device 100 may transmit the recommended content list 90 to the device of another user via a message.
Referring to FIG. 25B, the electronic device 100 may provide the content by a screen transformation method. The electronic device 100 may display summarized response related content 92 in a third scene Scene #3, and may display specific response related content 93, 94, and 95 in a fourth scene Scene #4. The electronic device 100 may display the fourth scene Scene #4 when the user touches or taps the third scene Scene #3.
Referring to FIG. 25C, the electronic device 100 may display an image, an application, or a possible direct response. The electronic device 100 may display the content which may be used in responses via one or more methods and arrangements.
Referring to FIG. 25D, the electronic device 100 may provide the content which may be used in responses via a voice of an avatar.
Referring to FIG. 25E, when the electronic device 100 is a smart watch, the electronic device 100 may provide the recommended content via a summary screen 97. When a user touches or taps a move button 98, the electronic device 100 may display another summary screen.
FIG. 26 is a view of the electronic device 100 providing an interface via which a content providing layout is adjusted.
Referring to FIG. 26, the electronic device 100 may receive a user feedback regarding whether to change a layout of the content provision, via a pop-up window 99, and may select various content providing layouts according to a selection of the user.
4. Scenario 3
4.1. Identification with Respect to Content Obtainment
FIGS. 27 through 37 are views for describing a method of providing content according to an exemplary embodiment. FIGS. 27 through 37 illustrate a user interface according to operations of a scenario in which the electronic device 100 provides a voice chatting service, and provides an image, a recommended application, and a direct response to a user as the response related content.
Referring to FIG. 27, Chris, the user of the electronic device 100, is performing voice chatting with Hyunjin, the user of the device of another user 200. The electronic device 100 and the device of another user 200 may divide the conversation into message units.
Referring to FIG. 28, the electronic device 100 may receive each of the voice messages and perform a speech to text (STT) transformation with respect to each message in the illustrated order.
The electronic device 100 may identify the language of each voice message in operation S4010. For example, the electronic device 100 may identify whether each voice message is formed in the English language or the Korean language.
The electronic device 100 may transform each voice message into a text of the identified language in operation S4030. For example, when the electronic device 100 determines that a first voice message is formed in the English language, the electronic device 100 may transform the first voice message into an English text.
The electronic device 100 may display the transformed text in operation S4050. For example, the electronic device 100 may transform the first voice message into the English text and may display the English text via a display unit (or an output interface).
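Operations S4010 through S4050 can be sketched as a small pipeline in Python. The helper functions stand in for real language-identification and STT engines; their names and placeholder return values are assumptions for illustration.

    def identify_language(voice_message):
        """Operation S4010: return a language code such as 'en' or 'ko'."""
        return "en"  # placeholder for a real language-identification model

    def speech_to_text(voice_message, language):
        """Operation S4030: transform the voice message into text of that language."""
        return "Where is he?"  # placeholder for a real STT engine

    def display_text(text):
        """Operation S4050: show the transformed text on the display unit."""
        print(text)

    msg = b"<audio bytes>"
    display_text(speech_to_text(msg, identify_language(msg)))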
Referring to FIG. 29, the electronic device of Chris receives the message 55 "Where is he?" from the electronic device of Hyunjin. The electronic device 100 may determine whether each of the messages is a question. For example, the electronic device 100 may determine whether the message 55 is a question. The electronic device 100 may determine whom "he" in the message 55 "Where is he?" refers to, by performing a natural language analysis.
4.2. User Input Reception
Referring to FIG. 30, when the electronic device 100 determines that the message includes a question, the electronic device 100 may receive a user input indicating whether the user needs a recommended response.
For example, when the electronic device 100 determines that the message 55 "Where is he?" is a question, the electronic device 100 may check whether the user needs a recommended response, via the pop-up window 63, as illustrated in FIG. 30. When the user touches a "Yes" button indicating that the user wants to receive content which may be used in responses, in correspondence to the pop-up window 63, the electronic device 100 may display various types of obtained content. Alternatively, the electronic device 100 may omit displaying the pop-up window 63 according to a user setting, and may provide a recommended response once the message 55 is determined as an inquiry message.
4.3. Specific Proposal Situations
Referring to FIG. 31, the electronic device 100 may provide images 74_a, 74_b, and 74_c, applications 75_a, 75_b, and 75_c, and a possible direct answer 76.
Referring to FIG. 32, the user may select the image 74_b from among the content which may be used in responses. The electronic device 100 may receive a user input. For example, the user input may be a touch input selecting the image 74_b.
Referring to FIG. 33, the electronic device 100 may identify whether to edit the selected content, via the pop-up window 64. The user may touch a "No" button indicating that the user wants to use the selected content as the response without editing it, via the pop-up window 64.
Referring to FIG. 34, when the user confirms via the pop-up window 64 that the user wants to use the selected content without editing it, the electronic device 100 may transmit the image 74_b to the conversation partner via the message 56.
Referring to FIG. 35, the electronic device 100 may identify whether to edit the selected content via the pop-up window 64. The user may touch a "Yes" button indicating that the user wants to use the selected content as the response after editing it, via the pop-up window 64.
Referring to FIG. 36, when the user touches the "Yes" button indicating that the user wants to use the selected content as the response after editing it, via the pop-up window 64, the electronic device 100 may provide the user with an image editing environment 65. The user may edit the image 74_b via the image editing environment 65.
Referring to FIG. 37, when the user finishes editing the image 74_b, the electronic device 100 may provide the image 74_b to the user of the electronic device, the user of the device of another user, or a third user, via a mailing service or a social network service, according to a selection of the user of the electronic device 100. Also, the electronic device 100 may provide the image 74_b to the user of the electronic device, the user of the device of another user, or the third user, regardless of the selection of the user.
5. Scenario 4
5.1. Identification with Respect to Content Obtainment
FIGS. 38, 42, 43A, 43B, 44, 45, 46A, 46B, 47, 48, 50A through 50D, 51A through 51D, 52A through 52D, 53A through 53C, 53D1, 53D2, 54A through 54D, 55A through 55D, and 56 are views for describing a method of providing content according to an exemplary embodiment. These figures illustrate a user interface according to operations of a scenario in which the electronic device 100 provides a video call service and provides to a user at least one of an image, a recommended application, and a possible direct answer, as the response related content.
Referring to FIG. 38, Sunny, the user of the electronic device 100, is performing the video call service with John, the user of the device of another user. The electronic device 100 may display appearances of the user of the electronic device and the user of the device of another user in real time. The electronic device may display content of the conversation between the user of the electronic device and the user of the device of another user in real time. For example, the electronic device 100 may display the content of the conversation between the users of the electronic device 100 and the device of another user 200 in a speech bubble shape via the STT function.
The electronic device receives the message 57 "Where are you?" from the device of another user 200, which is the device of John. The electronic device 100 and the device of another user 200 may display the exchanged messages via the STT function. The electronic device 100 continuously transmits messages to the device of another user, and receives messages from the device of another user 200. The electronic device 100 may determine whether the continuously received messages are questions. The user of the electronic device 100 receives the message 57 "where are you?" from John.
Referring to FIG. 39, when the electronic device 100 determines that the message 57 is a question, the electronic device 100 may check whether the user needs to be provided with content which may be used in responding to the message 57, via the pop-up window 65.
5.2. Proposal of Content
Referring to FIGS. 40 through 42, the electronic device 100 may display the content which may be used in responding to the message 57, in correspondence to a user input.
Referring to FIG. 40, when the electronic device 100 receives the message 57 illustrated in FIG. 38, the electronic device 100 may recommend an application related to the message 57. For example, when content of the message 57 is related to a location, the electronic device 100 may recommend a map application or a navigation application.
Referring to FIG. 41, when the electronic device 100 receives a user input (for example, a user input touching a Yes button) corresponding to the mark illustrated in FIG. 39, the electronic device 100 may recommend an application related to the message 57. For example, when the message corresponding to the question is related to a location, the electronic device 100 may recommend a map application or a navigation application.
Referring to FIG. 42, when the electronic device 100 receives a user input (for example, a user input touching a Yes button) corresponding to the mark illustrated in FIG. 39, the electronic device may recommend at least one of an image, an application, and a direct response related to the message 57.
5.3. Specific Proposal Situation (1)
Referring to FIG. 43A, the electronic device 100 may provide images to the user when the user requests the content which may be used in responses.
Referring to FIG. 43B, when the user wants to receive the content which may be used in responding to the message, the electronic device 100 may provide images to the user without additionally identifying whether to provide the content which may be used in responses.
Referring to FIG. 44, the user may select the image 77_a from among the content which may be used in responses. The electronic device 100 may receive a user input selecting the content. For example, the user input may be a touch input selecting the image 77_a.
The electronic device 100 may provide a response edit window to the user. The user may edit the selected image 77_b via the response edit window.
Referring to FIG. 45, when the user touches the image 77_b which the user has finished editing, the edited image 77_b may be provided to the conversation partner. For example, when the user touches the image 77_b which the user has finished editing, the electronic device 100 may transmit the edited image 77_b to the conversation partner. The electronic device 100 may transmit the edited image 77_b via a portion displaying the appearance of the user.
When the user finishes editing the selected image, the electronic device may provide the image 77_b to the user of the electronic device 100, the user of the device of another user 200, or a third user, via a mailing service or an SNS, according to a selection of the user. Also, the electronic device 100 may provide the image 77_b to the user of the electronic device 100, the user of the device of another user 200, or the third user, via one or more services, regardless of the selection of the user.
5.4. Specific Proposal Situation (2)
Referring to FIG. 46A, when the user requests the content which may be used in responses, the electronic device 100 may provide an image, an application, and a possible direct answer.
Referring to FIG. 46B, the electronic device may provide the image, the application, and the possible direct answer without additionally identifying whether to provide the content which may be used in responding to the message.
Referring to FIG. 47, the user may select a map application 78_a from among the content which may be used in responses. The electronic device 100 may receive a user input. For example, the user input may be a touch input selecting the application 78_a.
The electronic device 100 may provide a response edit window to the user, in correspondence to a user input. The electronic device 100 may display, via the response edit window, a result of inputting a keyword extracted from the message into the application 78_a as an input value. For example, as illustrated in FIG. 47, the electronic device 100 may display, via the response edit window, the image 78_b, which is the result of inputting the keyword "a current location" into the map application.
The electronic device 100 may provide an editing environment via the response edit window, via which the user may edit the result image 78_b. For example, the user may enlarge or reduce the result image 78_b via the response edit window, and may select a portion of the result image 78_b.
Referring to FIG. 48, when the user selects the result image 78_b which has been edited, the result image 78_b may be provided to the conversation partner. For example, when the user touches the result image 78_b which has been edited, the electronic device 100 may transmit the edited result image 78_b to the conversation partner. The electronic device 100 may transmit the edited result image 78_b via a portion displaying the appearance of the user.
6. Scenario 5FIGS. 50A through 50D, 51A through 51D, 52A through 52D, and 53A through 53C,53D1, and53D2 are views for generally describing a method of providing content according to an exemplary embodiment.FIGS. 49A through 49D, 50A through 50D, 51A through 51D, 52A through 52D, and 53A through 53C,53D1, and53D2 are views of a user interface according to operations of a scenario in which theelectronic device100 and the device of anotheruser200 recommend to the users a possible direct answer, a restaurant application, a response-related image, a map application, and a news application, as the response related content, while theelectronic device100 and the device of anotheruser200 provide a text message service.
6.1. Proposal of a Direct ResponseReferring toFIGS. 49A and 49B, John, the user of theelectronic device100 is conversing with Mike, the user of the device of anotheruser200, via a text message service. John receives amessage111 “Where are you going in this afternoon?” from Mike. Theelectronic device100 and the device of anotheruser200 may display exchanged messages. Theelectronic device100 may determine whether continuously received messages are questions.
Referring to FIG. 49A, when the electronic device 100 determines that the message 111 is a question, the electronic device 100 may provide direct answer candidates 111_a, 111_b, and 111_c to the user. John, the user of the electronic device 100, may click one direct answer candidate 111_b to transmit the direct answer candidate 111_b to the conversation partner, Mike.
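A minimal sketch of the question detection and direct-answer proposal described above, assuming a naive heuristic classifier and schedule entries as candidate answers; every name here is hypothetical and only illustrates the idea.

```python
# Hypothetical sketch: classify an incoming message as a question and, if so,
# propose the user's schedule entries as direct answer candidates.

QUESTION_WORDS = ("who", "what", "when", "where", "why", "how")

def is_question(message: str) -> bool:
    text = message.strip().lower()
    return text.endswith("?") or text.startswith(QUESTION_WORDS)

def direct_answer_candidates(message: str, schedule: list[str]) -> list[str]:
    # When the message asks about plans, propose schedule entries as answers.
    if is_question(message) and "where" in message.lower():
        return schedule
    return []

schedule = ["13:00 Lunch meeting", "15:00 Business trip", "18:00 Conference meeting"]
print(direct_answer_candidates("Where are you going in this afternoon?", schedule))
```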
Referring to FIGS. 49C and 49D, John and Mike may identify exchanged messages 111, 112, 211, and 212 via each of the devices.
6.2. Proposal of a Restaurant Via a Restaurant Application
Referring to FIGS. 50A and 50B, Mike may transmit a message 213 "Really? I was there for one month" to John. The electronic device 100 may display a message 113 received from the device of another user 200. John may transmit a message 114 "Do you know a good restaurant near the Fenway Park?" to Mike. The device of another user 200 may display a message 214 received from the electronic device 100.
Referring to FIGS. 50C and 50D, the device of another user 200 may recommend an application 214_a related to various restaurants, in correspondence to the received message, and may recommend menu content. Mike may touch the menu content to transmit a message 115 including the menu content to John.
6.3. Proposal of the Response Via an Image
Referring to FIGS. 51A and 51B, John, the user of the electronic device 100, may transmit a message 116 "Do you have a picture?" to Mike, the user of the device of another user 200. The device of another user 200 may display a message 216 received from John.
Referring to FIGS. 51C and 51D, the device of another user 200 may recommend an image 216_a in correspondence to the received message. Mike may touch the recommended food image to transmit a message 118 including the image to John.
6.4. Transmission of Its Own Location Via a Map Application
Referring to FIGS. 52A and 52B, Mike may transmit a message 219 "Where do you depart from?" to John. The electronic device 100 may display a message 119 received from Mike.
Referring to FIGS. 52C and 52D, the electronic device 100 may recommend a map application 119_a in correspondence to the received message 119. John may transmit his location to Mike via the recommended map application, via a message 220.
6.5. Transmission of an Article Via a News Application
Referring to FIGS. 53A and 53B, John may transmit a message 121 "Is there a hot issue in Boston?" to Mike.
Referring to FIGS. 53C, 53D1, and 53D2, the device of another user 200 may display a message 221 received from John. The device of another user 200 may recommend a news application 221_a in correspondence to the received message 221. When Mike selects the recommended news application 221_a, the device of another user 200 may display a news article via a screen transformation as illustrated in FIG. 53D2, and Mike may select the news article to transmit the related article to the electronic device 100 of John. The electronic device 100 may display a message 122 including a link to the related article and an article summary screen.
7. Scenario 6
FIGS. 54A through 54D are views for describing a method of providing content according to an exemplary embodiment. FIGS. 54A through 54D are views of a user interface according to operations of a scenario in which the electronic device 100 and the device of another user 200 provide a text message service during a voice call, and recommend to the users a response via a map application, as the response-related content.
Referring to FIG. 54A, the electronic device 100 may receive a voice message 131 "Where is the accident point?" during the voice call. Referring to FIG. 54B, the electronic device 100 may identify whether the user wants a recommended response via a pop-up screen 132. Referring to FIG. 54C, the user of the electronic device 100 may select a map application by a touch input. Referring to FIG. 54D, the device of another user 200 may receive a message 232 indicating a location of the user of the electronic device 100 from the electronic device 100.
8. Scenario 7
FIGS. 55A through 55D are views for describing a method of providing content according to an exemplary embodiment. FIGS. 55A through 55D are views of a user interface according to operations of a scenario in which the electronic device 100 and the device of another user 200 recommend to the users different response candidates as the response-related content, while the electronic device 100 and the device of another user 200 provide a text message service.
Referring to FIG. 55A, John may receive a message 141 "Where are you going in this afternoon?" from his boss. In response to this, the electronic device 100 may propose direct responses, such as "13:00 Lunch meeting" 141_a, "15:00 Business trip" 141_b, or "18:00 Conference meeting" 141_c, and a related application 141_d.
Referring to FIG. 55B, John may receive a message 151 "Where are you going in this afternoon?" from a friend. In response to this, the electronic device 100 may propose direct responses, such as "13:00 Lunch meeting" 151_a, "14:00 Coffee time with Peter" 151_b, or "18:00 Conference meeting" 151_c, and a related application 151_d.
Referring to FIGS. 55A and 55B, in response to the same question, the electronic device 100 or the device of another user 200 may recommend different responses according to the relationship between the user of the electronic device 100 or the user of the device of another user 200, and the conversation partner. For example, when the electronic device 100 receives the message 141 from the boss, the response "15:00 Business trip" 141_b may be recommended, and when the electronic device 100 receives the message 151 from the friend, the response "14:00 Coffee time with Peter" 151_b may be recommended rather than the response "15:00 Business trip" 141_b.
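One way to realize this relationship-dependent recommendation is to tag each candidate with the relationships allowed to see it and filter by the sender's relationship. The sketch below assumes such visibility tags; the data layout and names are illustrative only, not the method prescribed by the embodiment.

```python
# Hypothetical sketch: the same question yields different candidates
# depending on the asker's relationship to the user.

from dataclasses import dataclass

@dataclass
class ScheduleEntry:
    text: str
    visibility: set  # relationships allowed to see this entry

def recommend(entries: list, relationship: str) -> list:
    # Keep only the entries the sender's relationship is allowed to see.
    return [e.text for e in entries if relationship in e.visibility]

entries = [
    ScheduleEntry("13:00 Lunch meeting", {"boss", "friend"}),
    ScheduleEntry("14:00 Coffee time with Peter", {"friend"}),
    ScheduleEntry("15:00 Business trip", {"boss"}),
    ScheduleEntry("18:00 Conference meeting", {"boss", "friend"}),
]

print(recommend(entries, "boss"))    # work-appropriate candidates
print(recommend(entries, "friend"))  # personal candidates
```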
Referring to FIG. 55C, John may receive a message 142 "How about buying a nice car?" from a salesperson. In response to this, the electronic device 100 may recommend direct responses, such as "No, I don't need to buy a car" 142_a or "Never call me" 142_b, and a related application 142_c.
Referring to FIG. 55D, John may receive a message 152 "How about buying a nice car?" from a friend. In response to this, the electronic device 100 may propose direct responses, such as "If you give it to me, I'll be so happy" 152_a or "I dream about it every day" 152_b, and a related application 152_c.
Referring to FIGS. 55C and 55D, in response to the same question, the electronic device 100 or the device of another user 200 may recommend different responses according to the relationship between the user of the electronic device 100 or the user of the device of another user 200, and the conversation partner. For example, when the electronic device 100 receives the message 142 from the salesperson, the electronic device 100 may recommend the direct response "Never call me" 142_b, and when the electronic device 100 receives the message 152 from the friend, the electronic device 100 may recommend the direct response "I dream about it every day" 152_b.
9. Components of the Electronic Device
FIG. 56 is a block diagram illustrating a structure of a user terminal device 1000 according to an exemplary embodiment. The user terminal device 1000 illustrated in FIG. 56 may be the electronic device 100 of FIG. 1.
Referring to FIG. 56, the user terminal device 1000 may include at least one selected from a display unit 1100, a control unit (e.g., controller) 1700, a memory 1200, a global positioning system (GPS) chip 1250, a communication unit 1300, a video processor 1350, an audio processor 1400, a user input unit 1450, a microphone unit 1500, a photographing unit (e.g., camera) 1550, a speaker unit 1600, and a motion detection unit (e.g., motion detector) 1650.
The display unit 1100 may include a display panel 1110 and a controller for controlling the display panel 1110. The display panel 1110 may be realized as various types of displays, such as a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AM-OLED) display, and a plasma display panel (PDP). The display panel 1110 may be realized to be flexible, transparent, or wearable. The display unit 1100 may be combined with a touch panel 1470 of the user input unit 1450 to be provided as a touch screen. For example, the touch screen may include an integral module in which the display panel 1110 and the touch panel 1470 are stacked. The memory 1200 may include at least one of an internal memory and an external memory.
The internal memory may include at least one selected from, for example, volatile memories (for example, dynamic RAM (DRAM), static RAM (SRAM), and synchronous dynamic RAM (SDRAM)), non-volatile memories (for example, one time programmable ROM (OTPROM), programmable ROM (PROM), erasable and programmable ROM (EPROM), electrically erasable and programmable ROM (EEPROM), mask ROM, and flash ROM), a hard disk drive (HDD), and a solid state drive (SSD). According to an exemplary embodiment, the control unit 1700 may process a command or data received from the non-volatile memory or from at least one of the other components by loading the command or the data into the volatile memory. Also, the control unit 1700 may retain data received or generated from the other components in the non-volatile memory.
The external memory may include at least one selected from, for example, compact flash (CF), secure digital (SD), micro secure digital (Micro-SD), mini secure digital (Mini-SD), extreme digital (xD), and a memory stick.
The memory 1200 may store various programs and data used for an operation of the user terminal device 1000. For example, the memory 1200 may temporarily or semi-permanently store at least a portion of content that is to be displayed on a lock screen.
The control unit 1700 may control the display unit 1100 such that a portion of the content stored in the memory 1200 is displayed on the display unit 1100. In other words, the control unit 1700 may display a portion of the content stored in the memory 1200 on the display unit 1100. Alternatively, the control unit 1700 may perform a control operation corresponding to a user's gesture, when the user's gesture is generated on a portion of the display unit 1100.
The control unit 1700 may include at least one of the natural language processing unit (NLU) 121, the session manager 122, the profiler 123, the context analyzer 124, the response generator 125, the content metadata storage 126, and the App register 127 of the knowledge framework 120 as illustrated in FIG. 9A. For example, the control unit 1700 of the user terminal device 1000 may include the natural language processing unit (NLU) 121, the session manager 122, the profiler 123, the context analyzer 124, and the response generator 125, and the server 300 may include the App register 327 and the content metadata storage 326, as shown in FIG. 12B. According to another exemplary embodiment, the control unit 1700 may include the profiler 123 and the context analyzer 124, and the server 300 may include the natural language processing unit (NLU) 321, the session manager 322, the response generator 325, the content metadata storage 326, and the App register 327, as shown in FIG. 13B. Alternatively, the control unit 1700 of the user terminal device 1000 may include the profiler 123, the App register 127, and the content metadata storage 126, and the server 300 may include the natural language processing unit (NLU) 321, the session manager 322, the response generator 325, the context analyzer 324, and a profiler 323, as shown in FIG. 14B.
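The paragraph above describes several ways of splitting the same knowledge-framework modules between the device and the server. The sketch below restates those placements as data; the dictionary layout and the helper are assumptions for illustration, not a prescribed configuration format.

```python
# Hypothetical sketch: the same components can run on the device or the
# server, depending on the chosen placement (cf. FIGS. 9A, 12B, and 13B).

PLACEMENTS = {
    # All recommendation logic on the device (cf. FIG. 9A)
    "on_device": {
        "device": ["NLU", "SessionManager", "Profiler", "ContextAnalyzer",
                   "ResponseGenerator", "ContentMetadataStorage", "AppRegister"],
        "server": [],
    },
    # Metadata and the app registry offloaded to the server (cf. FIG. 12B)
    "hybrid": {
        "device": ["NLU", "SessionManager", "Profiler", "ContextAnalyzer",
                   "ResponseGenerator"],
        "server": ["AppRegister", "ContentMetadataStorage"],
    },
    # Most processing on the server (cf. FIG. 13B)
    "server_heavy": {
        "device": ["Profiler", "ContextAnalyzer"],
        "server": ["NLU", "SessionManager", "ResponseGenerator",
                   "ContentMetadataStorage", "AppRegister"],
    },
}

def runs_on_device(component: str, placement: str) -> bool:
    return component in PLACEMENTS[placement]["device"]

print(runs_on_device("NLU", "server_heavy"))  # False: NLU runs on the server
```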
The control unit 1700 may include at least one selected from random access memory (RAM) 1710, read-only memory (ROM) 1720, a central processing unit (CPU) 1730, a graphic processing unit (GPU) 1740, and a bus 1750. The RAM 1710, the ROM 1720, the CPU 1730, and the GPU 1740 may be connected with one another via the bus 1750.
The CPU 1730 accesses the memory 1200 and performs a boot operation by using an O/S stored in the memory 1200. Also, the CPU 1730 performs various operations by using various programs, content, and data stored in the memory 1200.
A set of commands for a system boot operation is stored in the ROM 1720. For example, when a turn-on command is input to the user terminal device 1000 and power is supplied, the CPU 1730 may copy the O/S stored in the memory 1200 into the RAM 1710, according to the commands stored in the ROM 1720, and may execute the O/S to boot the system. When the booting is completed, the CPU 1730 may copy various programs stored in the memory 1200 into the RAM 1710 and execute the programs copied into the RAM 1710 to perform various operations. The GPU 1740 displays a UI screen on a portion of the display unit 1100, when the booting of the user terminal device 1000 is completed. In detail, the GPU 1740 may generate a screen in which electronic files including various objects, such as content, icons, and menus, are displayed. The GPU 1740 may calculate feature values, such as coordinate values, shapes, sizes, and colors of the displayed objects, in correspondence to a layout of the screen. Also, the GPU 1740 may generate screens of various layouts including the objects, based on the calculated feature values. The screens generated by the GPU 1740 may be provided to the display unit 1100 and may each be displayed in a corresponding portion of the display unit 1100.
The GPS chip 1250 may receive a global positioning system (GPS) signal from a GPS satellite to calculate a current location of the user terminal device 1000. The control unit 1700 may calculate a user location by using the GPS chip 1250, when a navigation program is used or on other occasions when a current user location is needed.
The communication unit 1300 may perform communication with various types of devices of another user, according to various types of communication methods. The communication unit 1300 may include at least one selected from a WiFi chip 1310, a Bluetooth chip 1320, a wireless communication chip 1330, and an NFC chip 1340. The control unit 1700 may perform communication with various types of devices of another user by using the communication unit 1300.
The WiFi chip 1310 and the Bluetooth chip 1320 may perform communication by using a WiFi method and a Bluetooth method, respectively. When the WiFi chip 1310 or the Bluetooth chip 1320 is used, various connection information, such as an SSID and a session key, is first transmitted and received, and then a communication connection is established by using the connection information, to transmit and receive various information. The wireless communication chip 1330 refers to a chip performing communication according to various communication standards, such as IEEE, Zigbee, 3rd generation (3G), 3rd generation partnership project (3GPP), and long term evolution (LTE). The NFC chip 1340 refers to a chip operating in a near field communication (NFC) method using the 13.56 MHz band from among various RF-ID frequency bands, including 135 kHz, 13.56 MHz, 433 MHz, 860-960 MHz, and 2.45 GHz.
The video processor 1350 may process content received via the communication unit 1300 or video data included in content stored in the memory 1200. The video processor 1350 may perform various image processing with respect to the video data, such as decoding, scaling, noise filtering, frame rate conversion, and resolution conversion.
The audio processor 1400 may process content received via the communication unit 1300 or audio data included in the content stored in the memory 1200. The audio processor 1400 may perform various processing with respect to the audio data, such as decoding, amplification, and noise filtering.
The control unit 1700 may drive the video processor 1350 and the audio processor 1400 when a reproducing program with respect to multimedia content is executed, to reproduce the corresponding content. The speaker unit 1600 may output audio data generated by the audio processor 1400.
The user input unit 1450 may receive an input of various commands from a user. The user input unit 1450 may include at least one selected from a key 1460, a touch panel 1470, and a pen recognition panel 1480.
The key 1460 may include various types of keys, such as a mechanical button and a wheel, formed in various portions of an external body of the user terminal device 1000, such as a front portion, a side portion, and a rear portion.
The touch panel 1470 may sense a touch input of a user and may output a touch event value corresponding to the sensed touch signal. When the touch panel 1470 is formed as the touch screen by being combined with the display panel 1110, the touch screen may be realized as various types of touch sensors, such as a capacitive type, a resistive (decompression) type, and a piezoelectric type. The capacitive type uses a method of calculating a touch coordinate by sensing the minute electricity generated by the body of a user when a part of the body touches a surface of the touch screen, by using a dielectric coated on the surface of the touch screen. The resistive type includes two electrode plates equipped in the touch screen and uses a method of calculating a touch coordinate by sensing a current flow due to the upper and lower plates contacting each other at a touched point, when a user touches the screen. The touch event occurring in the touch screen may be generated mainly by a finger of a human being, but may also be generated by an object of a conductive material which may cause a change in capacitance.
The pen recognition panel 1480 may sense a proximity input of a pen or a touch input of a pen according to an operation of a touch pen (for example, a stylus pen) or a digitizer pen by a user, and may output a pen proximity event or a pen touch event that is sensed. The pen recognition panel 1480 may be realized, for example, by an electromagnetic resonance (EMR) method, and may sense a touch or proximity input according to a change in the strength of an electromagnetic field due to the approach or touch of a pen. In detail, the pen recognition panel 1480 may be formed by including an electromagnetic induction coil sensor having a grid structure, and an electronic signal processing unit sequentially providing signals having pre-determined frequencies to each loop coil of the electromagnetic induction coil sensor. If there is a pen in which a resonator circuit is equipped, around the loop coil of the pen recognition panel 1480, the magnetic field transmitted from the corresponding loop coil may generate a current based on mutual electromagnetic induction, in the resonator circuit in the pen. Based on the current, an inductive magnetic field is generated from a coil forming the resonator circuit in the pen, and the pen recognition panel 1480 may detect the inductive magnetic field in the loop coil in a state of signal reception, so that an approach or a touch location of the pen may be sensed. The pen recognition panel 1480 may be provided below the display panel 1110, and may have a pre-determined area, for example, an area which may cover a display portion of the display panel 1110.
The microphone unit 1500 may receive an input of a user's voice or other noises and convert the input into audio data. The control unit 1700 may use the user's voice input through the microphone unit 1500 in a calling operation, or may convert the user's voice into audio data to store the audio data in the memory 1200.
The photographing unit 1550 may photograph a still image or a video according to a control of the user. The photographing unit 1550 may be realized as a plurality of cameras, for example, a front camera and a rear camera.
When the photographing unit 1550 and the microphone unit 1500 are provided, the control unit 1700 may perform a control operation according to a user's voice input through the microphone unit 1500 or a user's motion recognized by the photographing unit 1550. For example, the user terminal device 1000 may operate in a motion control mode or a sound control mode. When the user terminal device 1000 operates in the motion control mode, the control unit 1700 may activate the photographing unit 1550 to photograph the user, and may track a motion change of the user to perform a control operation corresponding to the motion change. When the user terminal device 1000 operates in the sound control mode, the control unit 1700 may analyze the user's voice input through the microphone unit 1500 and may operate in a sound recognition mode that performs a control operation according to the analyzed voice.
The motion detection unit 1650 may sense a motion of the body of the user terminal device 1000. The user terminal device 1000 may rotate or may tilt in various directions. Here, the motion detection unit 1650 may detect characteristics of the motion, such as a direction and an angle of the rotation, and a tilting degree, by using at least one of various sensors, such as a geomagnetic sensor, a gyro sensor, and an acceleration sensor.
In addition, the user terminal device 1000 may further include a USB port to which a USB connector may be connected, various external input ports for connections with various external terminals, such as a headset, a mouse, and a LAN, a DMB chip receiving and processing a digital multimedia broadcasting (DMB) signal, and various sensors.
Names of the above-described components of the user terminal device 1000 may vary. Also, the user terminal device 1000 may be formed by including at least one of the above-described components, by omitting some of the above-described components, or by further including additional components.
FIG. 57 is a block diagram of a software structure of the user terminal device 1000.
Referring to FIG. 57, the memory 1200 may store an operating system for controlling a resource of the user terminal device 1000, and an application program for operating an application. The operating system may include a kernel 1210, middleware, an application program interface (API), etc. The operating system may include, for example, Android, iOS, Windows, Symbian, Tizen, Bada, etc.
The kernel 1210 may include at least one of a device driver 1210-1 for managing the resource and a system resource manager 1210-2. The device driver 1210-1 may allow software to access and control hardware of the user terminal device 1000. To this end, the device driver 1210-1 may be divided into an interface and a separate driver module provided by each hardware company. The device driver 1210-1 may include, for example, at least one selected from a display driver, a camera driver, a Bluetooth driver, a shared memory driver, a USB driver, a keypad driver, a Wi-Fi driver, an audio driver, and an inter-process communication (IPC) driver. The system resource manager 1210-2 may include at least one of a process managing unit, a memory managing unit, and a file system managing unit. The system resource manager 1210-2 may control, allocate, and retrieve the system resource.
The middleware 1220 may include a plurality of modules which are realized in advance to provide common features of various applications. The middleware 1220 may provide functions via the API 1230 so that an application 1240 may efficiently use the resources of the user terminal device 1000. The middleware 1220 may include at least one selected from a plurality of modules including, for example, an application manager 1220-1, a windows manager 1220-2, a multimedia manager 1220-3, a resource manager 1220-4, a power manager 1220-5, a database manager 1220-6, a package manager 1220-7, a connection manager 1220-8, a notification manager 1220-9, a location manager 1220-10, a graphic manager 1220-11, and a security manager 1220-12.
The application manager 1220-1 may manage a life cycle of at least one of the applications 1240. The windows manager 1220-2 may manage graphic user interface (GUI) resources used on a screen. The multimedia manager 1220-3 may check a format for reproducing various media files, and may encode or decode the media files by using a codec corresponding to the format. The resource manager 1220-4 may manage resources of at least one of the applications 1240, such as a source code, memory, and a storage space. The power manager 1220-5 may manage a battery or power by operating together with a BIOS, and may provide power information for operations. The database manager 1220-6 may generate, search, and change a database to be used in at least one of the applications 1240. The package manager 1220-7 may install or update an application distributed in a package file format. The connection manager 1220-8 may manage wireless connections, such as WiFi or Bluetooth. The notification manager 1220-9 may display or notify the user of events, such as a message arrival, an appointment, and a proximity notification, in a manner that does not disturb the user. The location manager 1220-10 may manage location information of the user terminal device 1000. The graphic manager 1220-11 may manage graphic effects which are provided to the user, and a user interface related thereto. The security manager 1220-12 may provide various security functions required for system security or user authentication. When the user terminal device 1000 includes a call function, the middleware 1220 may further include a call manager for managing a voice or video call function of the user.
The middleware 1220 may further include a runtime library 1220-13 or other library modules. The runtime library 1220-13 is a library module used by a compiler to add a new function via a programming language while an application is executed. For example, the runtime library 1220-13 may perform functions related to input and output, memory management, or arithmetic. The middleware 1220 may generate a new middleware module by combining various functions of the above-described internal modules. The middleware 1220 may provide a specialized module according to a type of operating system, to provide differentiated functions. The middleware 1220 may dynamically omit some of the existing components or add new components. The middleware 1220 may be formed by partially omitting the components described in the present exemplary embodiment, by further adding other components, or by replacing the components with other components performing substantially the same functions under different names.
The API 1230 is a set of API programming functions, and may be formed of different components according to the operating system. In the case of Android or iOS, for example, one API set may be provided per platform, and in the case of Tizen, for example, two or more API sets may be provided.
The application 1240 may include a preloaded application which is installed by default or a third party application which a user may install and use. The application 1240 may include, for example, at least one selected from a home application 1240-1 for going back to a home screen, a dialer application 1240-2 for making a call with a partner, a text message application 1240-3 for receiving a message from a partner identified by a telephone number, an instant message (IM) application 1240-4, a browser application 1240-5, a camera application 1240-6, an alarm application 1240-7, a phone book application 1240-8 for managing a telephone number or an address of a partner, a call log application 1240-9 for managing a call log of a user, a text message receiving or sending log, or a missed call log, an email application 1240-10 for receiving a message from a partner identified by an email address, a calendar application 1240-11, a media player application 1240-12, an album application 1240-13, and a watch application 1240-14. Names of the described components of the software in the present exemplary embodiment may vary according to types of operating systems. Also, the software according to the present exemplary embodiment may include at least one of the above-described components, omit some of the above-described components, or may further include other additional components.
FIG. 58 is a view of a user interface (UI) of an electronic device 2000 according to an exemplary embodiment. The electronic device 2000 may be the electronic device 100 of FIG. 1.
The electronic device 2000 may include a processor 2700, an input interface 2400, and an output interface 2100.
The processor 2700 may include a mobile application processor or a central processing unit. The processor 2700 may also be referred to as a controller or a control unit. The term "processor" may be used to denote a core, a display controller, and an image signal processor (ISP). The processor 2700 may include at least one of the components 121 through 126, 1710, 1720, 1730, and 1740 of the control unit 1700 described with reference to FIG. 56.
The processor 2700 according to an exemplary embodiment may extract at least one keyword from a message displayed on a screen via a message service. Also, the processor 2700 according to an exemplary embodiment may newly generate a keyword related to the extracted keyword. Also, the processor 2700 according to an exemplary embodiment may obtain content based on the newly generated keyword and location information of the electronic device 2000.
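A minimal sketch of this three-step behavior (extract a keyword, derive a related keyword, combine it with the device location to obtain content). The related-keyword table and the query string are hypothetical stand-ins, not a real search-server API.

```python
# Hypothetical sketch: keyword extraction, related-keyword generation,
# and location-aware content retrieval, as described above.

RELATED = {"restaurant": "food", "picture": "photo"}  # illustrative mapping

def extract_keywords(message: str) -> list[str]:
    words = [w.strip("?,.!").lower() for w in message.split()]
    return [w for w in words if w in RELATED]

def generate_related_keyword(keyword: str) -> str:
    return RELATED[keyword]

def obtain_content(message: str, location: tuple) -> list[str]:
    results = []
    for kw in extract_keywords(message):
        related = generate_related_keyword(kw)
        # Stands in for querying a search server with keyword + location.
        results.append(f"content for '{related}' near {location}")
    return results

print(obtain_content("Do you know a good restaurant near the Fenway Park?",
                     (42.3467, -71.0972)))
```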
The input interface 2400 may denote a device used by a user to input data to control the electronic device 2000. For example, the input interface 2400 may be a key pad, a dome switch, a touch pad (using a touch-type capacitance method, a pressure-type resistive method, an infrared sensing method, a surface ultrasonic conductive method, an integral tension measuring method, a piezo effect method, etc.), a jog wheel, or a jog switch. Also, the input interface 2400 may include a touch screen, a touch panel, a microphone, and a keyboard.
Also, the input interface 2400 may include at least one module for receiving data from the user. For example, the input interface 2400 may include a motion recognition module, a touch recognition module, a voice recognition module, etc.
The touch recognition module may sense a touch gesture of the user on a touch screen and transmit information related to the touch gesture to the processor. The voice recognition module may recognize a voice of the user by using a voice recognition engine and transmit the recognized voice to the processor. The motion recognition module may recognize motion of an object and transmit information related to the motion of the object to the processor.
Throughout the specification, an "input" made by the user via the input interface 2400 of the electronic device 2000 may include at least one selected from a touch input, a bending input, a voice input, a key input, and a multimodal input. However, it is not limited thereto.
A “touch input” may signify a gesture performed by a user on a touch screen to control theelectronic device100. The touch gesture stated in the present specification may include tapping, touching & holding, double tapping, dragging, panning, flicking, dragging & dropping, etc.
“Tapping” is a user's motion of touching a screen by using a finger or a touch tool such as an electronic pen (e.g., stylus) and then instantly lifting the finger or touch tool from the screen without moving.
“Touching & holding” is a user's motion of touching a screen by using a finger or a touch tool such as an electronic pen and then maintaining the above touching motion over a critical time, for example, 2 seconds, after touching the screen. In other words, a time difference between a touch-in time and a touch-out time is greater than or equal to the critical time, for example, 2 seconds. When a touch input lasts over the critical time, to remind the user whether the touch input is tapping or touching & holding, a feedback signal may be provided in a visual, acoustic, or tactile manner. The critical time may vary according to an exemplary embodiment.
“Double tapping” is a user's motion of touching the screen twice by using the finger or touch tool.
“Dragging” is a user's motion of touching the screen by using the finger or touch tool and moving the finger or touch tool to another position on the screen while keeping the touching motion. The dragging motion may enable the moving or panning motion of an object.
“Panning” is a user's motion of performing a dragging motion without selecting an object. Since no object is selected in the panning motion, no object is moved in a page but the page itself is moved on the screen or a group of objects may be moved within a page.
“Flicking” is a user's motion of performing a dragging motion over a critical speed, for example, 100 pixel/s, by using the finger or touch tool. The dragging (panning) motion or the flicking motion may be distinguished based on whether the moving speed of the finger or touch tool is over the critical speed, for example, 100 pixel/s, or not.
“Dragging & Dropping” is a user's motion of dragging an object to a predetermine position on the screen by using the finger or touch tool and then dropping the object at that position.
“Pinching” is a user's motion of moving two fingers touching on the screen in opposite directions. The pinching motion is a gesture to magnify (open pinch) or contract (close pinch) an object or a page. A magnification value or a contraction value is determined according to the distance between the two fingers.
“Swiping” is a user's motion of touching an object on the screen by using the finger or touch tool and simultaneously moving the object horizontally or vertically by a predetermine distance. A swiping motion in a diagonal direction may not be recognized as a swiping event.
“A motion input” refers to motion that a user applies to theelectronic device100 to control theelectronic device100. For example, the motion input may include the user's rotating theelectronic device100, the user's titling theelectronic device100, and the user's moving theelectronic device2000 in up, down, left, and right directions. Theelectronic device2000 may detect a motion which is predetermined by the user, by using an acceleration sensor, a tilt sensor, a gyro sensor, and a 3-axis magnetic sensor.
Throughout the specification, the "bending input" refers to a user input of bending the entire electronic device 2000 or a portion of the electronic device 2000 to control the electronic device 2000, when the electronic device 100 is a flexible display device. According to an exemplary embodiment, the electronic device 2000 may sense a bending location (a coordinate value), a bending direction, a bending angle, a bending speed, the number of times of bending, an occurrence point in time of a bending operation, a period of time for which the bending operation is retained, etc., by using a bending sensor.
Throughout the specification, the term "key input" refers to a user input of controlling the electronic device 2000 by using a physical key attached to the electronic device 2000 or a virtual key provided by the electronic device 2000.
The output interface is configured to output an audio signal, a video signal, or an alarm signal, and may include a display module, a sound output module, etc. Also, the output interface may include a flat display device which may display a two-dimensional image, or a flat display device which may display a three-dimensional image. The output interface may include a device which may output a three-dimensional hologram.
The electronic device 2000 may exchange information with a search server via communication. For example, the electronic device 2000 may communicate with the search server via at least one protocol. For example, the electronic device 2000 may communicate with the search server via at least one protocol selected from a trivial file transfer protocol (TFTP), a simple network management protocol (SNMP), a simple mail transfer protocol (SMTP), a post office protocol (POP), an internet control message protocol (ICMP), a serial line interface protocol (SLIP), a point to point protocol (PPP), a dynamic host configuration protocol (DHCP), a network basic input output system (NETBIOS), an internetwork packet exchange (IPX/SPX), an internet protocol (IP), an address resolution protocol (ARP), a transmission control protocol (TCP), a user datagram protocol (UDP), Winsock, and a routing information protocol (RIP). However, it is not limited thereto.
Theelectronic device2000 may perform near field communication via a near field communication module. The near field communication technology may include wireless LAN (WiFi), Bluetooth, Zigbee, WiFi Direct (WFD), ultra wideband (UWB), infrared data association (IrDA), etc. However, it is not limited thereto.
While not restricted thereto, an exemplary embodiment can be embodied as computer-readable code on a computer-readable recording medium. The computer-readable recording medium is any data storage device that can store data that can be thereafter read by a computer system. Examples of the computer-readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. Also, an exemplary embodiment may be written as a computer program transmitted over a computer-readable transmission medium, such as a carrier wave, and received and implemented in general-use or special-purpose digital computers that execute the programs. Moreover, it is understood that in exemplary embodiments, one or more units of the above-described apparatuses and devices can include circuitry, a processor, a microprocessor, etc., and may execute a computer program stored in a computer-readable medium.
The foregoing exemplary embodiments are merely exemplary and are not to be construed as limiting. The present teaching can be readily applied to other types of apparatuses. Also, the description of the exemplary embodiments is intended to be illustrative, and not to limit the scope of the claims, and many alternatives, modifications, and variations will be apparent to those skilled in the art.