The present invention relates to a system and a method for displaying content on a television in standby mode. More specifically, the present invention relates to a system and method making use of a television even in standby mode.
In recent times, televisions have become more sophisticated and increasingly functional.
Additionally, televisions have become more and more integrated into hi-fi networks or other types of networks having different sources at their disposal. While a television program is being watched, the functionality of the television can be fully used and enjoyed. On the other hand, when no television program is watched and the television is in standby mode or turned off, the television is no longer used even though it might provide a variety of functions.
It is therefore the objective problem of the present invention to improve the prior art. Specifically, it is an object of the present invention to provide a system and a method which allow content adapted to the preferences of a user to be displayed on a television even in standby mode.
The present invention according to a first aspect relates to a system for displaying content on a television in standby mode, comprising a storage for storing face data of one or more users and for storing preference data associated to the face data, said preference data indicating preferences of the user, a display for displaying content, and a standby indicator for indicating that the television is in standby mode, wherein, in case that the television is in standby mode, a front camera is activated for capturing an image of a user, and a recommendation device is activated for searching the attribute data of content currently available and for selecting, in case that the captured image corresponds to one of the stored face data, at least one content to be displayed having attribute data matching the associated stored preference data.
The present invention according to a further aspect relates to a method for displaying content on a television in standby mode, comprising the steps of storing face data of one or more users and storing preference data associated to the face data, said preference data indicating preferences of the user, displaying content, and indicating that the television is in standby mode, wherein, in case that the television is in standby mode, a front camera is activated for capturing an image of a user, and a recommendation device is activated for searching the attribute data of content currently available and for selecting, in case that the captured image corresponds to one of the stored face data, at least one content to be displayed having attribute data matching the associated stored preference data.
Advantageous features and embodiments are the subject matter of the dependent claims.
The present invention will now be explained in more detail in the following description of preferred embodiments in relation to the enclosed drawings in which
FIG. 1 shows a system for displaying content on a television in standby mode according to the present invention,
FIGS. 2 and 3 show embodiments of data stored in the storage according to the present invention, and
FIGS. 4 to 10 show flow charts with the process steps according to the method of the present invention.
FIG. 1 shows a system 100 for displaying content on a television in standby mode according to the present invention. The components shown in FIG. 1 can be all or partly integrated into a television. If only a part of the components is integrated into a television, then the other components can be integrated into one or more external components or devices which are connected to and in data communication with the television. In the following description, the embodiment in which all components are integrated into a television is described; however, as mentioned above, the present invention is not limited to such an embodiment.
The system 100 as shown in FIG. 1 of course comprises all further features necessary to enable its functionality, such as a power source or the like, which are omitted in the figures for the sake of clarity.
The system 100 comprises a storage 101. The storage 101 can be divided into one or more storage parts being volatile and/or non-volatile memory parts.
The system 100 further comprises a front camera 102 which is adapted to take pictures or images of objects in the environment of the system 100, specifically of objects in front of the system 100. The term “front” hereby is intended to refer to those parts of the environment from which the display of the television can be seen. The front camera 102 can be adapted to take still images and/or video. In one embodiment the front camera 102 is movable, i.e. turnable, in order to also take pictures or images of objects which are not in front of the system 100.
The front camera 102 is connected to and in data communication with a face recognition device 103. The face recognition device 103 is adapted to carry out a face recognition based on images submitted by the front camera 102. The face recognition is hereby carried out according to known methods, for example the algorithms according to Viewdle or UIUC. But also any other present or future face recognition algorithm can be used.
The system 100 further comprises a display 109 for displaying video content or image content. The display 109 can be any type of known or future display device, for example a liquid crystal display panel (LCD), thin-film transistor display (TFT), color-sequential display, plasma display panel (PDP), digital micro-mirror device or organic light emitting diode (OLED) display or any other type of display. Connected to the display is a graphics engine 106 which controls the display 109.
The system 100 further comprises a decoder 107 for receiving television programs which are broadcast. The term “television program” is intended to encompass all types of video content which are displayable on a display, panel or monitor. A digital tuner 108 within the system 100 is adapted to select a frequency in order to show a specific television program, and a channel control 105 is adapted to control the digital tuner 108 in order to select the channel to be displayed on the display 109 and to change the frequency of the digital tuner 108.
The system 100 optionally comprises a back camera 111, which, like the front camera 102, is adapted to take pictures or images either in the form of a single image or video. The back camera 111 hereby covers the environment of the system 100 which is not covered by the front camera 102, i.e. the back camera can take images of objects within parts of the environment from which the display cannot be seen. Specifically, if the system is placed near a wall, then the back camera 111 is adapted to take a picture of the wall.
However, FIG. 1 is only an exemplary embodiment of a system according to the present invention; the present invention is not limited to the shown components and structure, but can be adapted to any other television enabling the reception of television programs from an arbitrary source and the display of those television programs.
The system 100 further comprises an image processing device 110 which is adapted to process images and/or video received either from the front camera 102 or the back camera 111, or stored in the storage 101. Image processing can comprise changing the size, changing the resolution, changing the color, sharpening, turning, inverting or any other type of image processing.
The system 100 additionally comprises a standby indicator 112 indicating whether the television, into which the system 100 is integrated or to which the system 100 is connected, is in standby mode. That means the standby indicator 112 indicates whether the television is currently used for watching television programs or whether the television is currently switched off or put into standby mode. The standby indicator 112 can either automatically indicate a standby in case that the watching of television programs is ended by a user, or the standby indicator can be triggered by a corresponding function or key, for example the power key, which has to be pressed or activated by the user. That means that the standby indicator 112 indicates a standby either automatically based on predefined conditions or based on manual actions accomplished by the user.
The system 100 additionally comprises a recommendation device 104 for selecting content stored in the storage 101 and/or for selecting content from any other source, e.g. the front camera 102, the back camera 111 or any other source connected to the system 100, to be displayed on the display 109. The selection process will be explained in detail in the following.
The term “content” when used in the present specification is intended to refer to any type of data or information comprising displayable parts. That means that the content can either consist of displayable data or information or the content can comprise displayable data or information. Content can for example comprise text, images, video or any other type of displayable data. The content additionally can also comprise sound data, which can be played by a corresponding speaker or the like attached to the system 100. The content can also comprise further data or information functioning as instructions for further devices in order to enable not only a displaying of the content but also the activation of further devices such as sound, light and so on.
According to the present invention, in the storage 101 face data of one or more users are stored, and additionally preference data of the one or more users are stored and associated with the stored face data.
The preference data comprises information on preferences, habits and/or key words indicating preferences of the user with respect to content. The preference data can hereby have been input manually by a user and/or retrieved automatically by the system. The system can for example monitor the habits of a user when watching television programs and therefrom extract preference data of the user.
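The habit-monitoring described above can be sketched as follows. This is purely an illustrative sketch: the program fields, the keyword counting and the cut-off are assumptions, not a prescribed implementation of the recommendation device 104.

```python
from collections import Counter

def extract_preferences(viewing_history, top_n=3):
    """Derive preference key words from a user's viewing history.

    `viewing_history` is a hypothetical list of watched programs, each a
    dict carrying a "keywords" list taken from the program's metadata.
    The most frequently occurring key words become the stored preferences.
    """
    counts = Counter()
    for program in viewing_history:
        counts.update(program.get("keywords", []))
    return [kw for kw, _ in counts.most_common(top_n)]

history = [
    {"title": "Match of the Day", "keywords": ["football", "sport"]},
    {"title": "Cup Final", "keywords": ["football", "sport"]},
    {"title": "Nature Hour", "keywords": ["nature"]},
]
print(extract_preferences(history))  # ['football', 'sport', 'nature']
```

The derived list could then be stored as favorite key words 204 alongside the user's face data 202.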
If the user stops watching television, then either automatically or manually the standby indicator 112 will indicate the standby mode of the television.
Then the following process is activated. The front camera 102 captures an image of the user. The face recognition device 103 then accomplishes a face recognition of the face within the image captured by the front camera 102.
The recognized face is transmitted to the recommendation device 104, which compares the recognized face with the face data stored in the storage 101. Alternatively, the comparison step can also be carried out by the face recognition device 103. If the captured face data correspond to face data stored in the storage 101, then the recommendation device 104 will select a content out of all contents currently available whose attributes match the preference data stored with the corresponding face data.
More specifically, the recommendation device 104 receives a list of all contents currently available either from the storage 101, from the front camera 102, from the back camera 111, via an external device which is connected to the system 100, via the internet, via the decoder 107 or via any other available source. Usually, content has meta data and/or other additional data associated thereto indicating attributes of the content. Otherwise, the recommendation device 104 can be adapted to analyze the content in order to retrieve attributes of the content.
The recommendation device 104 compares the attributes of the available content with the preference data stored and associated to the stored face data and selects for displaying on the display 109 the content having attribute data matching the preference data of the user.
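One way the comparison could work is sketched below, assuming content attributes and preferences are both represented as key-word lists; the overlap-count scoring is an assumption chosen for illustration.

```python
def select_content(available, preferences):
    """Pick the content whose attribute key words best match the user's
    stored preference key words. Returns None if nothing matches at all.
    Field names ("keywords") are illustrative assumptions.
    """
    def score(item):
        # Number of preference key words found in the content's attributes.
        return len(set(item["keywords"]) & set(preferences))
    best = max(available, key=score)
    return best if score(best) > 0 else None

contents = [
    {"id": 1, "keywords": ["football", "green"]},
    {"id": 2, "keywords": ["cooking"]},
]
chosen = select_content(contents, preferences=["football", "classic music"])
```

In this sketch `chosen` would be the first content, since "football" appears both in its attributes and in the stored preferences.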
An example of data stored in the storage 101 is shown in FIGS. 2 and 3.
In FIG. 2 an example of a user entry 200 is shown. The user entry in the present embodiment comprises a user identification, i.e. a user ID 201, which uniquely identifies a user. A user ID 201 can be any combination of symbols and/or icons uniquely identifying a user. The user entry 200 further comprises face data 202 of the corresponding user. The face data can either be a stored image and/or a set of data indicating important features within the face of a user in order to enable a proper match between the stored face data 202 and the face data which are transmitted by the face recognition device 103. However, in an alternative embodiment, the user ID 201 can also be omitted.
Together with the face data 202, preference data are stored indicating preferences of the corresponding user. According to a preferred embodiment of the present invention, in the user entry 200 source data 203 are stored indicating a preferred source for contents to be displayed. The source data 203 can for example indicate the storage 101 as source, the back camera 111 as source, the front camera 102 and/or any other type of source, e.g. the internet, an external device or the like. Additionally, the source data 203 may indicate a specific group of contents stored within the storage 101 or available via any other source.
In the user entry 200, in a preferred embodiment, favorite key words 204 are stored indicating key words for which the recommendation device 104 should search within the available content in order to retrieve content that is preferred by the user.
The user entry 200 in any case at least has to comprise face data 202 and preference data, which in the present example are represented as source data 203 and favorite key words 204; but also any other type of preference data can be stored, for example date and time, gender, age, personal settings or any other type of preference data.
FIG. 3 shows an example of attribute data 300 associated to content, which can comprise or consist of metadata. In case that the content is a picture, the attribute data 300 can for example comprise a picture ID 301 uniquely identifying the picture. The attribute data 300 can further comprise a title 302, e.g. “football game”, key words 303 indicating important features or attributes of the content, e.g. “football” or “green”, the time and date 304 of creation or of change of the content, and location data 305 indicating the location from which the content can be retrieved, i.e. the storage 101, the front camera 102, the internet or the like. Of course the present invention is not limited to the shown attribute data 300 but can comprise any other type of attribute data 300 including metadata indicating features of the content. Additionally or alternatively, the image processing device 110 can analyze the content itself in order to retrieve attribute data of the content. For example, if the content has many green pixels within the image, then the image processing device 110 can add “green” to the key words 303.
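The two records of FIGS. 2 and 3 could be modeled as follows; every field name here mirrors the reference numerals above but the concrete types are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class UserEntry:
    """Sketch of a user entry 200 (FIG. 2); types are illustrative."""
    user_id: str                 # user ID 201
    face_data: bytes             # face data 202 (image or feature set)
    source_data: str = "storage" # source data 203
    favorite_keywords: list = field(default_factory=list)  # key words 204

@dataclass
class AttributeData:
    """Sketch of attribute data 300 (FIG. 3) associated to a content."""
    picture_id: int              # picture ID 301
    title: str                   # title 302
    keywords: list               # key words 303
    created: str                 # time and date 304
    location: str                # location data 305 (path or URL)

attrs = AttributeData(1, "football game", ["football", "green"],
                      "2011-04-01 15:00", "/storage/pictures/0001.jpg")
```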
According to a preferred embodiment, the user entry 200 is created once by the user. The user triggers the front camera 102 to take a picture of his face in order to store the face data 202 within the user entry 200 in the storage 101. The user ID 201 is either created automatically by the system 100 or the user might have selected his own user ID 201, e.g. his name, a nickname, a symbol, a combination thereof or the like. When creating the user entry 200, the user preferably also chooses one or more sources from which the content to be displayed should be retrieved, and the user's selection is stored as source data 203. The source data 203 can also comprise an indication of which function should be used, i.e. whether a wall function, a mirror function or a picture function should be used, which will be explained in detail later on. The user can further input several key words, e.g. “football”, “classic music”, and they are stored as favorite key words 204 in the user entry 200 in the storage 101.
The user can further have stored attribute data 300 of content. Hereby, the content itself can be stored in the storage 101 together with the attribute data 300, or only the attribute data 300 can be stored, whereby the location data 305 indicate the location of the content, e.g. the file path on the storage 101.
As previously explained, the television might have a connection to the internet, and thereby content can also be retrieved from the internet together with its attribute data. For example, the system 100 can get content together with metadata by parsing the content metadata or related documents like HTML, or by analyzing the content by a corresponding image processing carried out by the image processing device 110. In this case, the location data 305 may be the URL of the content if the content is not stored in the storage 101.
In other words, with the present invention a system and a method are provided which allow the sophisticated system of a television to be used even when the television is in standby mode. In order to ensure that the shown contents are really personalized to the user who is currently present, a picture of the user is taken and compared with face data stored in the storage in order to select contents appropriate for the user based on the stored preference data.
In an alternative embodiment also groups of users can be stored. For example, the face data of two users are stored and treated as one group associated with preference data. Such a group can comprise people living together in the same household, such as wife and husband or a family. The front camera 102 in this case is adapted to capture an image of all users in the vicinity of the system 100 and to transmit the data to the face recognition device 103, which in turn recognizes the number of faces and can accomplish a face recognition on each of the faces.
Alternatively or additionally, in case that the front camera 102 captures an image comprising several faces, the face recognition device 103, after having recognized two or more faces, can choose only one of them, for example by measuring the face size within the picture and choosing the face which appears as the largest face within the image, or the like.
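The largest-face selection mentioned above can be sketched in a few lines; representing a detected face as an (x, y, width, height) bounding box is an assumption about the face recognition output, not part of the description.

```python
def choose_largest_face(faces):
    """Given detected face bounding boxes (x, y, width, height), pick the
    face that appears largest in the image, as one possible way to select
    a single user when several faces were recognized.
    """
    return max(faces, key=lambda f: f[2] * f[3])

faces = [(10, 10, 40, 50), (200, 30, 120, 150), (400, 60, 30, 30)]
largest = choose_largest_face(faces)  # the 120x150 box
```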
In the following, the steps carried out according to the method of the present invention will be explained in detail with reference to FIGS. 4 to 10.
FIG. 4 shows an initial phase of the process. The process starts in step S0. In step S1 the displaying of television programs is ended. That means that in step S1, triggered either automatically or manually, the standby indicator 112 indicates the standby of the television. As already explained, the user can for example press the power key of an intelligent television system 100 when the user has finished watching a television program. The display 109 will then stop displaying a television program.
In case of a standby, the components of the system 100 are activated and the following steps are carried out. In step S2 the front camera 102 captures a picture of the room and thereby captures a picture of the one or more users being in the vicinity of the system 100. The captured picture is then transmitted to the face recognition device 103, which in the following step S3 recognizes one or more user faces.
As previously described, either one of the recognized faces is selected or a part or all of the faces are treated as a corresponding group. The recognized face is then transmitted to the recommendation device 104, which in the following step S4 compares the recognized face with the face data 202 stored in the storage 101. Alternatively, the comparison can also be carried out by the face recognition device 103, which then transmits the comparison result to the recommendation device 104.
In the following step S5 the recommendation device 104 checks whether a match has been found between the recognized face data captured by the front camera 102 and the face data 202 stored in the storage 101, i.e. whether the same or a similar face was found within the face data 202 stored in the storage 101. In case that a match is found, the process continues at point A, which will be described later. Otherwise, if in step S5 it is decided by the recommendation device 104 that no match has been found, the process continues at point B, which will also be described later.
The process starting from point B will now be described with reference to FIG. 5.
In step S10 the recommendation device 104 or the face recognition device 103 indicates that no match was found between the face in the image captured by the front camera 102 and the face data 202 stored in the storage 101. If the face recognition device 103 carried out the matching, then the case of no found match might be indicated to the recommendation device 104 by sending a special user ID to the recommendation device 104 indicating “no user”.
That means that currently a user is present whose face is not yet stored as face data in the storage. In the preferred embodiment of the present invention it is assumed that all users who frequently use the television have already stored their face data in the storage 101. If therefore the face of a user cannot be found in the storage 101, he may be a guest or visitor or the like. In order to nevertheless use the television system and to display content which could be adapted to the preferences of the user whose face data are not stored, in step S11 the recommendation device 104 compares the recognized face captured by the front camera 102 with faces which appear within pictures, video or other types of contents stored within the storage 101. This is based on the assumption that if a user presently is a guest or a visitor, then there might be a relationship between the guest and the regular user, so that there is a probability that pictures or videos showing the user who is currently the guest or visitor are present within the storage.
If more than one face was recognized in the captured image by the face recognition device 103, then it is also possible that the recommendation device 104 searches for pictures and/or videos within the storage comprising some or all of the recognized faces.
If in step S12 it is decided that a match was found, then in the following step S13 the found one or more contents are displayed on the display 109. If in step S12 it is decided that no match was found, then in the following step S14 the standard settings can be used. The standard settings can hereby be that no content is displayed at all, that content is randomly selected and displayed, or any other type of setting can be used.
The process then in any case continues with point C.
The process starting from point A will now be explained with reference to FIG. 6. This happens in the case that a match is found between the face within the image captured by the front camera 102 and the face data 202 stored within the storage. The comparison process can be carried out by the recommendation device 104 or by the face recognition device 103. In the latter case, the face recognition device can send the corresponding user ID 201 associated to the found face data 202 to the recommendation device 104.
In the following step S20 the recommendation device 104 retrieves the face data and optionally the user ID from the storage 101. In the next step S21 the settings or preference data associated to the stored face data 202 are retrieved. The process then continues with different functions depending on the settings, e.g. dependent on the source data 203. Otherwise, if no source data 203 are present or if no other settings indicate which function should be used, then either automatically one of the functions can be used or one of the functions can be randomly selected.
Depending on the selected function, the process then continues with a mirror function in step S22, a picture function in step S23 or a wall function in step S24. The different functions will be explained in detail later on.
In any case the process then continues with step S25, where it is checked whether additional sound is provided, i.e. sound or other features in addition to displaying the content on the display 109. If this is the case, then in step S26 the sound source is selected. Hereby, the content can either comprise associated metadata already indicating a specific sound, sound source or the like, or otherwise a sound source can be selected depending on other parameters. For example, the recommendation device 104 can analyze the attribute data 300 of the content or the content itself in order to find an appropriate type of sound. If for example one of the key words is “tree”, or if the recommendation device 104 analyzes the content and finds that nature or trees are shown within it, then a quiet and slow type of sound may be selected.
In the next step S27 the sound is played in addition to the content displayed on the display 109. As previously described, the content can also comprise other attribute data 300 comprising instructions for other devices. In the above example, when a content comprising nature is displayed and a corresponding quiet sound is selected, then with the additional instructions the lights may be dimmed in order to create a relaxing atmosphere.
Otherwise, if in step S25 it is decided that no additional sound or other types of instructions to external devices are provided, then the process continues with point C.
Also after having played the sound or after having activated other devices in step S27 the process continues with point C.
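The key-word-based sound selection of step S26 could look like the sketch below; the mapping from content key words to sound styles is entirely an assumption made for illustration.

```python
def select_sound_source(keywords):
    """Choose an accompanying sound style from the content's key words,
    roughly as in step S26. The key-word-to-style mapping here is an
    assumed, illustrative one.
    """
    if {"tree", "nature", "forest"} & set(keywords):
        return "quiet-slow"   # relaxing content -> quiet and slow sound
    if {"football", "sport"} & set(keywords):
        return "upbeat"
    return None               # no additional sound is played

style = select_sound_source(["tree", "green"])  # "quiet-slow"
```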
With reference to FIG. 7, the mirror function of step S22 will now be explained in detail. If the mirror function is selected, then in step S30 an image or a video is captured with the front camera 102. In the following step S31 the image processing device 110 inverts the captured image or video, i.e. turns it around a vertical axis.
In the next step S32 the inverted image or video is displayed on the display 109, thereby providing a sort of mirror function, i.e. in this case on the display 109 an image or video is displayed representing a mirror of the environment in front of the front camera 102. The television can thereby be used as a mirror.
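The inversion around a vertical axis in step S31 amounts to reversing each pixel row of the captured frame, as in this sketch (a frame is represented here as a simple list of rows, an assumption for illustration):

```python
def mirror_frame(frame):
    """Invert an image around its vertical axis, i.e. reverse each pixel
    row, so that the displayed picture behaves like a mirror."""
    return [list(reversed(row)) for row in frame]

frame = [["a", "b", "c"],
         ["d", "e", "f"]]
mirrored = mirror_frame(frame)  # [['c', 'b', 'a'], ['f', 'e', 'd']]
```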
With reference to FIG. 8, the process steps are now explained in case that the picture function is selected in step S23.
In step S40 the settings for the picture function are retrieved. These can be either settings stored within the preference data within the user entry 200 or settings stored in any other part of the storage 101. That means that the settings can comprise preference data associated to specific face data 202 and/or can be general settings applicable to more than one or all user entries 200. The settings indicate which type of search conditions should be used in order to select content to be displayed. Depending on the settings, one or more of the following possible pieces of information are retrieved in order to find content matching the preferences of the user.
In step S41 key words 204 associated to the face data 202 and stored within the storage can be retrieved. These key words can be used in order to search the attribute data 300 of content in order to find content matching the preferences of a user.
Alternatively or additionally, in step S42 the color of the clothes the user is currently wearing is detected from the previously captured image, i.e. the image captured by the front camera 102. This color can also be used in order to find content which matches the actual mood and situation of the user. If the user for example is wearing green clothes, then content related to nature or to green pictures can be selected by a corresponding analysis carried out by the image processing device 110 and/or the attribute data 300 can be searched for the key word “green”.
Alternatively or additionally, in step S43 the current date and/or time can be retrieved, and this information can also be used in order to find content to be displayed which matches the present situation. For example, if the current time is 3 pm on April 1st, then the recommendation device 104 may select images and/or videos which were created and/or changed around 3 pm on an April 1st.
Using either a part or all of the previously described conditions, in step S44 the available content is searched; for example, all pictures stored within the storage 101 are searched for the previously determined features. Alternatively or additionally, also all other sources providing displayable content can be searched. Even though the function is referred to as “picture function”, this function includes the displaying of still images as well as videos.
It can be determined that only pictures and/or videos are selected which match all of the previously determined features or otherwise a sort of priority can be set for the different features so that also pictures and/or videos which do not match all of the features are selected.
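A weighted ranking is one way to realize the priority between the conditions of steps S41 to S43; the weights, the field names and the date encoding below are all illustrative assumptions.

```python
def rank_pictures(pictures, keywords, clothes_color, month_day,
                  weights=(3, 2, 1)):
    """Rank candidate pictures by how well they satisfy the search
    conditions of steps S41-S43 (key words, clothes color, date).
    The weights express a priority among the conditions, so pictures
    matching only some conditions can still be selected.
    """
    w_kw, w_color, w_date = weights
    def score(pic):
        s = w_kw * len(set(pic["keywords"]) & set(keywords))
        if clothes_color in pic["keywords"]:
            s += w_color
        # Dates are assumed to be "YYYY-MM-DD" strings here.
        if pic.get("created", "").endswith(month_day):
            s += w_date
        return s
    ranked = sorted(pictures, key=score, reverse=True)
    return [p for p in ranked if score(p) > 0]

pics = [
    {"id": 1, "keywords": ["football", "green"], "created": "2010-04-01"},
    {"id": 2, "keywords": ["cooking"], "created": "2011-12-24"},
]
hits = rank_pictures(pics, keywords=["football"], clothes_color="green",
                     month_day="04-01")
```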
In the following step S45 it is checked whether a match has been found. If no match is found, then in the following step S46 again the standard settings can be used. Otherwise, if in step S45 a match was found, then in step S47 it is checked whether more than one picture and/or video matching the conditions has been found. If this is not the case, then in step S48 the single found picture or video is displayed on the display 109. The process of step S23 then ends.
Otherwise, if in step S47 it is decided that more than one picture and/or video has been found, then in step S49 the found pictures and/or videos are displayed as a slide show, for example by changing the display every 30 seconds. In any case step S23 then ends.
With respect to FIG. 9, the process steps are described in the following for the case that the wall function is selected in step S24.
In step S50 it is checked whether a back camera 111 is present. If a back camera is present, then in the following step S51 the back camera 111 will take a wall picture of the wall behind the television, and then the wall picture will be displayed in step S57 on the display 109. The color of the wall picture can be adjusted depending on the condition of the environment, e.g. if the room is dark, the wall picture is made darker by changing colors.
Otherwise, if in step S50 it is decided that no back camera 111 is present, then in the following step S52 it is checked whether the front camera 102 is turnable.
If this is the case, then in step S53 the front camera 102 is turned backwards to the wall and in step S54 a wall picture is taken with the turned front camera 102. The taken wall picture is then also displayed on the display 109 in step S57.
Otherwise, if in step S52 it is decided that the front camera 102 is not turnable, then in the following step S55 it is checked whether a wall picture is stored in the storage 101. If this is the case, then in step S56 the stored wall picture will be retrieved from the storage 101 and displayed on the display 109 in step S57.
In any case it is possible to add an additional step in which the image processing device 110 modifies the captured or stored wall picture before it is displayed. The image processing device 110 can for example change the wall picture based on the current brightness and/or color of the environment or room.
Otherwise, if in step S55 it is decided that no wall picture is stored in the storage 101, then this means that no wall picture at all can be made available, and then in the following step S58 the standard settings are used.
The wall function then ends in step S24.
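The fallback cascade of steps S50 to S58 can be sketched as follows; the cameras and the storage are represented by hypothetical callables and a dict, purely for illustration.

```python
def get_wall_picture(back_camera=None, front_camera=None, storage=None):
    """Fallback cascade of the wall function (steps S50-S58): prefer the
    back camera, then a turnable front camera, then a stored wall picture.
    `back_camera` is a hypothetical capture callable, `front_camera` a
    hypothetical object with `turnable`, `turn_backwards()` and
    `capture()`, and `storage` a dict standing in for the storage 101.
    """
    if back_camera is not None:
        return back_camera()                       # S51: back camera shot
    if front_camera is not None and front_camera.turnable:
        front_camera.turn_backwards()              # S53: turn camera
        return front_camera.capture()              # S54: take wall picture
    if storage is not None and "wall" in storage:  # S55: stored picture?
        return storage["wall"]                     # S56: use stored picture
    return None                                    # S58: standard settings
```

For example, `get_wall_picture(storage={"wall": "wall.jpg"})` falls through to the stored picture when no camera is available.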
With the wall function the impression can be created that the television is not present at all. This might be of advantage if the television is very large and, when switched off, seems to take up much space within the room. In this way the impression can be created that the television is not present and the room is larger. Further, the user does not have to pay any attention to the television.
Independent of the selected function, the recommendation device 104 can alternatively or additionally show text and/or information on the display 109. For example, information about present, past or future television programs matching the user preferences can be shown, e.g. title, time and date.
With reference to FIG. 10, the process starting from point C will now be explained.
In step S60 the system 100 checks whether a switch-off instruction has been received.
For example, the standby indicator 112 indicates that the standby period is over, or the complete system of the television is completely turned off, e.g. the power source is stopped. If a switch-off instruction was received, then the process ends in step S61. Of course, the step of checking whether a switch-off instruction was received can also be accomplished at any other time during the whole process and is shown in the figures only for the sake of completeness.
If no switch-off instruction was received, then in the following step S62 a picture is again captured with the front camera 102. In the next step S63 it is checked whether a user is present. If in step S63 it is found that no user is present anymore, then the process ends in step S61 and the system 100 is switched off or the corresponding components are at least deactivated in order to avoid unnecessary power consumption.
Otherwise, if in step S63 it is found that at least one user is present, then in the next step S64 it is checked from the image analysis whether a new user is present, i.e. another user than the one for whom the currently displayed content was selected. If this is not the case, then in step S65 the display function continues. Otherwise, in the following step S66 it is checked whether the previous user is also still present. If this is the case, then the presence of the new user is ignored and the displaying continues in step S65.
Otherwise, if in step S66 it is found that the previous user is not present anymore, then the process in step S67 switches again to the process starting from step S3.
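The monitoring loop of FIG. 10 can be summarized in the following sketch, where `capture`, `recognize`, `display` and `switch_off_requested` are all hypothetical callables standing in for the camera, the face recognition device 103, the display 109 and the standby indicator 112.

```python
def standby_loop(capture, recognize, display, switch_off_requested):
    """Sketch of the loop of FIG. 10: keep displaying while the same user
    is present, restart selection when a new user replaces the previous
    one, and end when nobody is present or switch-off is requested.
    """
    current_user = None
    while not switch_off_requested():          # S60: switch-off check
        users = recognize(capture())           # S62/S63: who is present?
        if not users:
            return "off"                       # S61: nobody present, end
        if current_user is None or current_user in users:
            current_user = current_user or users[0]
            display(current_user)              # S65: continue displaying
        else:
            current_user = users[0]            # S67: new user, reselect
            display(current_user)
    return "off"                               # S61: switch-off received

# Simulated run: the same user twice, then a new user, then an empty room.
frames = iter([["alice"], ["alice"], ["bob"], []])
shown = []
result = standby_loop(lambda: next(frames), lambda f: f,
                      shown.append, lambda: False)
```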
The present invention is not limited to the described embodiments. Rather, all features, components and/or steps described herein with reference to one embodiment can also be incorporated into any other embodiment where appropriate.
With the present invention the television can continue to show interesting and attractive multimedia contents even after a user has terminated watching television programs. Further, with the recommendation of contents based on stored user preferences and the automatic detection of a user, content to be displayed can be selected during the standby mode which perfectly suits the preferences of the user who is currently present.