CROSS REFERENCE TO RELATED APPLICATION

The present application claims the benefit of and priority, under 35 U.S.C. §119(e), to U.S. Provisional Application Ser. Nos. 61/684,672 filed Aug. 17, 2012, “Smart TV”; 61/702,650 filed Sep. 18, 2012, “Smart TV”; 61/697,710 filed Sep. 6, 2012, “Social TV”; 61/700,182 filed Sep. 12, 2012, “Social TV Roadmap”; 61/736,692 filed Dec. 13, 2012, “SmartTV”; 61/798,821 filed Mar. 15, 2013, “SmartTV”; 61/804,942 filed Mar. 25, 2013, “SmartTV”; 61/804,998 filed Mar. 25, 2013, “SmartTV”; 61/804,971 filed Mar. 25, 2013, “SmartTV”; 61/804,990 filed Mar. 25, 2013, “SmartTV”; 61/805,003 filed Mar. 25, 2013, “SmartTV”; 61/805,053 filed Mar. 25, 2013, “SmartTV”; 61/805,030 filed Mar. 25, 2013, “SmartTV”; 61/805,027 filed Mar. 25, 2013, “SmartTV”; 61/805,042 filed Mar. 25, 2013, “SmartTV”; and 61/805,038 filed Mar. 25, 2013, “SmartTV.” Each of the aforementioned documents is incorporated herein by reference in its entirety for all that it teaches and for all purposes.
BACKGROUND

Consolidation of device features, or technological convergence, is an increasing trend. Technological convergence describes the tendency for different technological systems to evolve toward performing similar tasks. As people use more devices, carrying, charging, and updating the software on those devices becomes increasingly cumbersome. To compensate for these problems, technology companies have been integrating features from different devices into one or two multi-functional devices. For example, cellular phones are now capable of accessing the Internet, taking photographs, providing calendar functions, etc.
The consolidation trend is now affecting the design and functionality of devices generally used in the home. For example, audio receivers can access the Internet, digital video recorders can store or provide access to digital photographs, etc. The television in home audio/video systems remains a cornerstone device because the display function cannot be integrated into other devices. As such, consolidating home devices leads to integrating features and functionality into the television. The emergence of the Smart Television (Smart TV) is evidence of the trend to consolidate functionality into the television.
A Smart TV is generally conceived as a device that integrates access to the Internet and Web 2.0 features into television sets. The Smart TV represents the trend of technological convergence between computers and television sets. The Smart TV generally focuses on online interactive media, Internet TV, and on-demand streaming media rather than on traditional broadcast media. Unfortunately, most Smart TVs have yet to provide seamless and intuitive user interfaces for navigating and/or executing their various features. As such, issues remain with the consolidation of features and the presentation of those features in Smart TVs.
SUMMARY

There is a need for an Intelligent TV with intuitive user interfaces and with seamless user interaction capability. These and other needs are addressed by the various aspects, embodiments, and/or configurations of the present disclosure. Also, while the disclosure is presented in terms of exemplary embodiments, it should be appreciated that individual aspects of the disclosure can be separately claimed.
According to the disclosure, a non-transitory computer readable storage medium having stored thereon instructions that cause a processor to execute a method for accessing media on a television is disclosed, the method comprising the steps of: searching a network connected to the television to identify a plurality of media sources; determining a number of media items associated with the plurality of media sources; identifying metadata associated with the determined number of media items; storing the metadata in a memory; receiving a request from a user to display one or more of the media items; and displaying, on the television display, the one or more media items based on the stored metadata. The non-transitory computer readable storage medium may further include instructions that cause the processor to execute the steps of: receiving a search request from the user for an individual media item; identifying multiple media sources in the plurality of media sources that have the individual media item; presenting a list of offers from the multiple media sources to the user for the individual item; receiving a selection by the user of an individual offer from the list of offers; and providing access to the individual item. In an embodiment, the list of offers may comprise at least one of an offer to play the individual media item, an offer to view the individual media item, a pay per view offer to view the individual media item, an offer to rent the individual media item, an offer to purchase a ticket to a movie theater showing the individual media item, an offer to purchase the individual media item, a trial access offer to the individual media item, an offer to check out the individual media item, and an offer to access the individual media item on a social media site. The media sources may comprise at least one of: a video server, an audio server, a digital video recorder, a set-top box, a social media site, a voice mail server, a source marked by the user, a content provider, a compact disk player, a digital video device player, a cellular telephone, a personal digital assistant, a notebook, an audio player, a document server, a personal computer, a really simple syndication feed, a social media site, a universal serial bus device, an internet site, and a tablet device. In an embodiment, at least one of the media sources is a device that can be temporarily connected to the network. In yet another embodiment, one of the at least one temporarily connected devices is not connected to the network. In still another embodiment, the one or more media items displayed includes at least one recommended media item based on the stored metadata. The non-transitory computer readable storage medium may still further include instructions that cause the processor to execute the steps of: after receiving the request, identifying the user associated with the request, wherein the one or more media items displayed are based on stored metadata related to the identified user. In another embodiment, identifying metadata may comprise: performing a first scan of the determined number of media items; retrieving basic metadata associated with the determined number of media items; identifying media items that need a second scan; and performing the second scan after the first scan is completed. 
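To make the two-pass scan concrete, the following minimal sketch illustrates one way the first and second scans could be sequenced; it is illustrative only, and the names (MediaItem, scan_basic, scan_deep) are hypothetical rather than part of the disclosure.

```python
from dataclasses import dataclass, field

# Hypothetical media item record; field names are illustrative only.
@dataclass
class MediaItem:
    item_id: str
    path: str
    metadata: dict = field(default_factory=dict)

def scan_basic(item: MediaItem) -> dict:
    # First-scan stub: cheap metadata available without decoding the file.
    return {"title": item.path.rsplit("/", 1)[-1], "scan": "basic"}

def scan_deep(item: MediaItem) -> dict:
    # Second-scan stub: costly metadata (duration, artwork, codecs, etc.).
    return {"scan": "deep"}

def identify_metadata(items: list[MediaItem]) -> None:
    needs_second_scan = []
    # First scan: retrieve basic metadata for every determined media item.
    for item in items:
        item.metadata.update(scan_basic(item))
        needs_second_scan.append(item)  # in this stub, every item qualifies
    # Second scan: performed only after the first scan is completed, so a
    # usable library appears quickly before richer metadata arrives.
    for item in needs_second_scan:
        item.metadata.update(scan_deep(item))

items = [MediaItem("v1", "/dvr/recordings/news.ts")]
identify_metadata(items)
print(items[0].metadata)  # -> {'title': 'news.ts', 'scan': 'deep'}
```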
In yet another embodiment, storing the metadata in memory may comprise assigning a unique media source identifier to each of the plurality of media sources; assigning a unique media item identifier to each of the determined number of media items; creating a personal metadata table to record media items viewed and media items tagged as a favorite; creating a media source table to record metadata for all connected and disconnected media sources; and creating a media data table to record all other identified metadata.
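One plausible realization of these three tables, sketched here with SQLite purely for illustration (column names beyond those stated above, such as connected, are assumptions):

```python
import sqlite3

# In-memory database holding the three tables described above.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE media_source (
    source_id TEXT PRIMARY KEY,  -- unique media source identifier
    name      TEXT,
    connected INTEGER            -- 1 = connected, 0 = disconnected source
);
CREATE TABLE media_data (
    item_id   TEXT PRIMARY KEY,  -- unique media item identifier
    source_id TEXT REFERENCES media_source(source_id),
    title     TEXT,
    extra     TEXT               -- all other identified metadata
);
CREATE TABLE personal_metadata (
    item_id  TEXT REFERENCES media_data(item_id),
    viewed   INTEGER DEFAULT 0,  -- media item has been viewed
    favorite INTEGER DEFAULT 0   -- media item tagged as a favorite
);
""")
db.execute("INSERT INTO media_source VALUES ('src-1', 'Living-room DVR', 1)")
db.execute("INSERT INTO media_data VALUES ('item-1', 'src-1', 'News', '{}')")
db.execute("INSERT INTO personal_metadata VALUES ('item-1', 1, 0)")
print(db.execute("SELECT title FROM media_data").fetchall())  # [('News',)]
```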
According to the disclosure, a television system is provided, comprising: a display; a memory; and a processor in communication with the memory and the display. The processor is operable to: search a network connected to the television to identify a plurality of media sources; determine a number of media items associated with the plurality of media sources; identify metadata associated with the determined number of media items; store the metadata in the memory; receive a request from a user to display one or more of the media items; and display, on the display, the one or more media items based on the stored metadata. In an embodiment, the processor is further operable to: receive a search request from the user for an individual media item; identify multiple media sources in the plurality of media sources that have the individual media item; present a list of offers from the multiple media sources to the user for the individual item; receive a selection by the user of an individual offer from the list of offers; and provide access to the individual item. In yet another embodiment, the list of offers may comprise at least one of: an offer to play the individual media item, an offer to view the individual media item, a pay per view offer to view the individual media item, an offer to rent the individual media item, an offer to purchase a ticket to a movie theater showing the individual media item, an offer to purchase the individual media item, a trial access offer to the individual media item, an offer to check out the individual media item, and an offer to access the individual media item on a social media site. In still another embodiment, the plurality of media sources may comprise at least two of: a video server, an audio server, a digital video recorder, a set-top box, a social media site, a voice mail server, a source marked by the user, a content provider, a compact disk player, a digital video device player, a cellular telephone, a personal digital assistant, a notebook, an audio player, a document server, a personal computer, a really simple syndication feed, a social media site, a universal serial bus device, an internet site, and a tablet device. In yet another embodiment, at least one of the media sources may be a device that can be temporarily connected to the network. In still another embodiment, one of the at least one temporarily connected devices is not connected to the network.
According to the disclosure, a method for accessing media on a television is disclosed. The method may include: searching a network connected to the television to identify a plurality of media sources; determining a number of media items associated with the plurality of media sources; identifying metadata associated with the determined number of media items; storing the metadata in a memory; receiving a request from a user to display one or more of the media items; and displaying on the television display the one or more media items based on the stored metadata. The plurality of media sources may comprise at least one of: a video server, an audio server, a digital video recorder, a set-top box, a social media site, a voice mail server, a source marked by the user, a content provider, a compact disk player, a digital video device player, a cellular telephone, a personal digital assistant, a notebook, an audio player, a document server, a personal computer, a really simple syndication feed, a social media site, a universal serial bus device, an internet site, and a tablet device. The method may further include: receiving a search request from the user for an individual media item; identifying multiple media sources in the plurality of media sources that have the individual media item; presenting a list of offers from the multiple media sources to the user for the individual item; receiving a selection by the user of an individual offer from the list of offers; and providing access to the individual item. In an embodiment, identifying metadata may comprise: performing a first scan of the determined number of media items; retrieving basic metadata associated with the determined number of media items; identifying media items that need a second scan; and performing the second scan after the first scan is completed. In yet another embodiment, storing the metadata in memory may comprise: assigning a unique media source identifier to each of the plurality of media sources; assigning a unique media item identifier to each of the determined number of media items; creating a personal metadata table to record media items viewed and media items tagged as a favorite; creating a media source table to record metadata for all connected and disconnected media sources; and creating a media data table to record all other identified metadata.
The present disclosure can provide a number of advantages depending on the particular aspect, embodiment, and/or configuration. For example, the media data service can manage media data and aggregate metadata from multiple distinct sources. The media data service may also provide personalized metadata for media and a real-time view of media sources.
These and other advantages will be apparent from the disclosure.
The phrases “at least one,” “one or more,” and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C,” “at least one of A, B, or C,” “one or more of A, B, and C,” “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
The term “a” or “an” entity refers to one or more of that entity. As such, the terms “a” (or “an”), “one or more” and “at least one” can be used interchangeably herein. It is also to be noted that the terms “comprising,” “including,” and “having” can be used interchangeably.
The term “automatic” and variations thereof, as used herein, refers to any process or operation done without material human input when the process or operation is performed. However, a process or operation can be automatic, even though performance of the process or operation uses material or immaterial human input, if the input is received before performance of the process or operation. Human input is deemed to be material if such input influences how the process or operation will be performed. Human input that consents to the performance of the process or operation is not deemed to be “material.”
A “blog” (a blend of the phrase “web log”) is a type of website, or part of a website, that is updated with new content from time to time. Blogs are usually maintained by an individual with regular entries of commentary, descriptions of events, or other material such as graphics or video. Entries are commonly displayed in reverse-chronological order.
A “blogging service” is a blog-publishing service that allows private or multi-user blogs with time-stamped entries.
The term “cable TV” refers to a system of distributing television programs to subscribers via radio frequency (RF) signals transmitted through coaxial cables or light pulses through fiber-optic cables. This contrasts with traditional broadcast television (terrestrial television) in which the television signal is transmitted over the air by radio waves and received by a television antenna attached to the television.
The term “channel” or “television channel,” as used herein, can be a physical or virtual channel over which a television station or television network is distributed. A physical channel in analog television can be an amount of bandwidth, typically 6, 7, or 8 MHz, that occupies a predetermined channel frequency. A virtual channel is a representation, in cable or satellite television, of a data stream for a particular television media provider (e.g., CBS, TNT, HBO, etc.).
The term “computer-readable medium,” as used herein, refers to any tangible storage and/or transmission medium that participates in providing instructions to a processor for execution. Such a medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, NVRAM, or magnetic or optical disks. Volatile media includes dynamic memory, such as main memory. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, magneto-optical medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, a solid state medium like a memory card, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. A digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium. When the computer-readable media is configured as a database, it is to be understood that the database may be any type of database, such as relational, hierarchical, object-oriented, and/or the like. Accordingly, the disclosure is considered to include a tangible storage medium or distribution medium and prior art-recognized equivalents and successor media, in which the software implementations of the present disclosure are stored.
The term “enhanced television” (ETV) refers to a collection of specifications developed under the OpenCable project of CableLabs (Cable Television Laboratories, Inc.) that define an ETV Application consisting of resources (files) adhering to the Enhanced TV Binary Interchange Format (EBIF) content format as well as PNG images, JPEG images, and PFR downloadable fonts. An ETV application is normally delivered through an MPEG transport stream and accompanies an MPEG program containing video and audio elementary streams. An “ETV Application” is a collection of resources (files) that include one or more EBIF resources that represent viewable information in the form of pages. Two forms of a given ETV Application may be distinguished: (1) an interchange form and (2) an execution form. The interchange form of an ETV Application consists of the resources (files) that represent the compiled application prior to its actual execution by an ETV User Agent. The execution form of an ETV Application consists of the stored, and possibly mutated forms of these resources while being decoded, presented, and executed by an ETV User Agent. An “ETV User Agent” is a software component that operates on a set-top box, a television, or any other computing environment capable of receiving, decoding, presenting, and processing an ETV Application. This component usually provides, along with its host hardware environment, one or more mechanisms for an end-user to navigate and interact with the multimedia content represented by ETV Applications.
The term “high-definition television” (HDTV) refers to a television system providing a resolution that is substantially higher than that of standard-definition television. HDTV may be transmitted in various formats, namely 1080p (1920×1080p: 2,073,600 pixels, approximately 2.1 megapixels, per frame), 1080i (typically either 1920×1080i: 1,036,800 pixels, approximately 1 megapixel, per field or 2,073,600 pixels, approximately 2.1 megapixels, per frame; or 1440×1080i: 777,600 pixels, approximately 0.8 megapixels, per field or 1,555,200 pixels, approximately 1.6 megapixels, per frame), and 720p (1280×720p: 921,600 pixels, approximately 0.9 megapixels, per frame). As will be appreciated, “frame size” in pixels is defined as the number of horizontal pixels × the number of vertical pixels, for example 1280×720 or 1920×1080. Often the number of horizontal pixels is implied from context and omitted, as in the case of 720p and 1080p. The “scanning system” is identified with the letter “p” for progressive scanning or “i” for interlaced scanning, and the “frame rate” is identified as the number of video frames per second. For interlaced systems, an alternative form specifying the number of fields per second is often used. For purposes of this disclosure, “high-definition television” is deemed to include other high-definition analog or digital video formats, including ultra high definition television.
The term “internet television” (otherwise known as Internet TV, Online Television, or Online TV) refers to the digital distribution of television content via the Internet. It should not be confused with Web television—short programs or videos created by a wide variety of companies and individuals, or Internet protocol television (IPTV)—an emerging internet technology standard for use by television broadcasters. Internet Television is a general term that covers the delivery of television shows and other video content over the internet by video streaming technology, typically by major traditional television broadcasters. It does not describe a technology used to deliver content (see Internet protocol television). Internet television has become very popular through services such as RTE Player in Ireland; BBC iPlayer, 4oD, ITV Player (also STV Player and UTV Player) and Demand Five in the United Kingdom; Hulu in the United States; Nederland 24 in the Netherlands; ABC iview and Australia Live TV in Australia; Tivibu in Turkey; and iWanTV! in the Philippines.
The term “internet protocol television” (IPTV) refers to a system through which television services are delivered using the Internet protocol suite over a packet-switched network such as the Internet, instead of being delivered through traditional terrestrial, satellite signal, and cable television formats. IPTV services may be classified into three main groups: live television, with or without interactivity related to the current TV show; time-shifted television, such as catch-up TV (replaying a TV show that was broadcast hours or days ago) and start-over TV (replaying the current TV show from its beginning); and video on demand (VOD), browsing a catalog of videos not related to TV programming. IPTV is distinguished from Internet television by its on-going standardization process (e.g., European Telecommunications Standards Institute) and preferential deployment scenarios in subscriber-based telecommunications networks with high-speed access channels into end-user premises via set-top boxes or other customer-premises equipment.
The term “silo,” as used herein, can be a logical representation of an input, source, or application. An input can be a device or devices (e.g., DVD, VCR, etc.) electrically connected to the television through a port (e.g., HDMI, video/audio inputs, etc.) or through a network (e.g., LAN, WAN, etc.). Rather than a device or devices, the input could be configured as an electrical or physical connection to one or more devices. A source, particularly a content source, can be a data service that provides content (e.g., a media center, a file system, etc.). An application can be a software service that provides a particular type of function (e.g., Live TV, Video on Demand, User Applications, photograph display, etc.). The silo, as a logical representation, can have an associated definition or property, such as a setting, feature, or other characteristic.
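As a purely hypothetical illustration of a silo as a logical representation carrying associated properties (every field name below is an assumption, not part of the disclosure):

```python
from dataclasses import dataclass, field

@dataclass
class Silo:
    kind: str     # "input", "source", or "application"
    label: str    # e.g., "HDMI 1", "Media Center", "Live TV"
    properties: dict = field(default_factory=dict)  # settings/features

# An input silo wrapping a physical connection, and an application silo.
hdmi_1 = Silo("input", "HDMI 1", {"port": "HDMI", "connected_device": "DVD"})
live_tv = Silo("application", "Live TV", {"last_channel": 7})
print(live_tv.properties["last_channel"])  # -> 7
```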
The term “panel,” as used herein, can mean a user interface displayed in at least a portion of the display. The panel may be interactive (e.g., accepts user input) or informational (e.g., does not accept user input). A panel may be translucent whereby the panel obscures but does not mask the underlying content being displayed in the display. Panels may be provided in response to a user input from a button or remote control interface.
The term “screen,” as used herein, refers to a physical structure that includes one or more hardware components that provide the device with the ability to render a user interface and/or receive user input. A screen can encompass any combination of gesture capture region, a touch sensitive display, and/or a configurable area. The device can have one or more physical screens embedded in the hardware. However, a screen may also include an external peripheral device that may be attached to and detached from the device. In embodiments, multiple external devices may be attached to the device. For example, another screen may be included with a remote control unit that interfaces with the Intelligent TV.
The term “media” or “multimedia,” as used herein, refers to content that may assume one or a combination of different content forms. Multimedia can include one or more of, but is not limited to, text, audio, still images, animation, video, or interactivity content forms.
The term “Intelligent TV,” as used herein, refers to a television configured to provide one or more intuitive user interfaces and interactions based on a unique application platform and architecture. The Intelligent TV utilizes processing resources associated with the television to integrate Internet connectivity with parallel application functionality. This integration allows a user the ability to intuitively access various sources of media and content (e.g., Internet, over-the-top content, on-demand streaming media, over-the-air broadcast media, and/or other forms of information) via the Intelligent TV in a quick and efficient manner. Although the Intelligent TV disclosed herein may comprise one or more components of a “smart TV,” it is an aspect of the Intelligent TV to provide expanded intuitive user interaction capability for navigating and executing the various features of the television. A “smart TV,” sometimes referred to as a connected TV, or hybrid TV (not to be confused with IPTV, Internet TV, or with Web TV), describes a trend of integration of the Internet and Web 2.0 features into television sets and set-top boxes, as well as the technological convergence between computers and these television sets/set-top boxes. The smart TV devices have a higher focus on online interactive media, Internet TV, over-the-top content, as well as on-demand streaming media, and less focus on traditional broadcast media than traditional television sets and set-top boxes. As can be appreciated, the Intelligent TV encompasses a broader range of technology than that of the smart TV defined above.
The term “television” refers to a telecommunication medium, device (or set) or set of associated devices, programming, and/or transmission for transmitting and receiving moving images that can be monochrome (black-and-white) or colored, with or without accompanying sound. Different countries use one of the three main video standards for TVs, namely PAL, NTSC, or SECAM. Television is most commonly used for displaying broadcast television signals. The broadcast television system is typically disseminated via radio transmissions on designated channels in the 54-890 MHz frequency band. A common television set comprises multiple internal electronic circuits, including those for receiving and decoding broadcast signals. A visual display device which lacks a tuner is properly called a video monitor, rather than a television. A television may be different from other monitors or displays based on the distance maintained between the user and the television when the user watches the media and based on the inclusion of a tuner or other electronic circuit to receive the broadcast television signal.
The term “Live TV,” as used herein, refers to a television production broadcast in real-time, as events happen, in the present.
The term “standard-definition television” (SDTV) refers to a television system that uses a resolution that is not considered to be either high-definition television (HDTV 720p and 1080p) or enhanced-definition television (EDTV 480p). The two common SDTV signal types are 576i, with 576 interlaced lines of resolution, derived from the European-developed PAL and SECAM systems, and 480i, based on the American National Television System Committee (NTSC) system. In the US, digital SDTV is broadcast in the same 4:3 aspect ratio as NTSC signals. However, in other parts of the world that used the PAL or SECAM analog standards, standard-definition television is now usually shown with a 16:9 aspect ratio. Standards that support digital SDTV broadcast include DVB, ATSC, and ISDB. Television signals are transmitted in digital form, and their pixels have a rectangular shape, as opposed to the square pixels used in modern computer monitors and modern implementations of HDTV. Note that the actual image (be it 4:3 or 16:9) is always contained in the center 704 horizontal pixels of the digital frame, regardless of how many horizontal pixels (704 or 720) are used. In the case of a digital video signal having 720 horizontal pixels, only the center 704 pixels contain the actual 4:3 or 16:9 image, and the 8 pixel wide stripes on either side are called nominal analogue blanking and should be discarded before displaying the image. Nominal analogue blanking should not be confused with overscan, as overscan areas are part of the actual 4:3 or 16:9 image.
The term “video on demand” (VOD), as used herein, refers to systems and processes that allow users to select and watch or listen to video or audio content on demand. VOD systems may stream content, allowing it to be viewed in real time, or download the content to a storage medium for viewing at a later time.
The term “satellite positioning system receiver” refers to a wireless receiver or transceiver to receive and/or send location signals from and/or to a satellite positioning system, such as the Global Positioning System (GPS) (US), GLONASS (Russia), Galileo positioning system (EU), Compass navigation system (China), and Regional Navigational Satellite System (India).
The term “display,” as used herein, refers to at least a portion of a screen used to display the output of the television to a user. A display may be a single-screen display or a multi-screen display, referred to as a composite display. A composite display can encompass the touch sensitive display of one or more screens. A single physical screen can include multiple displays that are managed as separate logical displays. Thus, different content can be displayed on the separate displays although part of the same physical screen.
The term “displayed image,” as used herein, refers to an image produced on the display. A typical displayed image is a television broadcast or menu. The displayed image may occupy all or a portion of the display.
The term “display orientation,” as used herein, refers to the way in which a rectangular display is oriented by a user for viewing. The two most common types of display orientation are portrait and landscape. In landscape mode, the display is oriented such that the width of the display is greater than the height of the display (such as a 4:3 ratio, which is 4 units wide and 3 units tall, or a 16:9 ratio, which is 16 units wide and 9 units tall). Stated differently, the longer dimension of the display is oriented substantially horizontal in landscape mode while the shorter dimension of the display is oriented substantially vertical. In the portrait mode, by contrast, the display is oriented such that the width of the display is less than the height of the display. Stated differently, the shorter dimension of the display is oriented substantially horizontal in the portrait mode while the longer dimension of the display is oriented substantially vertical.
The term “module,” as used herein, refers to any known or later developed hardware, software, firmware, artificial intelligence, fuzzy logic, or combination of hardware and software that is capable of performing the functionality associated with that element.
The terms “determine,” “calculate” and “compute,” and variations thereof, as used herein, are used interchangeably and include any type of methodology, process, mathematical operation or technique.
The term “touch screen” or “touchscreen” refers to a screen that can receive user contact or other tactile input, such as from a stylus. The touch screen may sense user contact in a number of different ways, such as by a change in an electrical parameter (e.g., resistance or capacitance), acoustic wave variations, infrared radiation proximity detection, light variation detection, and the like. In a resistive touch screen, for example, normally separated conductive and resistive metallic layers in the screen pass an electrical current. When a user touches the screen, the two layers make contact in the contacted location, whereby a change in electrical field is noted and the coordinates of the contacted location calculated. In a capacitive touch screen, a capacitive layer stores electrical charge, which is discharged to the user upon contact with the touch screen, causing a decrease in the charge of the capacitive layer. The decrease is measured, and the contacted location coordinates determined. In a surface acoustic wave touch screen, an acoustic wave is transmitted through the screen, and the acoustic wave is disturbed by user contact. A receiving transducer detects the user contact instance and determines the contacted location coordinates.
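For instance, reading a position from a 4-wire resistive touch screen of the kind described above can be sketched as follows; drive_pins() and read_adc() stand in for platform-specific GPIO and ADC calls and are assumptions, as are the supply voltage and screen dimensions.

```python
VCC = 3.3                        # assumed supply voltage, volts
SCREEN_W, SCREEN_H = 1920, 1080  # assumed screen resolution in pixels

def drive_pins(high: str, low: str) -> None:
    pass  # stub: bias one electrode layer between VCC and ground

def read_adc(pin: str) -> float:
    return 1.65  # stub: pretend the sense pin reads mid-scale

def read_touch() -> tuple[int, int]:
    # Bias the X layer and sense the voltage divider on a Y electrode.
    drive_pins(high="X+", low="X-")
    x = int(read_adc("Y+") / VCC * SCREEN_W)
    # Then bias the Y layer and sense on an X electrode.
    drive_pins(high="Y+", low="Y-")
    y = int(read_adc("X+") / VCC * SCREEN_H)
    return x, y

print(read_touch())  # mid-scale stub readings -> roughly screen center
```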
The term “web television” refers to original television content produced for broadcast via the World Wide Web. Some major distributors of web television are YouTube, MySpace, Newgrounds, Blip.tv, and Crackle.
The terms “instant message” and “instant messaging” refer to a form of real-time text communication between two or more people, typically based on typed text.
The term “internet search engine” refers to a web search engine designed to search for information on the World Wide Web and FTP servers. The search results are generally presented in a list of results, often referred to as search engine results pages (SERPs). The information may consist of web pages, images, and other types of files. Some search engines also mine data available in databases or open directories. Web search engines work by storing information about many web pages, which they retrieve from the HTML itself. These pages are retrieved by a Web crawler (sometimes also known as a spider)—an automated Web browser which follows every link on the site. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. Some search engines, such as Google™, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista™, store every word of every page they find.
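The crawl-and-index flow described above can be miniaturized as follows; this toy sketch only indexes words from a page title into an in-memory inverted index, whereas a real engine also follows links, honors robots.txt, and persists its index database.

```python
from collections import defaultdict
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Extracts the <title> text, one 'special field' an indexer reads."""
    def __init__(self):
        super().__init__()
        self.in_title, self.title = False, ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

index = defaultdict(set)  # word -> set of page URLs containing it

def index_page(url: str, html: str) -> None:
    parser = TitleParser()
    parser.feed(html)
    for word in parser.title.lower().split():
        index[word].add(url)

index_page("http://example.com", "<html><title>Example Home</title></html>")
print(sorted(index["example"]))  # -> ['http://example.com']
```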
The terms “online community,” “e-community,” or “virtual community” mean a group of people that primarily interact via a computer network, rather than face to face, for social, professional, educational, or other purposes. The interaction can use a variety of media formats, including wikis, blogs, chat rooms, Internet forums, instant messaging, email, and other forms of electronic media. Many media formats are used in social software separately or in combination, including text-based chat rooms and forums that use voice, video, text, or avatars.
The term “remote control” refers to a component of an electronics device, most commonly a television set, DVD player and/or home theater system for operating the device wirelessly, typically from a short line-of-sight distance. Remote control normally uses infrared and/or radio frequency (RF) signaling and can include WiFi, wireless USB, Bluetooth™ connectivity, motion sensor enabled capabilities and/or voice control. A touchscreen remote control is a handheld remote control device which uses a touchscreen user interface to replace most of the hard, built-in physical buttons used in normal remote control devices.
The term “satellite TV” refers to television programming delivered by the means of communications satellites and received by an outdoor antenna, usually a parabolic reflector generally referred to as a satellite dish, and as far as household usage is concerned, a satellite receiver either in the form of an external set-top box or a satellite tuner module built into a TV set.
The term “social network service” refers to a service provider that builds online communities of people who share interests and/or activities, or who are interested in exploring the interests and activities of others. Most social network services are web-based and provide a variety of ways for users to interact, such as e-mail and instant messaging services.
The term “social network” refers to a web-based social network.
The term “gesture” refers to a user action that expresses an intended idea, action, meaning, result, and/or outcome. The user action can include manipulating a device (e.g., opening or closing a device, changing a device orientation, moving a trackball or wheel, etc.), movement of a body part in relation to the device, movement of an implement or tool in relation to the device, audio inputs, etc. A gesture may be made on a device (such as on the screen) or with the device to interact with the device.
The term “gesture capture” refers to the sensing or other detection of an instance and/or type of user gesture. The gesture capture can occur in one or more areas of the screen. A gesture region can be on the display, where it may be referred to as a touch sensitive display, or off the display, where it may be referred to as a gesture capture area.
The term “electronic address” refers to any contactable address, including a telephone number, instant message handle, e-mail address, Universal Resource Locator (URL), Universal Resource Identifier (URI), Address of Record (AOR), electronic alias in a database, like addresses, and combinations thereof.
It shall be understood that the term “means,” as used herein, shall be given its broadest possible interpretation in accordance with 35 U.S.C., Section 112(f). Accordingly, a claim incorporating the term “means” shall cover all structures, materials, or acts set forth herein, and all of the equivalents thereof. Further, the structures, materials or acts and the equivalents thereof shall include all those described in the summary of the invention, brief description of the drawings, detailed description, abstract, and claims themselves.
The preceding is a simplified summary of the disclosure to provide an understanding of some aspects of the disclosure. This summary is neither an extensive nor exhaustive overview of the disclosure and its various aspects, embodiments, and/or configurations. It is intended neither to identify key or critical elements of the disclosure nor to delineate the scope of the disclosure but to present selected concepts of the disclosure in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other aspects, embodiments, and/or configurations of the disclosure are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1A includes a first view of an embodiment of an environment of an intelligent television;
FIG. 1B includes a second view of an embodiment of an environment of an intelligent television;
FIG. 2A includes a first view of an embodiment of an intelligent television;
FIG. 2B includes a second view of an embodiment of an intelligent television;
FIG. 2C includes a third view of an embodiment of an intelligent television;
FIG. 2D includes a fourth view of an embodiment of an intelligent television;
FIG. 3 is a block diagram of an embodiment of the hardware of an intelligent television;
FIG. 4 is a block diagram of an embodiment of the intelligent television software and/or firmware;
FIG. 5 is a second block diagram of an embodiment of the intelligent television software and/or firmware;
FIG. 6 is a third block diagram of an embodiment of the intelligent television software and/or firmware;
FIG. 7 is a plan view of an embodiment of a handheld remote control;
FIG. 8 is a side view of an embodiment of a remote control;
FIG. 9A is a bottom view of an embodiment of a remote control with a joystick in a neutral position;
FIG. 9B is a bottom view of an embodiment of a remote control with the joystick in a lower position;
FIG. 9C is a bottom view of an embodiment of a remote control with the joystick in an upper position;
FIG. 10 is a plan view of another embodiment of a handheld remote control;
FIG. 11A is a front view of an embodiment of an Intelligent TV screen;
FIG. 11B is a front view of an embodiment of an Intelligent TV screen;
FIG. 11C is a front view of an embodiment of an Intelligent TV screen;
FIG. 12 is a block diagram of an embodiment of a handheld remote control of either FIG. 7 or FIG. 10;
FIG. 13 is a block diagram of an embodiment of a content data service;
FIG. 14 is an embodiment of an environment for an Intelligent TV;
FIG. 15 is a block diagram of an embodiment of the intelligent television software and/or firmware;
FIG. 16 is a block diagram of an embodiment of a data structure for storing metadata in a media table;
FIG. 17 is a block diagram of an embodiment of a data structure for storing metadata in a personal media table;
FIG. 18 is a block diagram of an embodiment of a data structure for storing metadata in a media sources table;
FIG. 19 is an embodiment of a user interface displayed by the media center application;
FIG. 20 is a process diagram of an embodiment of a method the media scanner may perform to provide metadata to the media data service database;
FIG. 21 is a flow diagram of an embodiment of a method of processing metadata received from a media source; and
FIG. 22 is a flow diagram of an embodiment of a process of providing metadata used to generate a user interface.
In the appended figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a letter that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
DETAILED DESCRIPTION

Presented herein are embodiments of a device. The device can be a network-enabled telecommunications device, such as a television, an electronic visual display device, or other smart device. The device can include one or more screens, or sections of a screen, that are configured to receive and present information from a number of sources. Further, the device can receive user input in unique ways. The overall design and functionality of the device provide an enhanced user experience, making the device more useful and more efficient.
Intelligent Television (TV) Environment:
Referring to FIGS. 1A and 1B, an Intelligent TV, or device, 100 is shown. It is anticipated that the Intelligent TV 100 may be used for entertainment, business applications, social interaction, content creation and/or consumption, and to organize and control one or more other devices that are in communication with the Intelligent TV 100. As can be appreciated, the Intelligent TV 100 can be used to enhance the user interactive experience whether at home or at work.
In some embodiments, the Intelligent TV 100 may be configured to receive and understand a variety of user and/or device inputs. For example, a user may interface with the Intelligent TV 100 via one or more physical or electrical controls, such as buttons, switches, touch sensitive screens/regions (e.g., capacitive touch, resistive touch, etc.), and/or other controls associated with the Intelligent TV 100. In some cases, the Intelligent TV 100 may include the one or more interactive controls. Additionally or alternatively, the one or more controls may be associated with a remote control. The remote control may communicate with the Intelligent TV 100 via wired and/or wireless signals. As can be appreciated, the remote control may operate via radio frequency (RF), infrared (IR), and/or a specific wireless communications protocol (e.g., Bluetooth™, Wi-Fi, etc.). In some cases, the controls, whether physical or electrical, may be configured (e.g., programmed) to suit a user's preferences.
Additionally or alternatively, smart phones, tablets, computers, laptops, netbooks, and other smart devices may be used to control the Intelligent TV 100. For example, control of the Intelligent TV 100 may be achieved via an application running on a smart device. The application may be configured to present a user with various Intelligent TV 100 controls in an intuitive user interface (UI) on a screen associated with the device 100. The screen may be a touch sensitive, or touch screen, display. Selections input by a user via the UI may be configured to control the Intelligent TV 100 by the application accessing one or more communication features associated with the smart device.
It is anticipated that the Intelligent TV 100 can receive input via various input devices including, but in no way limited to, video, audio, radio, light, tactile, and combinations thereof. Among other things, these input devices may be configured to allow the Intelligent TV 100 to see, recognize, and react to user gestures. For instance, a user may talk to the Intelligent TV 100 in a conversational manner. The Intelligent TV 100 may hear and understand voice commands in a manner similar to a smart device's intelligent personal assistant and voice-controlled navigator application (e.g., Apple's Siri, Android's Skyvi, Robin, Iris, and other applications).
The Intelligent TV 100 may also be a communications device which can establish network connections 104 through many alternate means, including wired 108 or wireless 112 means, over cellular networks 116 to connect via cellular base antenna 142 to telephone networks operated by telephone company 146, and by using a telephone line 120 to connect to telephone networks operated by telephone company 146. These connections 104 enable the Intelligent TV 100 to access one or more communication networks 132. The communication networks may comprise any type of known communication medium or collection of communication media and may use any type of protocols to transport messages or signals between endpoints. The communication networks may include wired and/or wireless communication technologies. The Internet is an example of a communication network 132 that constitutes an Internet Protocol (IP) network consisting of many computers, computing networks, and other communication devices located all over the world, which are connected through many telephone systems and other means.
Other examples of the communication network 132 include, without limitation, a standard Plain Old Telephone System (POTS), an Integrated Services Digital Network (ISDN), the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a cellular network, and any other type of packet-switched or circuit-switched network known in the art. In addition, it can be appreciated that the communication network 132 need not be limited to any one network type, and instead may be comprised of a number of different networks and/or network types.
In some embodiments, the Intelligent TV 100 may be equipped with multiple communication means. The multiple communication means may allow the Intelligent TV 100 to communicate across Local Area Networks (LANs) 124, wireless local area networks (WLANs) 128, and other networks 132. The networks 132 may be connected in a redundant manner to ensure network access. In other words, if one connection is interrupted, the Intelligent TV 100 can use an alternate communications path to reestablish and/or maintain the network connection 104. Among other things, the Intelligent TV 100 may use these network connections 104 to send and receive information, interact with an electronic program guide (EPG) 136, receive software updates 140, contact customer service 144 (e.g., to receive help or service, etc.), and/or access remotely stored digital media libraries 148. In addition, these connections can allow the Intelligent TV 100 to make phone calls, send and/or receive email messages, send and/or receive text messages (such as email and instant messages), surf the Internet using an internet search engine, post blogs by a blogging service, and connect/interact with social media sites and/or an online community (e.g., Facebook™, Twitter™, LinkedIn™, Pinterest™, Google+™, MySpace™, and the like) maintained by a social network service. In combination with other components of the Intelligent TV 100 described in more detail below, these network connections 104 also enable the Intelligent TV 100 to conduct video teleconferences, electronic meetings, and other communications. The Intelligent TV 100 may capture and store images and sound, using associated cameras, microphones, and other sensors. Additionally or alternatively, the Intelligent TV 100 may create and save screen shots of media, images, and data displayed on a screen associated with the Intelligent TV 100.
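A hedged sketch of that failover behavior is shown below: probe each configured path in preference order and keep the first that answers. The gateway host names are placeholders, not addresses from the disclosure.

```python
import socket

# Candidate paths in preference order: wired LAN, WLAN, then cellular.
PATHS = [
    ("lan-gateway.example", 80),
    ("wlan-gateway.example", 80),
    ("cellular-gateway.example", 80),
]

def first_reachable(paths, timeout: float = 2.0):
    for host, port in paths:
        try:
            # A successful TCP connect means this path is currently usable.
            with socket.create_connection((host, port), timeout=timeout):
                return (host, port)
        except OSError:
            continue  # path interrupted or unavailable; try the next one
    return None  # no network connection 104 could be established

print("active path:", first_reachable(PATHS))
```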
Further, as shown in FIG. 1B, the Intelligent TV 100 can interact with other electronic devices 168 by either the wired 108 and/or wireless 112 connections. As described herein, components of the Intelligent TV 100 allow the device 100 to be connected to devices 168 including, but not limited to, DVD players 168a, BluRay players 168b, portable digital media devices 168c, smart phones 168d, tablet devices 168e, personal computers 168f, external cable boxes 168g, keyboards 168h, pointing devices 168i, printers 168j, game controllers and/or game pads 168k, satellite dishes 168l, external display devices 168m, and other universal serial bus (USB), local area network (LAN), Bluetooth™, or high-definition multimedia interface (HDMI) compliant devices, and/or wireless devices. When connected to an external cable box 168g or satellite dish 168l, the Intelligent TV 100 can access additional media content. Also, as further described below, the Intelligent TV 100 is capable of receiving digital and/or analog signals broadcast by TV stations. The Intelligent TV 100 can be configured as one or more of a standard-definition television, enhanced television, and high-definition television. It may operate as one or more of cable, Internet, Internet Protocol, satellite, web, and/or smart television. The Intelligent TV 100 may also be used to control the operation of, and may interface with, other smart components such as security systems 172, door/gate controllers 176, remote video cameras 180, lighting systems 184, thermostats 188, refrigerators 192, and other appliances.
Intelligent TV:
FIGS. 2A-2D illustrate components of the Intelligent TV 100. In general, as shown by FIG. 2A, the Intelligent TV 100 can be supported by a removable base or stand 204 that is attached to a frame 208. The frame 208 surrounds edges of a display screen 212, leaving a front surface of the display screen 212 uncovered. The display screen 212 may comprise a Liquid Crystal Display (LCD) screen, a plasma screen, a Light Emitting Diode (LED) screen, or other screen types. In embodiments, the entire front surface of the screen 212 may be touch sensitive and capable of receiving input by the user touching the front surface of the screen 212.
The Intelligent TV 100 may include integrated speakers 216 and at least one microphone 220. A first area of the frame 208 may comprise a horizontal gesture capture region 224 and second areas comprise vertical gesture capture regions 228. The gesture capture regions 224, 228 may comprise areas or regions that are capable of receiving input by recognizing gestures made by the user, and in some examples, without the need for the user to actually touch the screen 212 surface of the Intelligent TV 100. However, the gesture capture regions 224, 228 may not include pixels that can perform a display function or capability.
One or more image capture devices 232, such as a camera, can be included for capturing still and/or video images. The image capture device 232 can include or be associated with additional elements, such as a flash or other light source 236 and a range finding device 240 to assist focusing of the image capture device. In addition, the microphone 220, gesture capture regions 224, 228, image capture devices 232, and the range finding device 240 may be used by the Intelligent TV 100 to recognize individual users. Additionally or alternatively, the Intelligent TV 100 may learn and remember preferences associated with the individual users. In some embodiments, the learning and remembering (i.e., identifying and recalling stored information) may be associated with the recognition of a user.
An IR transmitter and receiver 244 may also be provided to connect the Intelligent TV 100 with a remote control device (not shown) or other IR devices. Additionally or alternatively, the remote control device may transmit wireless signals via RF, light, and/or a means other than IR. Also shown in FIG. 2A is an audio jack 248, which may be hidden behind a panel that is hinged or removable. The audio jack 248 accommodates a tip, ring, sleeve (TRS) connector, for example, to allow the user to utilize headphones, a headset, or other external audio equipment.
The Intelligent TV 100 can also include a number of buttons 252. For example, FIG. 2A illustrates the buttons 252 on the top of the Intelligent TV 100, although the buttons could be placed at other locations. As shown, the Intelligent TV 100 includes six buttons 252a-f, which can be configured for specific inputs. For example, the first button 252a may be configured as an on/off button used to control overall system power to the Intelligent TV 100. The buttons 252 may be configured to, in combination or alone, control a number of aspects of the Intelligent TV 100. Some non-limiting examples include overall system volume, brightness, the image capture device, the microphone, and initiation/termination of a video conference. Instead of separate buttons, two of the buttons may be combined into a rocker button. This rocker button arrangement may be useful in situations where the buttons are configured to control features such as volume or brightness. In some embodiments, one or more of the buttons 252 are capable of supporting different user commands. By way of example, a normal press has a duration commonly of less than about 1 second and resembles a quick input. A medium press has a duration commonly of 1 second or more but less than about 12 seconds. A long press has a duration commonly of about 12 seconds or more. The function of the buttons is normally specific to the application that is active on the Intelligent TV 100. In the video conference application, for instance, and depending on the particular button, a normal, medium, or long press can mean end the video conference, increase or decrease the volume, increase a rate speed associated with a response to an input, or toggle microphone mute. Depending on the particular button, a normal, medium, or long press can also control the image capture device 232 to increase zoom, decrease zoom, take a photograph, or record video.
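A minimal sketch of classifying presses by the durations given above (the action table is hypothetical and application-specific):

```python
NORMAL_MAX = 1.0   # seconds; below this is a normal (quick) press
MEDIUM_MAX = 12.0  # seconds; below this (and >= 1 s) is a medium press

def classify_press(duration_s: float) -> str:
    if duration_s < NORMAL_MAX:
        return "normal"
    if duration_s < MEDIUM_MAX:
        return "medium"
    return "long"

# Assumed mapping for a video conference application, for illustration.
ACTIONS = {
    "normal": "toggle microphone mute",
    "medium": "increase or decrease the volume",
    "long":   "end the video conference",
}

for seconds in (0.3, 4.0, 15.0):
    print(f"{seconds:>5}s press -> {ACTIONS[classify_press(seconds)]}")
```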
In support of communications functions or capabilities, the Intelligent TV 100 can include one or more shared or dedicated antennae 256 and wired broadband connections 260, as shown in FIG. 2B. The antennae 256 also enable the Intelligent TV 100 to receive digital and/or analog broadcast TV channels. The wired broadband connections 260 are, for example, a Digital Subscriber Line (DSL), an optical line, an Ethernet port, an IEEE 1394 interface, or other interfaces. The Intelligent TV 100 also has a telephone line jack 262 to further provide communications capability.
In addition to the removable base 204, the Intelligent TV 100 may include hardware and mounting points 264 on a rear surface to facilitate mounting the Intelligent TV 100 to a surface, such as a wall. In one example, the Intelligent TV 100 may incorporate at least one Video Electronics Standards Association (VESA) mounting interface for attaching the device 100 to the surface.
As shown in FIG. 2C, the Intelligent TV 100 may include docking interfaces or ports 268. The docking ports 268 may include proprietary or universal ports to support the interconnection of the Intelligent TV 100 to other devices or components, which may or may not include additional or different capabilities from those integral to the Intelligent TV 100. In addition to supporting an exchange of communication signals between the Intelligent TV 100 and a connected device or component, the docking ports 268 can support the supply of power to the connected device or component. The docking ports 268 can also comprise an intelligent element that comprises a docking module for controlling communications or other interactions between the Intelligent TV 100 and the connected device or component.
The Intelligent TV 100 also includes a number of card slots 272 and network or peripheral interface ports 276. The card slots 272 may accommodate different types of cards including subscriber identity modules (SIM), secure digital (SD) cards, MiniSD cards, flash memory cards, and other cards. Ports 276 in embodiments may include input/output (I/O) ports, such as universal serial bus (USB) ports, parallel ports, game ports, and high-definition multimedia interface (HDMI) connectors.
An audio/video (A/V) I/O module280 can be included to provide audio to an interconnected speaker or other device, and to receive audio input from a connected microphone or other device. As an example, the audio input/output interface280 may comprise an associated amplifier and analog-to-digital converter.
Hardware Features:
FIG. 3 illustrates components of an Intelligent TV 100 in accordance with embodiments of the present disclosure. In general, the Intelligent TV 100 includes a primary screen 304. The screen 304 can be a touch sensitive screen and can include different operative areas.
For example, a first operative area, within the screen 304, may comprise a display 310. In some embodiments, the display 310 may be touch sensitive. In general, the display 310 may comprise a full color display.
A second area within the screen 304 may comprise a gesture capture region 320. The gesture capture region 320 may comprise an area or region that is outside of the display 310 area and that is capable of receiving input, for example in the form of gestures provided by a user. However, the gesture capture region 320 does not include pixels that can perform a display function or capability.
A third region of the screen 304 may comprise a configurable area 312. The configurable area 312 is capable of receiving input and has display or limited display capabilities. In embodiments, the configurable area 312 may present different input options to the user. For example, the configurable area 312 may display buttons or other relatable items. Moreover, the identity of displayed buttons, or whether any buttons are displayed at all within the configurable area 312 of a screen 304, may be determined from the context in which the Intelligent TV 100 is used and/or operated.
In an exemplary touch sensitive screen 304 embodiment, the touch sensitive screen 304 comprises a liquid crystal display extending across at least those regions of the touch sensitive screen 304 that are capable of providing visual output to a user, and a capacitive input matrix over those regions of the touch sensitive screen 304 that are capable of receiving input from the user.
One or more display controllers 316 may be provided for controlling the operation of the screen 304. The display controller 316 may control the operation of the touch sensitive screen 304, including input (touch sensing) and output (display) functions. The display controller 316 may also interface with other inputs, such as infrared and/or radio input signals (e.g., door/gate controllers, alarm system components, etc.). In accordance with still other embodiments, the functions of a display controller 316 may be incorporated into other components, such as a processor 364.
The processor 364 may comprise a general purpose programmable processor or controller for executing application programming or instructions. In accordance with at least some embodiments, the processor 364 may include multiple processor cores and/or implement multiple virtual processors. In accordance with still other embodiments, the processor 364 may include multiple physical processors. As a particular example, the processor 364 may comprise a specially configured application specific integrated circuit (ASIC) or other integrated circuit, a digital signal processor, a controller, a hardwired electronic or logic circuit, a programmable logic device or gate array, a special purpose computer, or the like. The processor 364 generally functions to run programming code or instructions implementing various functions of the Intelligent TV 100.
In support of connectivity functions or capabilities, the Intelligent TV 100 can include a module for encoding/decoding and/or compression/decompression 366 for receiving and managing digital television information. The encoding/decoding and compression/decompression module 366 enables decompression and/or decoding of analog and/or digital information dispatched by a public television chain or in a private television network and received across the antenna 324, I/O module 348, wireless connectivity module 328, and/or other wireless communications module 332. The television information may be sent to the screen 304 and/or to attached speakers receiving analog or digital reception signals. Any encoding/decoding and compression/decompression can be performed for various formats (e.g., audio, video, and data). An encrypting module 368 is in communication with the encoding/decoding and compression/decompression module 366 and enables the confidentiality of all of the data received or transmitted by the user or supplier.
In support of communications functions or capabilities, the Intelligent TV 100 can include a wireless connectivity module 328. As examples, the wireless connectivity module 328 can comprise a GSM, CDMA, FDMA, and/or analog cellular telephony transceiver capable of supporting voice, multimedia, and/or data transfers over a cellular network. Alternatively or in addition, the Intelligent TV 100 can include an additional or other wireless communications module 332. As examples, the other wireless communications module 332 can comprise a Wi-Fi, Bluetooth™, WiMax, infrared, or other wireless communications link. The wireless connectivity module 328 and the other wireless communications module 332 can each be associated with a shared or a dedicated antenna 324 and a shared or dedicated I/O module 348.
An input/output module 348 and associated ports may be included to support communications over wired networks or links, for example with other communication devices, server devices, and/or peripheral devices. Examples of an input/output module 348 include an Ethernet port, a Universal Serial Bus (USB) port, a Thunderbolt™ or Light Peak interface, an Institute of Electrical and Electronics Engineers (IEEE) 1394 port, or other interface.
An audio input/output interface/device(s) 344 can be included to provide analog audio to an interconnected speaker or other device, and to receive analog audio input from a connected microphone or other device. As an example, the audio input/output interface/device(s) 344 may comprise an associated amplifier and analog-to-digital converter. Alternatively or in addition, the Intelligent TV 100 can include an integrated audio input/output device 356 and/or an audio jack for interconnecting an external speaker or microphone. For example, an integrated speaker and an integrated microphone can be provided to support near talk or speaker phone operations.
A port interface 352 may be included. The port interface 352 may include proprietary or universal ports to support the interconnection of the device 100 to other devices or components, such as a dock, which may or may not include additional or different capabilities from those integral to the device 100. In addition to supporting an exchange of communication signals between the device 100 and another device or component, the docking port 136 and/or port interface 352 can support the supply of power to or from the device 100. The port interface 352 also comprises an intelligent element that comprises a docking module for controlling communications or other interactions between the Intelligent TV 100 and a connected device or component. The docking module may interface with software applications that allow for the remote control of other devices or components (e.g., media centers, media players, and computer systems).
An Intelligent TV 100 may also include memory 308 for use in connection with the execution of application programming or instructions by the processor 364, and for the temporary or long term storage of program instructions and/or data. As examples, the memory 308 may comprise RAM, DRAM, SDRAM, or other solid state memory. Alternatively or in addition, data storage 314 may be provided. Like the memory 308, the data storage 314 may comprise a solid state memory device or devices. Alternatively or in addition, the data storage 314 may comprise a hard disk drive or other random access memory.
Hardware buttons 358 can be included, for example, for use in connection with certain control operations. One or more image capture interfaces/devices 340, such as a camera, can be included for capturing still and/or video images. Alternatively or in addition, an image capture interface/device 340 can include a scanner, code reader, or motion sensor. An image capture interface/device 340 can include or be associated with additional elements, such as a flash or other light source. The image capture interfaces/devices 340 may interface with a user ID module 350 that assists in identifying users of the Intelligent TV 100.
The Intelligent TV 100 can also include a global positioning system (GPS) receiver 336. In accordance with embodiments of the present invention, the GPS receiver 336 may further comprise a GPS module that is capable of providing absolute location information to other components of the Intelligent TV 100. As will be appreciated, other satellite-positioning system receivers can be used in lieu of or in addition to GPS.
Power can be supplied to the components of the Intelligent TV 100 from a power source and/or power control module 360. The power control module 360 can, for example, include a battery, an AC-to-DC converter, power control logic, and/or ports for interconnecting the Intelligent TV 100 to an external source of power.
Communication between components of the Intelligent TV 100 is provided by bus 322. Bus 322 may comprise one or more physical buses for control, addressing, and/or data transmission. Bus 322 may be parallel, serial, a hybrid thereof, or another technology.
Firmware and Software:
An embodiment of the software system components and modules 400 is shown in FIG. 4. The software system 400 may comprise one or more layers including, but not limited to, an operating system kernel 404, one or more libraries 408, an application framework 412, and one or more applications 416. The one or more layers 404-416 can communicate with each other to perform functions for the Intelligent TV 100.
An operating system (OS) kernel 404 contains the primary functions that allow the software to interact with hardware associated with the Intelligent TV 100. The kernel 404 can include a collection of software that manages the computer hardware resources and provides services for other computer programs or software code. The operating system kernel 404 is the main component of the operating system and acts as an intermediary between the applications and data processing done with the hardware components. Part of the operating system kernel 404 can include one or more device drivers 420. A device driver 420 can be any code within the operating system that helps operate or control a device or hardware attached to or associated with the Intelligent TV. The driver 420 can include code for operating video, audio, and/or other multimedia components of the Intelligent TV 100. Examples of drivers include display, camera, flash, binder (IPC), keypad, WiFi, and audio drivers.
The library 408 can contain code or other components that may be accessed and implemented during the operation of the software system 400. The library 408 may contain one or more of, but is not limited to, an operating system runtime library 424, a TV services hardware abstraction layer (HAL) library 428, and/or a data service library 432. The OS runtime library 424 may contain the code required by the operating system kernel 404 or other operating system functions to be executed during the runtime of the software system 400. The library can include the code that is initiated during the running of the software system 400.
The TV services hardware abstraction layer library 428 can include code required by TV services either executed in the application framework 412 or in an application 416. The TV services HAL library 428 is specific to the Intelligent TV 100 operations that control different functions of the Intelligent TV. The TV service HAL library 428 can also be formed from other types of application languages or embodiments of different types of code or formats for code beyond the hardware abstraction layer.
The data services library 432 can include the one or more components or code to implement the data services function. The data services function can be implemented in the application framework 412 and/or applications layer 416. An embodiment of a function of the data services and the type of components that may be included is shown in FIG. 6.
The application framework 412 can include a general abstraction for providing functionality that can be selected by one or more applications 416 to provide specific application functions or software for those applications. Thus, the framework 412 can include one or more different services, or other applications, that can be accessed by the applications 416 to provide general functions across two or more applications. Such functions include, for example, management of one or more of windows or panels, surfaces, activities, content, and resources. The application framework 412 can include one or more of, but is not limited to, TV services 434, a TV services framework 440, TV resources 444, and user interface components 448.
The TV services framework 440 can provide an additional abstraction for different TV services. The TV services framework 440 allows for the general access and function of services that are associated with the TV functionality. The TV services 436 are general services provided within the TV services framework 440 that can be accessed by applications in the applications layer 416. The TV resources 444 provide code for accessing TV resources 444, including any type of storage, video, audio, or other functionality provided with the Intelligent TV 100. The TV resources 444, TV services 436, and TV services framework 440 provide for the different implementations of TV functionality that may occur with the Intelligent TV 100.
One or more user interface components 448 can provide general components for display of the Intelligent TV 100. The user interface components 448 might be general components that may be accessed by different applications provided in the application framework 412. The user interface components 448 may be accessed to provide for panels and silos as described in conjunction with FIG. 5.
The applications layer 416 can both contain and execute applications associated with the Intelligent TV 100. The applications layer 416 may include one or more of, but is not limited to, a live TV application 452, a video on demand application 456, a media center application 460, an application center application 464, and a user interface application 468. The live TV application 452 can provide live TV over different signal sources. For example, the live TV application 452 can provide TV from input from cable television, over-the-air broadcasts, satellite services, or other types of live TV services. The live TV application 452 may then present the multimedia presentation or video and audio presentation of the live television signal over the display of the Intelligent TV 100.
The video on demand application 456 can provide for video from different storage sources. Unlike the live TV application 452, video on demand 456 provides for display of videos that are accessed from some memory source. The sources of the video on demand can be associated with users, with the Intelligent TV, or with some other type of service. For example, the video on demand 456 may be provided from an iTunes library stored in a cloud, from a local disc storage that contains stored video programs, or from some other source.
The media center application 460 can provide applications for different types of media presentation. For example, the media center 460 can provide for displaying pictures or playing audio that is different from live TV or video on demand but is still accessible by the user. The media center 460 allows for the access of different sources to obtain the media and the display of such media on the Intelligent TV 100.
The application center 464 allows for the provision, storage, and use of applications. An application can be a game, a productivity application, or some other application generally associated with computer systems or other devices, but which may be operated within the Intelligent TV. An application center 464 may obtain these applications from different sources, store them locally, and then execute those types of applications for the user on the Intelligent TV 100.
The user interface application 468 provides for the specific user interfaces associated with the Intelligent TV 100. These user interfaces can include the silos and panels that are described in FIG. 5. An embodiment of the user interface software 500 is shown in FIG. 5. Here, the application framework 412 contains one or more code components which help control the user interface events, while one or more applications in the applications layer 416 affect the user interface used for the Intelligent TV 100. The application framework 412 can include a silo transition controller 504 and/or an input event dispatcher 508. There may be more or fewer code components in the application framework 412 than those shown in FIG. 5. The silo transition controller 504 contains the code and language that manages the transitions between one or more silos. A silo can be a vertical user interface feature on the Intelligent TV that contains information for the user. The transition controller 504 can manage the changes between two silos when an event occurs in the user interface. The input event dispatcher 508 can receive user interface events that may be received from the operating system and provided to the input event dispatcher 508. These events can include selections of buttons on a remote control or on the TV, or other types of user interface inputs. The input event dispatcher 508 may then send these events to a silo manager 532 or panel manager 536 depending on the type of the event, as illustrated in the sketch below. The silo transition controller 504 can interface with the silo manager 532 to affect changes in the silos.
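The following is an illustrative sketch of the dispatch step just described: an input event dispatcher that routes user interface events to a silo manager or a panel manager based on the event type. The class names, event types, and handler signatures are hypothetical assumptions made for illustration only:

```java
// Hedged sketch: routing user interface events to a silo manager or a
// panel manager depending on the event type, as described above.
import java.util.function.Consumer;

public class InputEventDispatcher {

    public enum EventType { SILO_TRANSITION, PANEL_ACTION }

    public static class UiEvent {
        final EventType type;
        final String detail;
        UiEvent(EventType type, String detail) { this.type = type; this.detail = detail; }
    }

    private final Consumer<UiEvent> siloManager;
    private final Consumer<UiEvent> panelManager;

    public InputEventDispatcher(Consumer<UiEvent> siloManager, Consumer<UiEvent> panelManager) {
        this.siloManager = siloManager;
        this.panelManager = panelManager;
    }

    /** Sends the event to the silo or panel manager depending on its type. */
    public void dispatch(UiEvent event) {
        if (event.type == EventType.SILO_TRANSITION) {
            siloManager.accept(event);
        } else {
            panelManager.accept(event);
        }
    }

    public static void main(String[] args) {
        InputEventDispatcher d = new InputEventDispatcher(
                e -> System.out.println("silo manager handles: " + e.detail),
                e -> System.out.println("panel manager handles: " + e.detail));
        d.dispatch(new UiEvent(EventType.SILO_TRANSITION, "switch to Live TV silo"));
        d.dispatch(new UiEvent(EventType.PANEL_ACTION, "open volume panel"));
    }
}
```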
The applications layer 416 can include a user interface application 468 and/or a silo application 512. The applications layer 416 can include more or fewer user interface applications, as necessary to control the user interface of the Intelligent TV 100, than those shown in FIG. 5. The user interface application 468 can include a silo manager 532, a panel manager 536, and one or more types of panels 516-528. The silo manager 532 manages the display and/or features of silos. The silo manager 532 can receive or send information from or to the silo transition controller 504 or the input event dispatcher 508 to change the silos displayed and/or to determine types of input received in the silos.
A panel manager 536 is operable to display panels in the user interface, to manage transitions between those panels, or to affect user interface inputs received in the panel. The panel manager 536 may thus be in communication with different user interface panels, such as a global panel 516, a volume panel 520, a settings panel 524, and/or a notification panel 528. The panel manager 536 can display these types of panels depending on the inputs received from the input event dispatcher 508. The global panel 516 may include information that is associated with the home screen or top level hierarchical information for the user. A volume panel 520 may display information about an audio volume control or other settings for volume. A settings panel 524 can include information displayed about the settings of the audio or video, or other settable characteristics of the Intelligent TV 100. A notification panel 528 can provide information about notifications to a user. These notifications can be associated with information such as video on demand displays, favorites, currently provided programs, or other information. Notifications can be associated with the media, with some type of setting, or with an operation of the Intelligent TV 100. The panel manager 536 may be in communication with the panel controller 552 of the silo application 512.
The panel controller 552 may operate to control portions of the panels of the types described previously. Thus, the panel controller 552 may be in communication with a top panel application 540, an application panel 544, and/or a bottom panel 548. These types of panels may be differently displayed in the user interface of the Intelligent TV 100. The panel controller may thus, based on the configuration of the system or the type of display currently being used, put the types of panels 516-528 into a certain display orientation governed by the top panel application 540, application panel 544, or bottom panel application 548.
An embodiment of the data service 432 and the operation of the data management is shown in FIG. 6. The data management 600 can include one or more code components that are associated with different types of data. For example, there may be code components within the data service 432 that execute and are associated with video on demand, the electronic program guide, or media data. There may be more or fewer types of data service 432 components than those shown in FIG. 6. Each of the different types of data may include a data model 604-612. The data models govern what information is to be stored and how that information will be stored by the data service. Thus, the data model can govern, regardless of where the data comes from, how the data will be received or managed within the Intelligent TV system. Thus, the data model 604, 608, and/or 612 can provide a translation ability, or affect the ability to translate data from one form to another, to be used by the Intelligent TV 100.
The different types of data services (video on demand, electronic programming guide, media) each have a data subservice 620, 624, and/or 628 that is in communication with one or more internal and/or external content providers 616. The data subservices 620, 624, and 628 communicate with the content providers 616 to obtain data that may then be stored in databases 632, 636, and 640. The subservices 620, 624, and 628 may communicate with and initiate or enable one or more source plug-ins 644, 648, and 652 to communicate with the content provider. For each content provider 616, there may be a different source plug-in 644, 648, and 652. Thus, if there is more than one source of content for the data, each of the data subservices 620, 624, and 628 may determine and then enable or initiate a different source plug-in 644, 648, and/or 652, as sketched below. The content providers 616 may also provide information to a resource arbitrator 656 and/or thumbnail cache manager 660. The resource arbitrator 656 may operate to communicate with resources 664 that are external to the data service 432. Thus, the resource arbitrator 656 may communicate with cloud based storage, network based storage, or other types of external storage in the resources 664. This information may then be provided through the content provider module 616 to the data subservices 620, 624, 628. Likewise, a thumbnail cache manager 660 may obtain thumbnail information from one of the data subservices 620, 624, 628 and store that information in the thumbnails database 668. Further, the thumbnail cache manager 660 may extract or retrieve that information from the thumbnails database 668 to provide to one of the data subservices 620, 624, 628.
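A minimal sketch of the per-provider plug-in selection follows, assuming a registry keyed by a provider identifier; the interface, method names, and provider identifiers are illustrative assumptions rather than the disclosed implementation:

```java
// Hedged sketch: a data subservice keeps one source plug-in per content
// provider and enables the matching plug-in when data is requested.
import java.util.HashMap;
import java.util.Map;

public class DataSubservice {

    interface SourcePlugin {
        String fetch(String query);
    }

    private final Map<String, SourcePlugin> pluginsByProvider = new HashMap<>();

    /** Registers a different source plug-in for each content provider. */
    public void registerPlugin(String providerId, SourcePlugin plugin) {
        pluginsByProvider.put(providerId, plugin);
    }

    /** Enables the plug-in matching the provider and fetches data for storage. */
    public String fetchFromProvider(String providerId, String query) {
        SourcePlugin plugin = pluginsByProvider.get(providerId);
        if (plugin == null) {
            throw new IllegalArgumentException("no plug-in for provider " + providerId);
        }
        return plugin.fetch(query);
    }

    public static void main(String[] args) {
        DataSubservice epg = new DataSubservice();
        epg.registerPlugin("cable-headend", q -> "EPG rows for " + q);
        epg.registerPlugin("internet-guide", q -> "parsed web guide for " + q);
        System.out.println(epg.fetchFromProvider("cable-headend", "tonight"));
    }
}
```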
An exemplary content aggregation architecture 1300 is shown in FIG. 13. The architecture can include a user interface layer 1304 and a content aggregation layer 1308. The user interface layer 1304 may include a TV application 1312, a media player 1316, and application(s) 1320. The TV application 1312 enables the viewer to view channels received via an appropriate transmission medium, such as cable, satellite, and/or the Internet. The media player 1316 is used to view other types of media received via an appropriate transmission medium, such as the Internet. The application(s) 1320 include other TV-related (pre-installed) applications, such as content viewing, content searching, device viewing, and setup algorithms, and coordinate with the media player 1316 to provide information to the viewer.
The content source layer 1308 includes, as data services, a content source service 1328, a content aggregation service 1332, and a content presentation service 1336. The content source service 1328 can manage content source investigators, including local and/or network file system(s), a digital network device manager (which discovers handheld and non-handheld devices (e.g., digital media servers, players, renderers, controllers, printers, uploaders, downloaders, network connectivity functions, and interoperability units) by known techniques, such as multicast universal plug and play or UPnP discovery techniques, and, for each discovered device, retrieves, parses, and encodes device descriptors, notifies the content source service of the newly discovered device, and provides information, such as an index, on previously discovered devices), Internet Protocol Television or IPTV, digital television or DTV (including high definition and enhanced TV), third party services (such as those referenced above), and applications (such as Android applications).
Content source investigators can track content sources and are typically configured as binaries. The content source service 1328 starts content source investigators and maintains open and persistent channels for communications. The communications include query or command and response pairs. The content aggregation service 1332 can manage content metadata fetchers, such as for video, audio, and/or picture metadata. The content presentation service 1336 may provide interfaces to the content index 1340, such as an Android application interface and digital device interfaces.
The content source service 1328 can send and receive communications 1344 to and from the content aggregation service 1332. The communications can include notifications regarding new and removed digital devices and/or content, and search queries and results. The content aggregation service 1332 can send and receive communications 1348 to and from the content presentation service 1336, including device and/or content lookup notifications, content-of-interest advisories and notifications, and search queries and results.
When a search is performed, particularly when the user is searching or browsing content, a user request may be received from the user interface layer 1304 by the content presentation service 1336, which responsively opens a socket and sends the request to the content aggregation service 1332. The content aggregation service 1332 first returns results from the local database 1340. The local database 1340 includes an index or data model and indexed metadata. The content source service 1328 further issues search and browse requests for all content source investigators and other data management systems. The results are forwarded to the content aggregation service 1332, which updates the database 1340 to reflect the further search results and provides the original content aggregation database search results and the data updates, reflecting the additional content source service search results, over the previously opened socket to the content presentation service 1336. The content presentation service 1336 then provides the results to one or more components in the user interface layer 1304 for presentation to the viewer. When the search session is over (e.g., the search session is terminated by the user or by an action associated with the user), the user interface layer 1304 disconnects the socket. As shown, media can be provided directly by the content aggregation service 1332 to the media player 1316 for presentation to the user. A sketch of this two-phase search flow follows.
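The two-phase behavior described above (immediate local results, then merged investigator results pushed over the same open connection) can be sketched as follows; the types, the simulated investigator, and the use of a callback in place of the socket are simplifying assumptions for illustration:

```java
// Hedged sketch: the aggregation service first answers from the local
// index, then merges the slower content source investigator results and
// pushes the update back over the still-open connection.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ContentAggregationService {

    private final List<String> localIndex = new ArrayList<>(List.of("Movie A", "Movie B"));

    /** Stand-in for the content source service querying all investigators. */
    private List<String> queryInvestigators(String term) {
        return List.of("Movie C (network share)");
    }

    public void search(String term, Consumer<List<String>> socket) {
        // Phase 1: immediate results from the local database/index.
        socket.accept(new ArrayList<>(localIndex));
        // Phase 2: further results from content source investigators,
        // merged into the index and sent over the same connection.
        List<String> more = queryInvestigators(term);
        localIndex.addAll(more);
        socket.accept(new ArrayList<>(localIndex));
    }

    public static void main(String[] args) {
        new ContentAggregationService().search("movie",
                results -> System.out.println("presented to viewer: " + results));
    }
}
```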
Remote Control:
A handheld remote control can be provided to enable user interaction with the Intelligent TV 100. An exemplary handheld remote control is shown in FIGS. 7-9. The remote control 700 can include one or more of, but is not limited to: top, side, and bottom housings 704, 708, and 712; an (on/off) power button 716; an input source button 720 (to select an input source such as Live TV, video on demand, media center, application center, high definition multimedia interface or HDMI, component or COMP, audio/video or A/V, digital or analog television or DTV/ATV, and video graphics array (VGA)); a (volume) mute button 724; a Live TV button 728 (to activate or select the Live TV silo); a video on demand (VOD) button 732 (to activate or select the video on demand silo); a media center button 736 (to activate or select the media center application or silo, which accesses various types of media such as music, TV programming, videos, and the like); an application center button 740 (to activate or select the application center application or silo); a global panel button 744; an application panel button 748; a back button 752 (to select a prior user operation or Intelligent TV state and/or navigate up a hierarchy of any displayed image or object(s), in which case the back button 752 does not navigate within application panels or across application silos); a play button 756 (to play or pause media); a D-pad 760 (which includes north, east, west, and south directional arrows to navigate among displayed images and/or move between levels of an application's or object's hierarchy, such as application view navigation, panel navigation, and collection navigation); an OK (or select) button 764 (to select a highlighted displayed image (such as displayed speed control, rewind, forward, play, and pause objects and/or objects on a menu bar or in a menu box) and/or navigate down a hierarchy of any displayed image or object(s)); a rocker-type volume-up and volume-down button 768 (to adjust the volume); a menu/guide button 772 (to select for display a menu or guide of programming); a 0-9 (number) button 776 (to display a number pad on the TV screen); a settings button 780 (which launches an application to access and change current TV settings, such as channel settings and settings used to adjust picture and sound effects (e.g., image mode (e.g., standard, playground, game, cinema, concert, and studio), brightness, contrast, saturation, color temperature, energy savings, 3D noise reduction, hue, sharpness, zoom mode (e.g., full screen, standard, smart zoom, and dot-to-dot), picture position, and 3D mode for picture, and sound retrieval system or SRS TruSurround, sound mode (e.g., standard, live 1, live 2, theatre, music, speech, and user equalizer mode), Left/Right speaker balance, auto volume control, and Sony/Philips Interconnect Format or S/PDIF (off, auto, pulse code modulation or PCM) for sound), and system settings, such as system (e.g., selected language for the graphical user interface, user geographical and/or geopolitical location information, input method, area settings, and sleep time), network (e.g., WiFi, WiFi hotspot, WiFi direct, Point-to-Point Protocol over Ethernet or PPPoE (asymmetric digital subscriber line or ADSL), and Ethernet) settings (e.g., enabled and disabled, and selected and non-selected) and information (e.g., network information (e.g., electronic address such as Internet Protocol or IP address, subnet mask, gateway, domain name server information, domain name, Media Access Control or MAC address, service set identification or SSID, security information, and password information) and online status), settings to manage applications (e.g., currently installed applications, currently executing applications, and internal and external computer readable medium usage), and settings to view user information regarding the Intelligent TV 100); a rocker-type channel-up and channel-down button 784 (to increment or decrement the selected channel); first, second, third, and fourth hotkeys 788, 792, 794, and 796; and/or a moveable joystick 900 on a bottom of the remote control 700. The first, second, third, and fourth hotkeys are generally assigned different colors, and this color indexing is depicted as visual indicia on a selected panel to show the currently assigned function, if any, for each hotkey. As can be seen, the actuator layout can provide a highly efficient, satisfactory, and easily usable experience to the end user.
While the functional associations and functions of many of the actuators are readily apparent, those of some of the actuators are not. A number of examples will now be discussed by way of illustration.
The media center button 736, when selected, can provide information regarding music, videos, photographs, collections or groupings of music, videos, and/or photographs, and internal and external computational devices (such as personal computers, laptops, tablet computers, wireless phones, removable computer readable media, and the like), which can be grouped in a selected manner (such as favorites, most recently viewed, most watched or viewed, and most recently added). The information can include previews (which can include selected portions of the media content, duration, file size, date created, date last watched, times watched or viewed, and audio and/or video format information).
The application center button 740, when selected, may provide information regarding pre-installed and downloaded applications. Unlike downloaded applications, pre-installed applications cannot be removed by the user or manually updated. Exemplary pre-installed applications include a web browser, settings control, and content search algorithms. By way of illustration, the application center button 740 can provide a scrollable graphical grid of icons (each icon being associated with an application) currently available in the application center.
The global panel button 744, when selected, can provide the user, via one or more panels or windows, with access to one or more of, but not limited to, silos, notifications, a web browser, system settings, and/or information associated therewith. For example, the global panel button 744 can enable the user to determine what external devices are currently connected to and/or disconnected from the Intelligent TV 100, determine what inputs (e.g., HDMI ports) are currently available for connecting to external devices, determine a connection and/or operational status of a selected external device and/or network (e.g., WiFi connected, Ethernet connected, and offline), assign a custom (or user selected) name to each input source, determine what content is currently being offered on Live TV, on demand, the media center, and/or the application center, access vendor messages and notifications to the user (e.g., system and/or application updates are available), activate the Internet browser, and/or access shortcuts on a displayed shortcut bar to more frequently used and desired applications. Common shortcuts are an Internet browser (e.g., an Internet search engine), system settings, and notifications. The common types of panels are for information (which is typically information related to a currently displayed image and/or content, e.g., title, date/time, audio/visual indicator, rating, and genre), browse requests, and/or search requests (such as a search term field). Each of the panel types may include a panel navigation bar, detailed information or relevant content related to the panel function, operation, and/or purpose, and a hotkey bar (defining currently enabled functional associations of hotkeys).
The application panel button 748, when selected, can display an application window or panel. One application panel may be an information panel regarding a selected (pre-installed or previously downloaded) application icon. The information panel can do one or more of the following: identify the selected application; provide a description of its functionality (including the application developer and/or vendor, version, release, and/or last update date, and a category or type of application based on the application's functionality), user ratings, and/or a degree of downloading of the application by other users (e.g., a star rating assigned based on one or more of the foregoing inputs); provide the option to launch, remove, update, or add to favorites the identified application; and provide a listing of selectable links to other (not yet downloaded) recommended applications that provide functionality similar to the identified application. The latter listing can, in turn, provide a description of the functionality (including the application developer and/or vendor, version, release, and/or last update date, and a category or type of application based on the application's functionality), user ratings, and/or a degree of downloading of the application by other users (e.g., a star rating assigned based on one or more of the foregoing inputs).
The functions of the first, second, third, and fourth hotkeys 788, 792, 794, and 796 can change depending on system state, context, and/or, within a selected screen and/or panel, the content or currently selected portion of (or relative cursor position on) the screen. Commonly, a currently assigned function of any of the first, second, third, and fourth hotkeys 788, 792, 794, and 796 depends on a currently accessed silo and/or panel (with which the user is currently interacting within the silo). In other words, a first function of one of the first, second, third, and fourth hotkeys 788, 792, 794, and 796 is activated by the respective hotkey in a first system state, while a different second function is activated by the respective hotkey in a different second system state. In another example, a third function of one of the first, second, third, and fourth hotkeys 788, 792, 794, and 796 is activated by the respective hotkey when a user focus (or currently selected cursor position or screen portion) is at a first screen position, while a different fourth function is activated by the respective hotkey when a user focus (or currently selected cursor position or screen portion) is at a different second screen position. The first screen position can, for instance, be within an icon while the second screen position is outside of the icon. Hotkey functionality that could be enabled when in the first screen position may be "configure" and "remove" while "add" is disabled; when in the second position, the enabled hotkey functionality can be "add" while "configure" and "remove" are disabled. Generally, the states of hotkeys can include normal (for enabled actions or functions), disabled (when an action or function is temporarily disabled), pressed (when selected by a user to command an action or function to be performed), and unavailable (when no association between the hotkey and an action or function is currently available). While examples of hotkey functions are discussed below, it is to be understood that these are not intended to be exhaustive or limiting examples. One way to realize this state-dependent behavior is sketched below.
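A minimal sketch of state-dependent hotkey resolution follows, assuming bindings keyed by hotkey and screen focus; the enum values and the "configure"/"add" function names mirror the example above, while the class and method names are illustrative assumptions:

```java
// Hedged sketch: resolving a hotkey's current function from the screen
// focus, as in the "configure"/"remove" vs. "add" example above.
import java.util.HashMap;
import java.util.Map;

public class HotkeyResolver {

    public enum Hotkey { FIRST, SECOND, THIRD, FOURTH }
    public enum Focus { INSIDE_ICON, OUTSIDE_ICON }

    private final Map<String, String> bindings = new HashMap<>();

    public void bind(Hotkey key, Focus focus, String function) {
        bindings.put(key + "/" + focus, function);
    }

    /** Returns the bound function, or "unavailable" if none is assigned. */
    public String resolve(Hotkey key, Focus focus) {
        return bindings.getOrDefault(key + "/" + focus, "unavailable");
    }

    public static void main(String[] args) {
        HotkeyResolver r = new HotkeyResolver();
        r.bind(Hotkey.FIRST, Focus.INSIDE_ICON, "configure");
        r.bind(Hotkey.FIRST, Focus.OUTSIDE_ICON, "add");
        System.out.println(r.resolve(Hotkey.FIRST, Focus.INSIDE_ICON));  // configure
        System.out.println(r.resolve(Hotkey.SECOND, Focus.INSIDE_ICON)); // unavailable
    }
}
```

A fuller implementation would also key the bindings on the currently accessed silo and panel, and track the normal, disabled, pressed, and unavailable hotkey states named above.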
The first hotkey 788, when selected in a first system state, can enable the user to assign, change, or edit a name of an input source. It is typically enabled only when the input source of HDMI, Comp/YPbPr (e.g., component video cables), video output, or VGA is in focus. When selected in a second system state, the first hotkey 788 can return the user to a top of a scrollable collection of objects, such as application icons.
The second hotkey 792 may show all or less. In other words, the hotkey 792 can allow the user to show all inputs, including the unconnected/undetected ones, and to hide the unconnected/undetected inputs, e.g., to expand and collapse the silo/input list. Each input source can have one of two states, namely connected/detected and unconnected/undetected. Some input sources, including Live TV, video on demand, media center, and application center, are always connected/detected.
The moveable joystick 900 on the bottom of the remote control 700, when manipulated, can cause a displayed image on the Intelligent TV 100 screen to be displaced a proportional amount. In other words, the displayed image is displaced substantially simultaneously with displacement of the joystick 900 within the joystick aperture 904 in the bottom housing 712 of the remote control. As shown in FIGS. 9B-C, the joystick 900 moves or slides between forward and reverse positions. Releasing the joystick 900 causes the joystick 900 to return to the center position of FIG. 9A, and the window to move or slide upwardly (when the joystick is released from the joystick position of FIG. 9B) or downwardly (when the joystick is released from the joystick position of FIG. 9C) until it disappears from view, as shown in FIG. 11A. The effect on the screen of the Intelligent TV 100 is shown in FIGS. 11A-C. In FIG. 11A, video content, such as TV programming, a video, a movie, and the like, is being displayed by the front surface of the screen 212. In FIG. 11B, the joystick 900 is moved or slid to the upper position of FIG. 9B, and a drop down window or panel 1100 moves or slides down (at substantially the same rate as the joystick 900 movement) at the top of the screen 212. In FIG. 11C, the joystick 900 is moved or slid to the lower position of FIG. 9C, and a drop up window or panel 1100 moves or slides up (at substantially the same rate as the joystick 900 movement) at the bottom of the screen 212. The window 1100 partially covers the video content appearing on the remainder of the screen 212 and/or causes a portion of the screen 212 displaying video content to move and/or compress up or down by the height of the window 1100.
The window 1100 can include one or more of information (which is typically information related to a currently displayed image and/or content, e.g., a panel navigation bar, detailed information (e.g., title, date/time, audio/visual indicator, rating, and genre), and a hotkey bar (defining current functional associations of hotkeys)), browse requests, and/or search requests. Commonly, the window 1100 includes suitable information about the content (such as the name, duration, and/or remaining viewing duration of the content), settings information, TV or system control information, application (activation) icons (such as for pre-installed and/or downloaded applications such as the application center, media center, and Web browser), and/or information about input source(s). When the joystick 900 is in either the forward or reverse position, the user can select an actuator on the front of the remote control, such as the OK button 764, and be taken, by displayed images on the screen 212, to another location in the user interface, such as a desktop. This process can be done in a nonintrusive manner and without affecting the flow of content that is pushed up or down. The joystick 900 could be moved, additionally or differently, from side-to-side to cause the window to appear at the left or right edge of the screen 212.
An alternative actuator configuration is shown in FIG. 10. The actuators are substantially the same as those of FIGS. 7-9, except that the social network button 1000, when selected, can automatically select content and publish, via a social network service or other social media, the content to a social network or online community. User or viewer comments and/or other messages can be included in the outbound message. For example, all or one or more frames or portions of media content (such as a video, music, a photograph, a picture, or text) can be provided automatically to a predetermined or selected group of people via Linked-In™, Myspace™, Twitter™, YouTube™, DailyMotion™, Facebook™, Google+™, or Second Life™. The user, upon activating the button 1000, could, in response, select a social forum or media upon which the selected content (which is the content displayed to the user when the social network button 1000 is activated) is to be posted and/or a predetermined group within that social media to which the content is to be posted. Alternatively, these selections could be preconfigured or preselected by the user.
The social network button can also be used to "turn up" or "turn down" a social volume visualization. The Intelligent TV 100 can dynamically create a visualization of aggregated connections (and inbound and/or outbound messages) from a variety of social networks. The aggregation (and inbound and outbound messages) can be depicted graphically on the screen as a volume of connections to influence the viewer user. With a social volume visualization, selected contents of each linked social network profile of a social contact (and inbound and/or outbound messages from or to the linked social network contact, and/or current activity of the social contact, such as watching the same programming or content the viewer is currently watching) can be presented in a separate tile (or visually displayed object). The size of the tile can be related to any number of criteria, including a relationship of the linked social contact (e.g., a relative degree of importance or type of relationship can determine the relative size of the tile), a degree of influence of the linked social contact on the current viewer, a geographic proximity of the linked social contact to the current viewer, a degree to which the currently provided media content is of interest to both the viewer and the linked social contact (e.g., both parties enjoy war movies, murder mysteries, musicals, comedies, and the like), an assigned ranking of the linked viewer by the viewer, a type of social network linking the viewer with the linked social contact, a current activity of the social network contact (e.g., currently watching the same content that the viewer is currently watching), a current online or offline status of the linked social contact, and a social network grouping type or category to which both the viewer and the linked social contact belong (e.g., work contact, best friend, family member, etc.).
The viewer can designate a portion of the screen to depict the social network aggregation. By turning the social volume up (+) or down (−), the viewer can increase the size and/or numbers of linked contact tiles provided to the viewer. In other words, by increasing the social volume, the viewer can view, access, and/or push more social content from those of his or her social networks associated with him or her in a memory of the Intelligent TV. By decreasing the social volume, the viewer can view, access, and/or push less social content from his or her associated social networks. By selecting the mute button 724, the viewer can stop or pause any interactivity with his or her associated social networks (e.g., inbound or outbound messages). Social volume and/or mute can be separated into two (or more) volume settings for outbound and inbound social network activity. By way of illustration, a first volume setting, control, and/or button can control the volume for outbound social network activity (e.g., outbound social messages) while a second (different) volume setting, control, and/or button can control the volume for inbound social network activity (e.g., inbound social messages). By way of further illustration, a first mute setting, control, and/or button can stop or pause outbound social network activity (e.g., outbound social messages) while a second (different) mute setting, control, and/or button can stop or pause inbound social network activity (e.g., inbound social messages).
A functional block diagram of the remote control is shown in FIG. 12. The remote control 700 includes a controller 1208 to control and supervise remote control operations, an optional wireless (RF) transceiver 1224 and antenna 1244 to send and receive wireless signals to and from the Intelligent TV 100 and other external components, an optional infrared emitter 1228 to emit infrared signals to the Intelligent TV 100, an optional light emitting diode or LED driver 1232 to control LED operation to provide video-enabled feedback to the user, actuators 1220 (including the various buttons and other actuators discussed above in connection with FIGS. 7 and 10), and the joystick 900, all interconnected via a bus 1248. An onboard power source 1200 and power management module 1204 provide power to each of these components via power circuitry 1240. The infrared emitter 1228 and a receiver (not shown) on the Intelligent TV system 100 can be used to determine a displayed object illuminated by the infrared signal and therefore adjust the displayed image, for example to indicate a focus of the user (e.g., illuminate a displayed object or show cursor position relative to displayed objects on the screen) and to determine and activate a desired command of the user. This can be done by tracking a position of the remote control in relation to infrared tracking reference points (e.g., a sensor bar or infrared LEDs) positioned on or adjacent to the screen of the Intelligent TV 100. Motion tracking can further be augmented using position information received from a multi-axis gyroscope and/or accelerometer on board the remote control (not shown).
As shown in FIG. 14, the Intelligent TV 100 may use one or more connections 104 to media sources to provide media and applications 150 to a user. A media source may be any type of device 168 and/or network site 132 (including internet sites and/or cable providers) that can contain media. For example, the media sources may include, but are not limited to, a video server, an audio server, a DVR, an external cable box 168g, a social media site, a data server, a voice mail server, a source marked by the user, a content provider, an internet site, a CD player, a DVD player 168a, a Blu-ray player 168b, a cellular telephone, a smart phone 168d, a personal digital assistant, a notebook, an audio player, a document server, a PC 168f, a Really Simple Syndication (RSS) feed, a social media site, a USB device, a disk drive, memory, a portable digital media device 168c, a tablet device 168e, an email server, an instant messaging device, a Tweet service, and/or the like. The media 150 can be any type of media, such as videos, photos, music, social media (e.g., a social media site), data files, recordings, video calls, audio calls, text conversations, text files (e.g., books, emails, letters, etc.), and the like.
Each media source may contain media data in a specific format (e.g., DVD, Blu-ray, and other digital or analog formats). Further, media from live feeds (e.g., from over-the-air broadcasts, cable or satellite feeds, or Internet feeds) are media data in a live format specific to the type of feed. Further, media data and feeds from the various media sources may also include metadata information embedded with these data (e.g., closed captions, subtitles, and other information). In one implementation, a plurality of media source plug-ins 652a-n are configured to receive media data and information for one or more of these media sources in a specific format.
The Intelligent TV 100 may use a connection 104 to a communication network 132, including the Internet, to access a digital media library 148 and/or to provide media 150 to a user. A variety of media data is distributed over the Internet for equipment or computers that are directly connected to the Internet. Media data distributed over the Internet usually includes more detailed information regarding TV programming than the embedded program guide information in content feeds (e.g., detailed descriptions of programming, reviews of programming, schedules, and future programming). Further, media data information distributed over the Internet may also contain non-text content such as preview images, videos, and sounds. Media data taken from an internet source may contain more detailed information but may require parsing to organize the relevant information within data management 600. Using connections 104, the Intelligent TV 100 can also connect a user to any available external media provider such as iTunes, Netflix, YouTube, Pandora, Amazon Instant Video, Hulu Plus, the Apple App Store, Hisense, Google Play, the Amazon Appstore, Comcast, ESPN, Sirius XM satellite radio, Barnes and Noble, public libraries, and the like. In one implementation, one or more media source plug-ins 652 and/or VOD source plug-ins 644 and EPG source plug-ins 648 may convert or translate the received media data into a consistent data model for data management 600 (e.g., media data model 612), for consistency within data management 600 and other reasons.
The Intelligent TV 100 may also use connections 104 to interact with one or more other electronic devices 168 including, but not limited to, DVD players 168a, Blu-ray players 168b, portable digital media devices 168c, smart phones 168d, tablet devices 168e, personal computers 168f, external cable boxes 168g, satellite dishes 168l, a digital video recorder (DVR) 168n, a compact disc (CD) player 168o, and other USB, LAN, Bluetooth™, or HDMI compliant devices, and/or other wireless devices, to provide media 150 to a user.
Further, the Intelligent TV 100 may be configured to automatically log a recognized user into one or more media providers. As discussed above in connection with FIGS. 2A-2D, the Intelligent TV 100 may recognize individual users through the use of the microphone 220, gesture capture regions 224, 228, image capture devices 232, and the range finding device 240. The Intelligent TV 100 may also identify a user by smart devices controlled by the user. For example, if a first user operates a first device to control the Intelligent TV 100 and a second user operates a second device to control the Intelligent TV, the Intelligent TV 100 may differentiate between the first and second users and identify the users based on the devices they use. Users may also log in, be required to log in, or otherwise sign in to a user interface of the Intelligent TV 100 to confirm their identity. When the identity of a user is established, the Intelligent TV 100 may record and retrieve metadata on each user and may provide or recommend media to the individual users. The Intelligent TV 100 may be configured to store user credentials and automatically log identified users into external media providers that require user credentials to obtain access. For example, after identifying a second user, the Intelligent TV 100 may use log-in credentials stored in memory to log the second user into iTunes, Hulu Plus, public libraries, or other password protected sites to identify the user and provide the second user access to media stored at these sites. A sketch of this credential-based automatic login appears below.
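The following is a minimal sketch of the automatic login just described, under the assumption of a simple in-memory credential store keyed by user and provider; the class and record names are hypothetical, and a real system would encrypt the store rather than keep plain-text passwords:

```java
// Hedged sketch: once a user is recognized, stored credentials for each
// external media provider are looked up and used to sign the user in.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AutoLoginManager {

    record Credentials(String username, String password) {}

    // Keyed by "userId/providerId"; a real system would encrypt this store.
    private final Map<String, Credentials> store = new HashMap<>();

    public void save(String userId, String provider, Credentials c) {
        store.put(userId + "/" + provider, c);
    }

    /** Attempts to log the recognized user into every provider with saved credentials. */
    public void loginRecognizedUser(String userId, Iterable<String> providers) {
        for (String provider : providers) {
            Credentials c = store.get(userId + "/" + provider);
            if (c != null) {
                System.out.println("logging " + userId + " into " + provider
                        + " as " + c.username());
            }
        }
    }

    public static void main(String[] args) {
        AutoLoginManager m = new AutoLoginManager();
        m.save("user2", "iTunes", new Credentials("user2@example.com", "secret"));
        m.loginRecognizedUser("user2", List.of("iTunes", "Hulu Plus"));
    }
}
```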
An embodiment of a media data service 1500 is shown in FIG. 15. The media data service 1500 provides personalized and/or customized metadata for media for applications 416 and the data service 432 of the Intelligent TV. The media data service 1500 is one of the internal content providers 616. A user interface 468 may recommend media to a user based on the personalized metadata provided by the media data service 1500. The media data service 1500 can include one or more code components that may be associated with different types of data. The code components are executable and are a part of the data service 432. The media data service 1500 may associate and access the code components as needed. The media data service 1500 may work with the VOD subservice 620, the EPG subservice 624, and/or the media subservice 628. For example, the media data service 1500 may use metadata regarding media collected by the media subservice 628 or the VOD subservice 620 in order to generate personalized media metadata.
The media data service 1500 stores metadata in a database 1504 for further access. In one implementation, the metadata is stored in an SQLite database in a dedicated memory 1508. The database 1504 includes an index or data model and indexed metadata. The data model defines what information is to be stored and how it will be stored by the data service. Thus, the data model can be configured to accommodate a variety of data sources without limiting where the information originates or how the information will be received or managed by the Intelligent TV. The data model thus provides the ability to translate or transform the information from one form to another for use by the Intelligent TV.
The database 1504 may be organized into one or more tables, such as a media table 1512, a media sources table 1516, and a personal media table 1520, which are described in more detail in FIGS. 16-18. In an implementation, the database 1504 may also include a media data application programming interface (API) 1524. The API 1524 may provide access to a view created by joining the media table 1512 metadata with the media source table 1516 metadata (inner join) and the personal media table 1520 metadata (outer join), as sketched below. The API 1524 may include information about disconnected media sources.
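The view just described can be expressed as a single SQLite statement. The table and column names below are assumptions for illustration (the disclosure does not specify the schema); only the join structure (inner join to the media source table, left outer join to the personal media table) follows the text above:

```java
// Hedged sketch: the media data API view over the three tables, built as
// an SQL string that would be executed against the SQLite database in
// dedicated memory 1508. Schema names are illustrative assumptions.
public class MediaDataApiView {

    static final String CREATE_VIEW_SQL =
            "CREATE VIEW IF NOT EXISTS media_data_view AS " +
            "SELECT m.item_id, m.name, m.media_type, s.source_id, s.connected, " +
            "       p.user_id, p.times_watched " +
            "FROM media_table m " +
            "INNER JOIN media_source_table s ON m.source_id = s.source_id " +
            "LEFT OUTER JOIN personal_media_table p ON m.item_id = p.item_id";

    public static void main(String[] args) {
        System.out.println(CREATE_VIEW_SQL);
    }
}
```

The outer join lets the view return media items even when no personal metadata exists for them, while the inner join to the media source table carries the connection state, consistent with the API including information about disconnected media sources.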
The user may set up a personalized profile or preference for types, genres, or other preferred characteristics of media data. The user may access and set up the profiles in a corresponding application. Alternatively, the Intelligent TV 100 may contain pre-defined profiles and/or may automatically build a profile for a user by progressively analyzing the user's past viewing preferences.
Amedia browser1528 is a content provider616 and maintains a list of connected or accessible media sources. Themedia browser1528 may provide one or more program interfaces for media sources and is configured to provide a view of the media sources in real-time such as amedia browser list1532 view, amedia browser item1536 view, and amedia browser source1540 view. Themedia browser1528 may be accessible by the user directly or from other applications. In one implementation,media browser1528 may run in the background or may be periodically run to update the list in real-time. This allows theIntelligent TV100 to have a list of connected media sources available without additional wait time to poll device information when the information is needed.
The media browser 1528 may work with the VOD subservice 620, the media subservice 628, and/or other subservices to gather information pertaining to the media sources and/or the content available. For example, the media subservice 628 may receive information pertaining to connected devices 168, and whether the connected devices 168 have accessible media data, via one or more respective media source plugins 652. In one implementation, the media browser 1528 does not require any permanent storage memory because the media browser is configured to request and/or collect real-time information from the media sources.
The media browser list 1532 provides access to a virtual data view of media found on connected or accessible media sources. The media browser list 1532 view may comprise multiple media items and may provide basic metadata about each, such as a media name and a media type. In one embodiment, the media browser list 1532 view may record only a location for each media item. The media browser item 1536 view may provide a virtual data view for a single media item found on a connected or accessible media source. The media browser item 1536 view may provide detailed metadata for media items. In an implementation, the media browser item 1536 view may return metadata for only one media item. The media browser source 1540 view may provide a virtual data view representing connected or accessible media sources. Alternatively, in an implementation, the media browser source 1540 view may provide information about media sources that were connected but are currently inaccessible. A media browser API 1544 may provide access to data views such as a list of media items through the media browser list 1532, detailed metadata for a media item through the media browser item 1536, and a list of media sources through the media browser source 1540. In an implementation, the data fields and a uniform resource identifier for the media browser API 1544 may be defined by contracts with individual content providers.
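As one illustrative sketch, the three virtual data views might be surfaced through a Java interface along the following lines; the type names, fields, and method signatures are assumptions for illustration only, not the actual contracts with content providers.

import java.util.List;

public interface MediaBrowser {
    /** Media browser list 1532 view: basic metadata for media on connected sources. */
    List<MediaBrowserListEntry> list();

    /** Media browser item 1536 view: detailed metadata for a single media item. */
    MediaBrowserItem item(String mediaSourceId, String mediaItemId);

    /** Media browser source 1540 view: connected (or previously connected) sources. */
    List<MediaBrowserSource> sources();

    record MediaBrowserListEntry(String mediaSourceId, String mediaItemId,
                                 String name, String mediaType) {}

    record MediaBrowserItem(String mediaSourceId, String mediaItemId,
                            String name, String mediaType, String detailedMetadata) {}

    record MediaBrowserSource(String mediaSourceId, String name, boolean connected) {}
}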
A media scanner 1550 is configured to provide metadata from the media browser 1528 to the database 1504 and tables 1512, 1516, and 1520. In one implementation, the media scanner 1550 rescans the information periodically and updates the database 1504 and tables 1512, 1516, and 1520. The media scanner 1550 may provide metadata for applications, such as the media center 460, to display the accessible media to the user. In one implementation, the media scanner 1550 may also work with the EPG subservice 624 to further populate the database 1504 with updated EPG information. In another implementation, the media scanner 1550 may work with the VOD subservice 620 to populate the database 1504 with updated VOD information. The media scanner is described in more detail below in conjunction with FIG. 20.
An embodiment of a data structure 1600 for a media table 1512 is illustrated in FIG. 16. The data structure 1600 comprises a plurality of data fields to store metadata about each media item found by the media scanner 1550. Examples of such data fields include, without limitation, a media source identifier, a media item identifier, a media source type, and any other metadata that may be associated with a media item, including metadata entered by a user. The data structure 1600 may be comprised of aggregated media metadata and can include information and metadata for media sources and for a plurality of media items located on each media source. The media table 1512 data structure 1600 may have one or more rows, and each row can be associated with a different media item. The order of the rows may change, and rows may be removed when media items are deleted and/or removed from the media source. There may be more or fewer rows than those shown in FIG. 16, as represented by ellipsis 1604. Thus, each media item may have a different set of data associated therewith.
Each row may include one or more columns representing items of metadata that are associated with the media item. There may be more or fewer columns, as represented by ellipsis 1608. The order of the columns may change. In one implementation, a user may add new columns to record additional user-entered metadata for a media item through a user interface of the Intelligent TV.
Each media source may be assigned a unique media source identifier 1612 by the data structure 1600. The first column may be the media source identifier 1612, which can include any type of identifier, such as a numeric, alphanumeric, globally unique identifier (GUID), or other type of identifier that uniquely identifies the media source in contrast to all other media sources connected to or accessible to the television 100. In an implementation, every item may be associated with a numeric identification which is unique within the associated media source. In another implementation, the unique identification may include any combination of numbers and/or letters. All media items on a particular media source will have the same media source identifier. For example, rows 1616a, 1616b have a media source identifier 1612 of "01A." Row 1616zz represents a different media source and has a media source identifier 1612 of "09Z."
Metadata associated with each media item may be assigned a media item identifier 1620 which is unique within the associated media source. The media item identifier 1620 may be used to associate a media item with the data stored in a media source. For example, row 1616a may be associated with a media item "524," which is a trailer for the movie "Ender's Game" located on media source "01A." Row 1616b is a different media item located on the "01A" media source. The combination of the media source identifier 1612 and the media item identifier 1620 enables each media item to have a unique identifier. Identical media items may be identified by the media scanner 1550 but can be uniquely distinguished by the combination of a media source identifier 1612 and a media item identifier 1620. In the example of FIG. 16, rows 1616a and 1616n represent two different media sources for an identical media item called "Ender's Game trailer," which has been assigned a media item identifier 1620 of "524." This example illustrates that one media item may have two different unique identifiers, each comprised of a media source identifier 1612 and a media item identifier 1620.
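For illustration, the uniqueness property described above can be captured with a composite key, as in the following short sketch; the second source identifier below is illustrative only.

import java.util.HashMap;
import java.util.Map;

public class CompositeKeyExample {
    // The pair (media source identifier 1612, media item identifier 1620)
    // uniquely identifies a media item across all sources.
    record MediaKey(String mediaSourceId, String mediaItemId) {}

    public static void main(String[] args) {
        Map<MediaKey, String> titles = new HashMap<>();
        // The same "Ender's Game trailer" (item "524") found on two sources:
        titles.put(new MediaKey("01A", "524"), "Ender's Game trailer");
        titles.put(new MediaKey("09Z", "524"), "Ender's Game trailer");
        System.out.println(titles.size()); // 2: one entry per (source, item) pair
    }
}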
Metadata for each media item may include a variety of information in a variety of formats. For example, the metadata may include information such as, but not limited to, a title, a length, a release date, an author, a composer, names of one or more actors and cast members, a rating, artwork associated with an album or video, a location of the media item, a genre, a director, a poster (the person who posted information on a blog site), a source of an audio recording, a person speaking on an audio recording, a caption, a caller name, and/or the like. The metadata can be in various formats, such as Extensible Markup Language (XML), Hypertext Markup Language (HTML), text files, and/or the like. A media source type field can include at least some type of identifier indicating what type of media source is associated with a particular media source. For example, the media source type can include web service, Media Center, VOD, Input Source, etc. The metadata may include a time stamp, such as seconds since the epoch (Unix time). The different types of metadata may be associated with one or more of the columns in the data structure 1600.
An embodiment of a data structure 1700 for a personal media table 1520 is illustrated in FIG. 17. The data structure 1700 for the personal media table 1520 may be used to organize personal metadata for one or more users. The data structure 1700 may have a plurality of rows, and each row can be associated with a different media item that has at least been viewed and/or tagged as a favorite. The data structure 1700 may have a media source identifier 1612 column and a media item identifier 1620 column to uniquely identify the individual media item in each row. In an implementation, the data structure 1700 for the personal media table 1520 may have only two data fields to store metadata about each media item tagged as a favorite by a user or viewed by a user. A favorite column 1704 may record whether or not a media item has been tagged as a favorite. A viewed column 1708 may record whether or not a media item has been viewed by a user. In the example of FIG. 17, the media item in row 1712a has been viewed and is tagged as a favorite.
In an embodiment, a data structure 1700 for the personal media table 1520 may contain separate records for each identified user, for guests, and for all users. In this embodiment, the data structure may have a column with a unique identifier for each identified user and guest, and an identifier to record metadata for all users. In another embodiment, a unique data structure for a personal media table may be created for each identified user, for unidentified users, and for one or more guest users.
In still another embodiment, a data structure 1700 for the personal media table 1520 may also record other personal metadata for one or more users. In this embodiment, the data structure for the personal media table 1520 may record metadata such as, for example: media in-progress information (for example, a location where the user stopped or paused the media item without finishing it); a number of times watched, viewed, or listened to; a date added by a user; a rating assigned by a user; and other similar information. One or more embodiments of the data structure 1700 may be combined.
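As a further non-limiting sketch, the per-user embodiment described above might record a favorite tag with an upsert along the following lines. The sketch assumes a personal media table variant with a user_id column, a composite primary key over (user_id, media_source_id, media_item_id), and an SQLite version (3.24 or later) that supports the ON CONFLICT upsert clause.

import java.sql.Connection;
import java.sql.PreparedStatement;

public class PersonalMediaUpdate {
    public static void tagFavorite(Connection db, String userId,
                                   String sourceId, String itemId) throws Exception {
        // Insert the personal record if absent; otherwise set the favorite flag.
        try (PreparedStatement ps = db.prepareStatement(
                "INSERT INTO personal_media (user_id, media_source_id, media_item_id, favorite) "
              + "VALUES (?, ?, ?, 1) "
              + "ON CONFLICT (user_id, media_source_id, media_item_id) "
              + "DO UPDATE SET favorite = 1")) {
            ps.setString(1, userId);
            ps.setString(2, sourceId);
            ps.setString(3, itemId);
            ps.executeUpdate();
        }
    }
}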
An exemplary data structure 1800 for a media sources table 1516, illustrated in FIG. 18, may contain records for all known media sources currently connected or previously connected to the Intelligent TV 100 and for all media sources currently or previously accessible. The data structure 1800 may have one or more rows, and each row can be associated with a different media source. The data structure 1800 may have one or more columns for a plurality of types of metadata associated with each media source. There may be more or fewer rows and/or columns, as represented by ellipses 1604, 1608. The order of the rows and columns may change, and rows may be removed when a media source is deleted or becomes inaccessible. The metadata stored by the data structure 1800 may include, but is not limited to, a media source identifier 1612, a source type, a source name, a status (such as connected, disconnected, accessible, or inaccessible), a number of files stored, a size, a date first connected, a date last accessed, a date last scanned, a connection location (such as "USB port #" or "HDMI port #"), a count of times each file on the source was accessed, and/or a date each file on the source was accessed.
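Similarly, as an illustration, in the embodiment that retains records for previously connected sources, the status field might be maintained as follows when a source disconnects; the column names are assumptions based on the fields listed above.

import java.sql.Connection;
import java.sql.PreparedStatement;

public class MediaSourceStatus {
    public static void markDisconnected(Connection db, String sourceId) throws Exception {
        // Keep the row so the table still answers "previously connected" queries,
        // but flag the source and record when it was last seen (seconds since the epoch).
        try (PreparedStatement ps = db.prepareStatement(
                "UPDATE media_sources SET status = 'disconnected', "
              + "date_last_accessed = strftime('%s','now') "
              + "WHERE media_source_id = ?")) {
            ps.setString(1, sourceId);
            ps.executeUpdate();
        }
    }
}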
Some media sources may provide more metadata than other sources. In some embodiments, the uniform resource identifiers and column names for the data structure 1600 for the media table 1512 and the data structure 1800 for the media sources table 1516 may be defined by contracts with media providers.
A user interface 1900, illustrated by FIG. 19, displayed by the media center application 460 may organize and display metadata from the media database. The user interface 1900 may be displayed, for example, after a user executes a search for media by entering the phrase "Ender's Game" in a search field 1904. The Intelligent TV 100 receives the user's search request through the user interface application 468 and the user interface layer 1300, which may send the search request to the content aggregation service 1332. The content aggregation service 1332 may return search results from the database 1340 and metadata from the database 1504. In the illustrated example, the user interface 1900 displays seven media items 1908a-1908g; however, the number of media items displayed can be any number, including zero media items. The media items can be arranged in various orders, such as alphabetical order, recently accessed, media type, tagged as a favorite, and/or the like.
In this example, media item 1908a is an audio book that was found on Dana's Device 3, which is connected to the Intelligent TV 100 by a Wi-Fi 112 connection 104. If the user selects media item 1908a, the panel manager may dismiss the user interface 1900 and the Intelligent TV 100 may begin playing the selected audio book. Media item 1908b is a video file that was found on an internet site. Media item 1908c is the same video as media item 1908b but is located on a different device. If the user selects either media item 1908b or 1908c, the panel manager may dismiss the user interface 1900, connect to the internet site, and display the video to the user on the screen of the Intelligent TV 100. Media item 1908e is an eBook that was located at a public library and may be checked out by the user. If the user selects media item 1908e, the user will be connected to the internet site for the library, where the user can reserve, check out, and/or download the eBook. The search also returned media item 1908f, which is located on the media source Tom's NOOK but is not available. Even though the media source for media item 1908f is not connected to the Intelligent TV, media item 1908f may be selected, and the Intelligent TV 100 may provide more metadata about media item 1908f to the user, such as when the media item was last connected, how the media item was connected to the Intelligent TV, a snippet view of the content, and similar information. Media item 1908f may not be played or displayed for the user. Finally, media item 1908g is a recommendation of a work by the same author, displayed in the user interface 1900 based on the search requested by the user. If the user selects media item 1908g, the user may be directed to the Barnes and Noble internet site to purchase the paperback. In an implementation, a recommendation may be provided based on metadata associated with the user. For example, if the user had tagged the actor "Harrison Ford" as a favorite and conducted a search for science fiction media items, the Intelligent TV 100 may recommend the trailer for the movie "Ender's Game" 1908b, 1908c because the actor Harrison Ford is in the movie "Ender's Game."
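By way of example, a search such as the one in FIG. 19 might reduce to a query over the stored metadata along these lines, reusing the illustrative media_view from the schema sketch above; note that items on disconnected sources are still returned and merely flagged as unavailable, consistent with media item 1908f.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class MediaSearch {
    public static void search(Connection db, String phrase) throws Exception {
        try (PreparedStatement ps = db.prepareStatement(
                "SELECT title, media_type, source_name, status "
              + "FROM media_view WHERE title LIKE ? ORDER BY title")) {
            ps.setString(1, "%" + phrase + "%");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    boolean available = !"disconnected".equals(rs.getString("status"));
                    System.out.printf("%s (%s) on %s%s%n",
                            rs.getString("title"), rs.getString("media_type"),
                            rs.getString("source_name"),
                            available ? "" : " [not available]");
                }
            }
        }
    }
}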
FIG. 20 is a process diagram of an embodiment of a method 2000 that the media scanner 1550 may perform to provide metadata to the database 1504. Illustratively, the elements described herein may be stored-program-controlled entities, and a computer or processor 364 can perform the method 2000 of FIG. 20 and the processes described herein by executing program instructions stored in a tangible computer readable storage medium, such as a memory 308 or data storage 312. Although the method 2000 is shown in a specific order, one of skill in the art would recognize that the method of FIG. 20 may be implemented in a different order and/or in a multi-threaded environment. Moreover, various steps may be omitted or added based on the implementation. Hereinafter, the method 2000 shall be explained with reference to the systems, components, modules, software, etc. described in conjunction with FIGS. 1A-19.
The method 2000 starts 2001 when a media source is connected 2002 to the Intelligent TV 100 or when a media source becomes accessible, such as by logging into an internet media source. The kernel 404 and device drivers 420 are operable to detect when a media source is connected to or disconnected from the Intelligent TV. In an embodiment, a media source may be tagged as a guest device by a user. The user interface component 448 can determine 2004 if a user has tagged a device as a guest device. Tagging a media source as a guest device may cause the media scanner 1550 to mark metadata associated with the guest device in the metadata database 1504 as temporary 2008. Temporary metadata of a guest device will be removed from the database 1504 when the guest media source is disconnected from the Intelligent TV 100.
After determining whether a media source is a guest device, the media scanner 1550 may start scanning 2012 the media source. The media scanner 1550 may retrieve metadata in two passes. On a first pass 2012, the media scanner 1550 may retrieve 2016 basic metadata from the media browser list 1532 view for connected media items. During the first pass, the media scanner 1550 may identify and create a record of media items that need to be scanned further 2020. Media items that need a further scan are marked with a need scan field set to true.
After completing the first pass, the media scanner 1550 may start a second pass 2024. During the second pass 2024, the media scanner 1550 may retrieve detailed metadata from the media browser item 1536 view and update 2028 the database 1504. The media scanner 1550 may update an entry in the database 1504 to change a directory entry to a photo album entry if one or more pictures are identified within the directory. A directory entry may also be changed to a music library or playlist library if music files are found in the directory. When audio files are identified, the media scanner 1550 is operable to recognize audio books among them and can update an entry in the database 1504 to identify those media items as audio books. The media scanner 1550 may create new entries in the database 1504 and may remove entries for media items or directories that have been removed since the last scan. The media scanner 1550 may also scan the VOD database 632, the EPG database 636, the media database 640, and the context index database 1340. After the second pass is completed, the media scanner 1550 may determine 2032 whether a rescan is required or a periodic rescan is scheduled. If a rescan is required or a periodic rescan is scheduled, the media scanner can start a first pass 2012 again. If no media sources require a rescan and no periodic rescan is scheduled, the process may end 2034.
In an implementation, the first pass may be paused before it is completed and the second pass may begin, for example, when a user requests detailed information about a media item before the media scanner 1550 has completed a first pass of the entire media source. In another embodiment, the first pass and the second pass may run simultaneously. In still another embodiment, the second pass will not start until the first pass has been completed.
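As a non-limiting illustration of the sequential variant, the two-pass scan of FIG. 20 might look as follows in Java, reusing the illustrative MediaBrowser interface sketched earlier; the MetadataStore type and its methods are hypothetical stand-ins for writes to the database 1504.

import java.util.ArrayList;
import java.util.List;

public class MediaScannerSketch {
    public void scan(MediaBrowser browser, MetadataStore store,
                     String sourceId, boolean guestDevice) {
        if (guestDevice) {
            store.markTemporary(sourceId); // removed when the guest source disconnects
        }
        List<MediaBrowser.MediaBrowserListEntry> needScan = new ArrayList<>();
        // First pass 2012/2016: basic metadata from the media browser list 1532 view.
        for (MediaBrowser.MediaBrowserListEntry e : browser.list()) {
            store.saveBasic(e);
            if (store.needsDetail(e)) {
                needScan.add(e); // the "need scan" field set to true (2020)
            }
        }
        // Second pass 2024/2028: detailed metadata from the media browser item 1536 view.
        for (MediaBrowser.MediaBrowserListEntry e : needScan) {
            store.saveDetail(browser.item(e.mediaSourceId(), e.mediaItemId()));
        }
    }

    /** Hypothetical persistence interface over the database 1504. */
    public interface MetadataStore {
        void markTemporary(String sourceId);
        void saveBasic(MediaBrowser.MediaBrowserListEntry entry);
        boolean needsDetail(MediaBrowser.MediaBrowserListEntry entry);
        void saveDetail(MediaBrowser.MediaBrowserItem item);
    }
}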
FIG. 21 shows a flow diagram of an embodiment of a method 2100 of processing metadata received from a media source. Illustratively, the elements described herein may be stored-program-controlled entities, and a computer or processor 364 can perform the method 2100 of FIG. 21 and the processes described herein by executing program instructions stored in a tangible computer readable storage medium, such as a memory 308 or data storage 312. Although the method 2100 is shown in a specific order, one of skill in the art would recognize that the method of FIG. 21 may be implemented in a different order and/or in a multi-threaded environment. Moreover, various steps may be omitted or added based on the implementation. Hereinafter, the method 2100 shall be explained with reference to the systems, components, modules, software, etc. described in conjunction with FIGS. 1A-19.
The process 2100 starts 2101 when a media source is connected 2102 to the Intelligent TV 100 or when a media source becomes accessible, such as by logging into an internet media source. The kernel 404 and device drivers 420 are operable to detect when a media source is connected to or disconnected from the Intelligent TV. A media source plugin 452 may be loaded 2106 to communicate with and receive information from the media source. Data management 600 may have a plurality of media source plugins 452 to communicate with and obtain media information from connected devices 168 or other media sources accessible to the Intelligent TV 100, as illustrated in FIG. 14.
The Intelligent TV 100 may then use a media source plugin 452 to communicate with and/or receive metadata from the media source 2110. The media source plugin 452 may access the metadata directly through an API provided by the media source. Media source plugins may also be developed by third parties to parse content and/or metadata provided by a media source without accessing the metadata through an API provided by the media source.
The media source plugin 452 may then convert the received metadata from the media source into one or more data model formats 2114. The media data service database 1504 may have a number of data models for the internal storage and management of the received metadata. The metadata received from the various media source plugins 452 may be converted and/or translated to the specific format handled by the respective data model. Exemplary data models include a media table data model, a media source data model, and a personal media data model. The data models provide uniform formats for the subservices, such as the VOD subservice 620, the EPG subservice 624, and the media subservice 628, and/or for the content providers 616 which interface with applications 416. In some embodiments, the conversion 2114 may not be required because the metadata may be processed by data management 600 and stored in memory or in a database without any conversion.
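As a final illustrative sketch, a media source plugin's conversion step 2114 might be expressed as an interface along the following lines; the type names and the choice of a raw string payload are assumptions for illustration only.

public interface MediaSourcePlugin {
    /** Raw metadata as delivered by the source, e.g., an XML or HTML payload. */
    String fetchRawMetadata(String mediaItemId);

    /** Translate the source-specific format into the uniform media table data model. */
    MediaTableRecord toDataModel(String rawMetadata);

    record MediaTableRecord(String mediaSourceId, String mediaItemId,
                            String title, String mediaType) {}
}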
Next, the received metadata may be processed 2118 by the media subservice 628 for use by the content providers 616 and/or applications 416. For example, the media data service 1500 may require EPG information from the EPG subservice 624 as well as media information from the VOD subservice 620 or the media subservice 628 to provide personalized media metadata to relevant applications. Therefore, the media data service 1500 may process the received metadata in order to generate the personalized media metadata. Further, in one implementation, the subservices may need to allow the media data service 1500 access to databases 632, 636, and 640 in order to allow the media data service 1500 to store and access the personalized media metadata. In other implementations, the media data service 1500 may store the personalized media metadata in a dedicated database.
The media data service 1500 may communicate 2122 with and provide processed metadata to other content providers 616 and/or applications 416. In an embodiment, the content providers 616 are configured to communicate with and access the subservices, responsive to user actions or on a schedule, in order to generate and present the metadata required for an application 416.
FIG. 22 shows a flow diagram of an embodiment of a process 2200 of providing metadata to a content provider 616 and/or application 416 to generate and present metadata to a user in a user interface. Illustratively, the elements described herein may be stored-program-controlled entities, and a computer or processor 364 can perform the process 2200 of FIG. 22 and the processes described herein by executing program instructions stored in a tangible computer readable storage medium, such as a memory 308 or data storage 312. Although the process 2200 is shown in a specific order, one of skill in the art would recognize that the method of FIG. 22 may be implemented in a different order and/or in a multi-threaded environment. Moreover, various steps may be omitted or added based on the implementation. Hereinafter, the process 2200 shall be explained with reference to the systems, components, modules, software, etc. described in conjunction with FIGS. 1A-19.
The process 2200 starts 2201 by accessing the media data service 1500 in response to a user action or a scheduled event 2202. In one implementation, applications 416 may request that the media data service 1500 provide metadata and/or processing of metadata for display to the user. An application may be started by a user action. For example, the media center 460 may request metadata when the user performs a search, as illustrated in FIG. 19 and the accompanying text. Metadata may also be requested from the media data service 1500 when the user activates the application center 464 or Live TV 452, which may require metadata to generate and display a grid of available programming. Therefore, the relevant application 416 will contact the media data service 1500 for the media and/or metadata needed. The media data service 1500 may also provide metadata in response to scheduled events. For example, the media browser 1528 provides a real-time view of media sources and maintains a list of connected media sources. Therefore, the media browser 1528 may be loaded in the background and may run continuously in order to keep the list updated. As such, when applications such as the media center 460 access the list of connected media sources, the media browser 1528 may be able to provide the list in real time without the further delay of polling each connected media source when the application requests such information.
The media data service 1500 may next communicate with and receive relevant metadata 2206 from the corresponding subservices or other content provider modules 616. As discussed with respect to process 2100, the media subservice 628 processes and/or stores the metadata received from media sources. In one implementation, the metadata may be processed and stored as a media data model 612 in storage 640.
The media data service 1500 next organizes 2210 the requested metadata as content according to a pre-defined format. The media data service 1500 may process the metadata received from the media subservices 628. In one implementation, the media data service 1500 is configured to provide the relevant applications 416 with metadata. The media data service may process the received metadata (i.e., in the form of data model 612) and organize such metadata into a pre-defined format for use by the relevant applications 416. For example, the media data service 1500 may organize the metadata received from subservices 620, 624, and 628 to generate the personalized media metadata, and the media data service 1500 may further store the generated metadata in an SQLite database. The relevant application may simply access the database for the personalized media metadata.
The media data service 1500 next provides 2214 the content to video hardware and/or a display, or to other content provider modules and/or applications. In one implementation, the media data service 1500 may access and provide media data directly to the video hardware and/or display via the resource arbitrator 656. For example, the media data service 1500 may provide media content directly to the video hardware and/or display such that the video may be displayed without processing by an application. This may have the benefit of reducing processor-intensive video processing, among other benefits. Media data may also be provided to other relevant content provider modules 616 and/or applications 416.
The exemplary systems and methods of this disclosure have been described in relation to a media data service. However, to avoid unnecessarily obscuring the present disclosure, the preceding description omits a number of known structures and devices. This omission is not to be construed as a limitation of the scope of the claims. Specific details are set forth to provide an understanding of the present disclosure. It should however be appreciated that the present disclosure may be practiced in a variety of ways beyond the specific details set forth herein.
Furthermore, while the exemplary aspects, embodiments, and/or configurations illustrated herein show the various components of the system collocated, certain components of the system can be located remotely, at distant portions of a distributed network, such as a LAN and/or the Internet, or within a dedicated system. Thus, it should be appreciated that the components of the system can be combined into one or more devices, such as a set-top box or television, or collocated on a particular node of a distributed network, such as an analog and/or digital telecommunications network, a packet-switched network, or a circuit-switched network. It will be appreciated from the preceding description, and for reasons of computational efficiency, that the components of the system can be arranged at any location within a distributed network of components without affecting the operation of the system. For example, the various components can be located in a switch such as a PBX and media server, in a gateway, in one or more communications devices, at one or more users' premises, or some combination thereof. Similarly, one or more functional portions of the system could be distributed between a telecommunications device(s) and an associated computing device.
Furthermore, it should be appreciated that the various links connecting the elements can be wired or wireless links, or any combination thereof, or any other known or later developed element(s) that is capable of supplying and/or communicating data to and from the connected elements. These wired or wireless links can also be secure links and may be capable of communicating encrypted information. Transmission media used as links, for example, can be any suitable carrier for electrical signals, including coaxial cables, copper wire and fiber optics, and may take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Also, while the flowcharts have been discussed and illustrated in relation to a particular sequence of events, it should be appreciated that changes, additions, and omissions to this sequence can occur without materially affecting the operation of the disclosed embodiments, configuration, and aspects.
A number of variations and modifications of the disclosure can be used. It would be possible to provide for some features of the disclosure without providing others.
For example, in one alternative embodiment, the media browser may have its own dedicated memory and/or permanent storage. In another alternative embodiment, the media scanner may continuously scan for connected media sources.
In another alternative embodiment, the media scanner may populate the media table in only one pass. In this embodiment, the media scanner may retrieve basic and detailed metadata from the media browser item 1536 view and update the database 1504 in one pass.
In yet another embodiment, the systems and methods of this disclosure can be implemented in conjunction with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element(s), an ASIC or other integrated circuit, a digital signal processor, a hard-wired electronic or logic circuit such as a discrete element circuit, a programmable logic device or gate array such as a PLD, PLA, FPGA, or PAL, any comparable means, or the like. In general, any device(s) or means capable of implementing the methodology illustrated herein can be used to implement the various aspects of this disclosure. Exemplary hardware that can be used for the disclosed embodiments, configurations, and aspects includes computers, handheld devices, telephones (e.g., cellular, Internet enabled, digital, analog, hybrids, and others), and other hardware known in the art. Some of these devices include processors (e.g., a single or multiple microprocessors), memory, nonvolatile storage, input devices, and output devices. Furthermore, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods described herein.
In yet another embodiment, the disclosed methods may be readily implemented in conjunction with software using object or object-oriented software development environments that provide portable source code that can be used on a variety of computer or workstation platforms. Alternatively, the disclosed system may be implemented partially or fully in hardware using standard logic circuits or VLSI design. Whether software or hardware is used to implement the systems in accordance with this disclosure is dependent on the speed and/or efficiency requirements of the system, the particular function, and the particular software or hardware systems or microprocessor or microcomputer systems being utilized.
In yet another embodiment, the disclosed methods may be partially implemented in software that can be stored on a storage medium and executed on a programmed general-purpose computer with the cooperation of a controller and memory, a special purpose computer, a microprocessor, or the like. In these instances, the systems and methods of this disclosure can be implemented as a program embedded on a personal computer, such as an applet, JAVA® or CGI script, as a resource residing on a server or computer workstation, as a routine embedded in a dedicated measurement system, system component, or the like. The system can also be implemented by physically incorporating the system and/or method into a software and/or hardware system.
Although the present disclosure describes components and functions implemented in the aspects, embodiments, and/or configurations with reference to particular standards and protocols, the aspects, embodiments, and/or configurations are not limited to such standards and protocols. Other similar standards and protocols not mentioned herein are in existence and are considered to be included in the present disclosure. Moreover, the standards and protocols mentioned herein and other similar standards and protocols not mentioned herein are periodically superseded by faster or more effective equivalents having essentially the same functions. Such replacement standards and protocols having the same functions are considered equivalents included in the present disclosure.
The present disclosure, in various aspects, embodiments, and/or configurations, includes components, methods, processes, systems, and/or apparatus substantially as depicted and described herein, including various aspects, embodiments, configurations, subcombinations, and/or subsets thereof. Those of skill in the art will understand how to make and use the disclosed aspects, embodiments, and/or configurations after understanding the present disclosure. The present disclosure, in various aspects, embodiments, and/or configurations, includes providing devices and processes in the absence of items not depicted and/or described herein or in various aspects, embodiments, and/or configurations hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease of implementation, and/or reducing cost of implementation.
The foregoing discussion has been presented for purposes of illustration and description. The foregoing is not intended to limit the disclosure to the form or forms disclosed herein. In the foregoing Detailed Description for example, various features of the disclosure are grouped together in one or more aspects, embodiments, and/or configurations for the purpose of streamlining the disclosure. The features of the aspects, embodiments, and/or configurations of the disclosure may be combined in alternate aspects, embodiments, and/or configurations other than those discussed above. This method of disclosure is not to be interpreted as reflecting an intention that the claims require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed aspect, embodiment, and/or configuration. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the disclosure.
Moreover, though the description has included description of one or more aspects, embodiments, and/or configurations and certain variations and modifications, other variations, combinations, and modifications are within the scope of the disclosure, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative aspects, embodiments, and/or configurations to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.