Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sonic Solutions LLC
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/860,351 (US20040220926A1)
Priority claimed from US10/860,350 (US20040220791A1)
Application filed by Individual
Publication of CA2550536A1
G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
G11B27/34—Indicating arrangements
G—PHYSICS
G06—COMPUTING OR CALCULATING; COUNTING
G06F—ELECTRIC DIGITAL DATA PROCESSING
G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
G06F16/43—Querying
G06F16/435—Filtering based on additional data, e.g. user or group profiles
G06F16/437—Administration of user profiles, e.g. generation, initialisation, adaptation, distribution
G—PHYSICS
G06—COMPUTING OR CALCULATING; COUNTING
G06F—ELECTRIC DIGITAL DATA PROCESSING
G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
G06F16/43—Querying
G06F16/438—Presentation of query results
G06F16/4387—Presentation of query results by the use of playlists
G06F16/4393—Multimedia presentations, e.g. slide shows, multimedia albums
G—PHYSICS
G06—COMPUTING OR CALCULATING; COUNTING
G06F—ELECTRIC DIGITAL DATA PROCESSING
G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
G06F16/48—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
G—PHYSICS
G11—INFORMATION STORAGE
G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
G—PHYSICS
G11—INFORMATION STORAGE
G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
G—PHYSICS
G11—INFORMATION STORAGE
G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
G—PHYSICS
G11—INFORMATION STORAGE
G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
G11B27/19—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
G11B27/28—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
G11B27/32—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on separate auxiliary tracks of the same or an auxiliary record carrier
G—PHYSICS
G11—INFORMATION STORAGE
G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
G11B2220/00—Record carriers by type
G11B2220/20—Disc-shaped record carriers
G11B2220/25—Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
G11B2220/2537—Optical discs
G11B2220/2562—DVDs [digital versatile discs]; Digital video discs; MMCDs; HDCDs
PERSONALIZATION SERVICES FOR ENTITIES FROM MULTIPLE SOURCES

FIELD OF THE INVENTION

The present invention relates to the presentation of multimedia entities, and more particularly to the presentation of locally stored media entities and/or remotely obtained network media entities, modified according to a viewer's preferences or an entity owner's criteria. In addition, the present invention relates to the process of acquiring new multimedia entities for playback.

BACKGROUND OF THE INVENTION

In marketing, many things have long been recognized as aiding success, such as increasing customer satisfaction by providing personalized service, fast service, access to related or updated information, and the like. Traditional marketing has made use of such things as notices of promotional offers for related products, for example by providing coupons. Additionally, some studies have shown that simple repeated brand exposure, such as by advertisement, increases recognition and sales. One of the largest marketing industries today is the entertainment industry and related industries. Digital versatile disks (DVDs) are poised to dominate as the delivery media of choice for the consumer sales market of the home entertainment industry, business computer industry, home computer industry, and the business information industry with a single digital format, eventually replacing audio CDs, videotapes, laserdiscs, CD-ROMs, and video game cartridges. To this end, DVD has widespread support from all major electronics companies, all major computer hardware companies, and all major movie and music studios. In addition, new computer readable medium formats and disc formats such as High Definition DVD (HD-DVD), Advanced Optical Discs (AOD), and Blu-ray Disc (BD), as well as new mediums such as Personal Video Recorders (PVR) and Digital Video Recorders (DVR), are just some of the future mediums under development. The integration of computers, the release of new operating systems including the Microsoft Media Center Edition of Windows XP, the upcoming release of the next Microsoft operating system due in 2005 and codenamed "Longhorn", and many other computer platforms that interface with entertainment systems are also entering into this market. Currently, the fastest growing marketing and informational access avenue is the Internet. The share of households with Internet access in the U.S. soared by 58% in two years, rising from 26.2% in December 1998 to 41.5% in August 2000 (Source: Falling Through the Net: Toward Digital Inclusion, National Telecommunications and Information Administration, October 2000). However, in the DVD-Video arena, little has been done to utilize this vast power for access to up-to-date, new, and promotional information to further the aims of improving marketability and customer satisfaction. Additionally, content is generally developed for use on a particular type of system. If a person wishes to view the content but does not have the correct system, the content may be displayed poorly or may not be able to be displayed at all. Accordingly, improvements are needed in the way that content is stored, located, distributed, presented and categorized.

SUMMARY OF THE INVENTION

One present embodiment advantageously addresses the needs mentioned previously, as well as other needs, by providing services that facilitate the access and use of related or updated content to provide augmented or improved content with playback of content.
Another embodiment additionally provides for the access and use of entities for the creation, modification and playback of collections. One embodiment can include a method comprising receiving a request for content; searching for a plurality of entities in response to the received request, the plurality of entities each having entity metadata associated therewith; and creating a collection, the collection comprising the plurality of entities and collection metadata. Alternatively, the method can further include locating the plurality of entities; analyzing the entity metadata associated with each of the plurality of entities; and downloading only the entities that meet a set of criteria. An alternative embodiment can include a data structure embodied on a computer readable medium comprising a plurality of entities; entity metadata associated with each of the plurality of entities; and a collection containing each of the plurality of entities, the collection comprising collection metadata for playback of the plurality of entities. Yet another embodiment can include a method comprising receiving a request for content; creating a collection comprising a plurality of entities meant for display with a first system and at least one entity meant for display on a second system; and outputting the collection comprising the plurality of entities meant for display on the first system and the at least one entity meant for display on the second system to the first system. Another alternative embodiment can include a method comprising receiving a request for content; searching for a plurality of entities in response to the received request, the plurality of entities each having entity metadata associated therewith; and creating a collection comprising the plurality of entities, the collection having collection metadata. Still another embodiment can include a method for searching for content comprising the steps of receiving at least one search parameter; translating the search parameter into a media identifier; and locating the content associated with the media identifier. Optionally, the content is a collection comprising a plurality of entities, the method further comprising determining one of the plurality of entities can not be viewed; and locating an entity for replacing the one of the plurality of entities that can not be viewed. One optional embodiment includes a system for locating content comprising a playback runtime engine for constructing a request from a set of search parameters; a collection name service for translating the request into a collection identifier; and a content search engine for searching for content associated with the collection identifier. Another embodiment can be characterized as a method comprising receiving a request for content; searching for a plurality of entities in response to the received request, the plurality of entities each having entity metadata associated therewith; creating a first group of entities that meet the received request, each entity within the first group of entities having entity metadata associated therewith; comparing the first group of entities that meet the received request or the associated entity metadata to a user profile; and creating a collection comprising at least one entity from the first group of entities. 
Yet another embodiment can be characterized as a system comprising a plurality of devices connected via a network; a plurality of shared entities located on at least one of the plurality of devices; and a content management system located on at least one of the plurality of devices for creating a collection using at least two of the plurality of shared entities. Still another embodiment can be characterized as a method of modifying a collection comprising analyzing metadata associated with the collection; and adding at least one new entity to the collection based upon a set of presentation rules. Another preferred embodiment can be characterized as a method of displaying content comprising providing a request to a content manager, the request including a set of criteria; searching for a collection that at least partially fulfills the request, the collection including a plurality of entities; determining which of the plurality of entities within the collection do not meet the set of criteria; and searching for a replacement entity to replace one of the plurality of entities within the collection that do not meet the set of criteria. Another embodiment includes a method of modifying an entity, the entity having entity metadata associated therewith, comprising the steps of comparing the entity or the entity metadata with a set of presentation rules; determining a portion of the entity that does not meet the set of presentation rules; and removing the portion of the entity that does not meet the set of presentation rules. Yet another embodiment can be characterized as a collection embodied on a computer readable medium comprising a digital video entity; an audio entity, for providing an associated audio for the digital video; a menu entity, for providing interactivity points within or associated with the digital video; and collection metadata for defining the playback of the digital video entity, the audio entity, and the menu entity.
Still another embodiment can be characterized as a method of downloading streaming content comprising downloading a first portion of the streaming content; downloading a second portion of the streaming content while the first portion of the streaming content is also downloading; outputting the first portion of the streaming content for display on a presentation device; and outputting the second portion of the streaming content for display on a presentation device after outputting the first portion of the streaming content; wherein a third portion of the streaming content originally positioned in between the first portion of the streaming content and the second portion of the streaming content is not output for display on a presentation device.

In one embodiment, the invention can be characterized as an integrated system for combining web or network content and local content (either on disc or cached) comprising a display; a computing device operably coupled to a local media, a network and the display, the computing device at least once accessing data on the network, the computing device comprising: a storage device, a presentation rendering device such as a browser having a presentation engine displaying content on the display, an application programming interface residing in the storage device, a decoder at least occasionally processing content received from the local media and producing media content substantially suitable for display on the display, and a navigator coupled to the decoder and the application programming interface, the navigator facilitating user or network-originated control of the playback of the local media, the computing device receiving network content from the network and combining the network content with the media content, the presentation engine displaying the combined network content and media content on the display.

In one exemplary embodiment, the network content may be transferred over a network that supports Universal Plug and Play (UPnP) or another methodology for connecting devices on a network. The UPnP standard brings the PC peripheral Plug and Play concept to the home network. Devices that are plugged into the network are automatically detected and configured. In this way new devices, such as an Internet gateway or a media server containing content, can be added to the network and provide the system with additional access to content. The UPnP architecture is based on standards such as TCP/IP, HTTP, and XML. UPnP can also run over different networks such as IP stack based networks, phone lines, power lines, Ethernet, Wireless (RF), and IEEE 1394 FireWire. UPnP devices may also be used as the presentation device. Given this technology and others such as Bluetooth, WiFi 802.11a/b/g, etc., the various blocks in the system do not need to be contained in one device, but are optionally spread out across a network of various devices, each performing a specific function.

In another embodiment, using REBOL (Relative Expression-Based Object Language) and IOS creates a distributed network where systems can share media. REBOL is not a traditional computer language like C, BASIC, or Java. Instead, REBOL was designed to solve one of the fundamental problems in computing: the exchange and interpretation of information between distributed computer systems. REBOL accomplishes this through the concept of relative expressions. Relative expressions, also called "dialects", provide greater efficiency for representing code as well as data, and are REBOL's greatest strength.
The ultimate goal of REBOL is to provide a new architecture for how information is stored, exchanged, and processed between all devices connected over the Internet. IOS provides a better approach to group communications. IOS goes beyond email, the web, and Instant Messaging (IM) to provide real-time electronic interaction, collaboration, and sharing. IOS opens a private, noise-free channel to other nodes on the network.
In another embodiment, the invention can be characterized as a method comprising: a) receiving a removable media; b) checking if said removable media supports media source integration; c) checking if said removable media source is a specific type (such as DVD) responsive to said removable media supporting source integration; d) checking whether said device is in a movie mode or a system mode responsive to said removable media being a DVD; e) launching standard playback and thereafter returning to said step (a) responsive to said device being in said movie mode; f) checking if said device has a default player mode of source integration when said device is in said system mode; g) launching standard playback and thereafter returning to said step (a) responsive to said device not having a default player mode of source integration; h) checking if said removable media contains a device-specific executable program when said device has a default player mode of source integration; i) executing said device-specific executable program when said device has said device-specific executable program and thereafter returning to said step (a); j) checking whether said device has a connection to a remote media source; k) launching a default file (or other specific portion) from said removable media when said device does not have a remote media source connection and thereafter returning to said step (a); l) checking whether said remote media source has content relevant to said removable media; m) displaying said relevant content when said relevant content exists and thereafter returning to said step (a); n) otherwise launching a default file (or other specific portion) from said removable media and thereafter returning to said step (a); o) returning to said step (f). A sketch of this flow is shown below.

One embodiment of the present invention can be characterized as a method comprising receiving a request for content; searching for a plurality of entities in response to the received request, the plurality of entities each having entity metadata associated therewith; and creating a collection, the collection comprising the plurality of entities and collection metadata. These requests can be made to local devices, to peripherals of the device, to devices on a local or remote network, or to the Internet. In addition, metadata can optionally be encrypted, requiring specific decryption keys to unlock it for use. Another embodiment of the present invention can be characterized as a data structure embodied on a computer readable medium comprising a plurality of entities; entity metadata describing each of the plurality of entities; a collection containing each of the plurality of entities; and collection metadata describing the collection. Yet another embodiment of the present invention can be characterized as a system comprising receiving a request for content; creating a collection comprising a plurality of entities meant for display on a first type of presentation device; adding at least one entity meant for display on a second type of presentation device to the collection; and outputting the collection comprising the plurality of entities meant for display on the first type of presentation device and the at least one entity meant for display on the second type of presentation device to the first type of presentation device.
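As an informal illustration only, the removable-media handling flow of steps (a) through (o) above might be sketched as follows. The function and property names (supportsSourceIntegration, findContentFor, etc.) are hypothetical and not part of the claimed method.

    // Hypothetical sketch of steps (a)-(o); names are illustrative, not from the patent.
    function handleRemovableMedia(device, media) {
      if (!media.supportsSourceIntegration) {                  // step (b)
        return 'standard-playback';
      }
      if (media.type === 'DVD') {                              // step (c)
        if (device.mode === 'movie') {                         // steps (d), (e)
          return 'standard-playback';
        }
        if (!device.defaultPlayerModeIsSourceIntegration) {    // steps (f), (g)
          return 'standard-playback';
        }
      }
      if (media.deviceSpecificExecutable) {                    // steps (h), (i)
        return media.deviceSpecificExecutable;                 // execute the program
      }
      if (!device.hasRemoteMediaSource) {                      // steps (j), (k)
        return media.defaultFile;
      }
      const relevant = device.remoteMediaSource.findContentFor(media); // step (l)
      return relevant || media.defaultFile;                    // steps (m), (n)
    }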
An alternative embodiment of the present invention can be characterized as a method comprising receiving a request for content; searching for a plurality of entities in response to the received request; creating a collection comprising the plurality of entities, the collection having collection metadata; and generating presentation rules for the entities based at least upon the collection metadata. This embodiment can further comprise outputting the collection to a presentation device based upon the generated presentation rules.
Yet another alternative embodiment of the present invention can include a method comprising receiving a request for content; searching for a plurality of entities in response to the received request, the plurality of entities each having entity metadata; comparing a user profile to the entity metadata for each of the plurality of entities; and creating a collection comprising the plurality of entities based at least upon the comparison of the user profile to the entity metadata. In an alternative embodiment the present invention includes a system comprising a plurality of computers connected via a network; a plurality of shared entities located on at least one of the plurality of computers; and a content management system located on at least one of the plurality of computers for creating a collection using at least two of the plurality of shared entities. Another alternative embodiment of the present invention includes a method of modifying an existing collection comprising analyzing metadata associated with the existing collection; and adding at least one new entity to the existing collection based upon a system profile. In another embodiment, the method can further comprise removing at least one entity from the existing collection, wherein the added entity takes the place of the removed entity. Yet another embodiment includes a method of displaying a context sensitive interactive menu comprising the steps of outputting content to a display device; receiving a request to display a menu; deriving the context sensitive menu from the current content being output; and outputting the context sensitive menu to the display device.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other aspects, features and advantages of the present invention will be more apparent from the following more particular description thereof, presented in conjunction with the following drawings, wherein:

FIG. 1 is a block diagram illustrating a hardware platform including a playback subsystem, presentation engine, entity decoders, and a content services module;

FIG. 2 is a diagram illustrating a general overview of a media player connected to the Internet according to one embodiment;

FIG. 3 is a block diagram illustrating a plurality of components interfacing with a content management system in accordance with one embodiment;

FIG. 4 is a block diagram illustrating a system diagram of a collection and entity publishing and distribution system connected to the content management system of FIG. 3;

FIG. 5 is a diagram illustrating a media player according to one embodiment;

FIG. 6 is a diagram illustrating a media player according to another embodiment;

FIG. 7 is a diagram illustrating an application programming system in accordance with one embodiment;

FIG. 8 is a conceptual diagram illustrating the relationship between entities, collections, and their associated metadata;

FIG. 9 is a conceptual diagram illustrating one example of metadata fields for one of the various entities;

FIG. 10 is a conceptual diagram illustrating one embodiment of a collection;

FIG. 11 is a diagram illustrating an exemplary collection in relation to a master timeline;

FIG. 12 is a block diagram illustrating a virtual DVD construct in accordance with one embodiment;

FIG. 13 is a diagram illustrating a comparison of a DVD construct as compared to the virtual DVD construct described with reference to FIG. 12;
FIG. 14 is a block diagram illustrating a content management system locating a pre-defined collection in accordance with an embodiment;

FIG. 15 is a block diagram illustrating a search process of the content management system of FIG. 14 for locating a pre-defined collection in accordance with one embodiment;

FIG. 16 is a block diagram illustrating a content management system creating a new collection in accordance with an embodiment;

FIG. 17 is a block diagram illustrating a search process of the content management system of FIG. 16 for locating at least one entity in accordance with one embodiment;

FIG. 18 is a block diagram illustrating a content management system publishing a new collection in accordance with an embodiment;

FIG. 19 is a block diagram illustrating a content management system locating and modifying a pre-defined collection in accordance with an embodiment;

FIG. 20 is a block diagram illustrating a search process of the content management system of FIG. 19 for locating a pre-defined collection in accordance with one embodiment;

FIG. 21 is a block diagram illustrating an example of a display device receiving content from local and offsite sources according to one embodiment;

FIG. 22 is a block diagram illustrating an example of a computer receiving content from local and offsite sources according to one embodiment;

FIG. 23 is a block diagram illustrating an example of a television set-top box receiving content from local and offsite sources according to one embodiment;
FIG. 24 is a block diagram illustrating media and content integration according to one embodiment;

FIG. 25 is a block diagram illustrating media and content integration according to another embodiment;

FIG. 26 is a block diagram illustrating media and content integration according to yet another embodiment;

FIG. 27 is a block diagram illustrating one example of a client content request and the multiple levels of trust for acquiring the content in accordance with an embodiment;

FIG. 28 shows a general exemplary diagram of synchronous viewing of content according to one embodiment;

FIG. 29 is a block diagram illustrating a user with a smart card accessing content in accordance with an embodiment; and

FIG. 30 is a diagram illustrating an exemplary remote control according to an embodiment.

DETAILED DESCRIPTION OF THE DRAWINGS

The following description is not to be taken in a limiting sense, but is made merely for the purpose of describing the general principles of the invention. The scope of the invention should be determined with reference to the claims. Metadata generally refers to data about data. A good example is a library catalog card, which contains data about the nature and location of the data in the book referred to by the card. Several organizations define metadata for media, including Publishing Requirements for Industry Standard Metadata (PRISM, http://www.prismstandard.org/), the Dublin Core initiative (http://dublincore.org/), MPEG-7 and others. A system and method for metadata distribution to customize media content playback is described in United States Patent Publication No. 20030122966. Metadata can be important on the web because of the need to find useful information from the mass of information available. Manually created metadata (or metadata created by a software tool where the user defines the points in the timeline of the audio and video and specifies the metadata terms and keywords) adds value because it ensures consistency. In one embodiment, metadata can be generated by the system described herein. Metadata can be used to create a relationship between web pages about a particular topic. For example, a web page that contains within its metadata a word or phrase relating to a topic can be identified as topically related to other web pages about that topic when all web pages about that topic contain the same word within their metadata. Metadata can also ensure that variations in terminology are overcome. For example, if one topic has two or more names, terms or phrases that refer to it, each of these names can be used within the metadata of entities that relate to the topic. For example, an article about sport utility vehicles could also be given the metadata keywords '4 wheel drives', '4WDs' and 'four wheel drives', as this is what sport utility vehicles are known as, for example, in Australia. As referred to herein, an entity is a piece of data that can be stored on a computer readable medium. For example, an entity can include audio data, video data, graphical data, textual data, or other sensory information. An entity can be stored in any media format, including multimedia formats, file based formats, or any other format that can contain information, whether graphical, textual, audio, or other sensory information. Entities are available on any disk based media, for example, digital versatile disks (DVDs), audio CDs, videotapes, laser-disks, CD-ROMs, or video game cartridges.
Furthermore, entities are available on any computer readable medium, for example, a hard drive, a memory of a server computer, RAM, ROM, etc. Furthermore, entities are available over any network, for example, the Internet, a WAN, a LAN, a digital home network, etc. In some embodiments, an entity will have entity metadata associated therewith. Examples of entity metadata will be further described herein at least with reference to FIG. 9. As referred to herein, a collection includes a plurality of entities and collection metadata. The collection metadata defines the properties of the collection and how the plurality of entities are related within the collection. Collection metadata will be further defined herein at least with reference to FIGS. 8-10. In accordance with one embodiment, a user of a content management system can create and modify existing collections. Different embodiments of the content management system will be described herein at least with reference to FIGS. 1-4 and 6-7. Advantageously, the user of the content management system is able to create new collections from entities that are stored on a local computer readable medium, or generated at a local computer system or other device for providing locally generated content. (A local computer readable medium refers to a computer readable medium that is within or mounted within a local computer system or other device for accessing the local computer readable medium, or that is within or mounted within another computer system that is located within the same room, building or facility as the local computer system and coupled to the local computer system through a data channel, such as a network data channel, e.g., a wired or wireless network data channel, e.g., a local area network (LAN). A local computer readable medium is in contrast to a remote computer readable medium, which is a computer readable medium that is within or mounted within a remote computer system or other device for accessing the remote computer readable medium that is not located within the same room, building or facility as the local computer system or the other computer system, and that is coupled to the local computer system through a data channel, such as a network data channel, e.g., a wired or wireless network data channel, e.g., a wide area network (WAN), such as the Internet.) Alternatively, the user may also be able to retrieve entities stored on a remote computer readable medium, or generated at a remote computer system or other device for generating remotely generated content, over a data channel, such as a network data channel, e.g., a wide area network, e.g., the Internet or another network, to substitute for entities that are not on a local computer readable medium or locally generated. In accordance with another embodiment, a search engine is provided that searches for entities and collections located within different trust levels. Trust levels will be further described herein with reference to FIG. 27. In one embodiment, the results of a search are based at least upon the trust level where the entity is stored. In another embodiment, the results of the search are based upon metadata associated with an entity. In yet another embodiment, the search results can be based upon a user profile or a specified request.
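To make the entity/collection relationship described above concrete, the following is a minimal sketch of how an entity and a collection with their metadata might be represented. The field names and values are assumptions for illustration; the patent does not prescribe a particular schema.

    // Illustrative (hypothetical) shape of entities and a collection with metadata.
    const videoEntity = {
      id: 'entity-001',
      type: 'video',
      location: 'file:///media/feature.mpg',          // on a local computer readable medium
      metadata: { title: 'Feature', durationSec: 5400, keywords: ['action', 'feature'] }
    };

    const audioEntity = {
      id: 'entity-002',
      type: 'audio',
      location: 'http://example.com/commentary.mp3',  // retrieved over a WAN such as the Internet
      metadata: { title: 'Director commentary', language: 'en' }
    };

    const collection = {
      id: 'collection-123456789',
      entities: [videoEntity, audioEntity],
      metadata: {
        title: 'Feature with commentary',
        // collection metadata relating the entities on a playback timeline
        timeline: [
          { entityId: 'entity-001', startSec: 0 },
          { entityId: 'entity-002', startSec: 0 }    // commentary plays alongside the video
        ]
      }
    };

    console.log(collection.metadata.title);           // -> 'Feature with commentary'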
An application programming interface (API) can be used in one embodiment based on a scripting model, leveraging, e.g., industry standard XML/HTML and JavaScript standards and other proprietary methods for integrating locally stored media content and remotely obtained network media content, e.g., video content, in a local interactive application such as a web page. The application programming interface (API) enables embedding, e.g., video content in a local interactive application such as a web page, and can display the video in full screen or sub-window format. Commands can be executed to control the playback, search, and overall navigation through the embedded content. The application programming interface will be described in greater detail at least with reference to FIGS. 2 and 5-7. In addition, behavioral metadata is used by the application programming interface in some embodiments to provide rules for presentation of entities and collections. Behavioral metadata, which is one type of collection metadata, will be described in greater detail herein at least with reference to FIG. 11. The application programming interface can be queried and/or set using properties. Effects may be applied to playback. Audio Video (AV) sequences have an associated time element during playback, and events are triggered to provide notification of various playback conditions, such as time changes, title changes, and user operation (UOP) changes. Events can be used in scripting and for synchronizing audio and/or video-based content (AV content) with other media types, such as XML/HTML, locally cached content, or read only memory (ROM)-based content, external to the AV content. This will be described in greater detail herein with reference to FIGS. 5-7. In one embodiment the application programming interface (API) enables content developers to create products that seamlessly combine, e.g., content from a network, such as the Internet, e.g., on a remote computer readable medium or remotely generated, with content from digital versatile disk-read only memory (DVD-ROM), digital versatile disk-audio (DVD-Audio), compact disc-audio (CD-Audio), compact disc-digital audio (CD-DA), or high definition discs (Blu-ray, HD-DVD, AOD), e.g., on a local computer readable medium. There are several ways to seamlessly navigate between the AV content and the XML/HTML (ROM) content and back. In one example, the AV content is authored so as to have internal triggers that cause an event that can be received by external media types. Alternatively, the AV content is authored so as to have portions of the AV content that can be associated with triggering an event that can be received by external media types. For example, in DVD-Video, entry and exit points can be devised using dummy titles and title traps. A dummy title is an actual title within the DVD; however, in one example, there is no corresponding video content associated with the title. For example, the dummy title can have a period, e.g., 2 seconds, of black space associated with it. The dummy title is used to trigger an event, and thus is referred to as a title trap. During the DVD-Video authoring, dummy titles are created that, when invoked, display n seconds (where n is any period of time) of a black screen, then return.
Additionally, a middleware software layer informs the user interface that a certain title has been called, and the user interface can trap on this (in HTML, using a DOM event and a JavaScript event handler) and display an alternate user interface instead of the normal AV content. FIG. 7 depicts how these devices have been employed to integrate HTML as the user interface and DVD-Video content as the AV content. In this example, the introductory AV content usually has user operation control functions, such as UOPs in DVD-Video, for prohibiting forwarding through an FBI warning and the like. As with many types of AV content, there is a scene selection on a main menu. However, in one embodiment, when the middleware layer traps on title number 4 when played on a device such as depicted in FIGS. 1-4, a unique HTML Enhanced Scene Selection menu (web page) is presented. The enhancement can be as simple as showing the scene in an embedded window so the consumer can decide if this is the desired scene before leaving the selection page. After using this enhanced menu, a hyperlink is provided which returns to the Main menu by playing title number 2, which is a dummy title (entry point) back into the main DVD-Video menu. Additionally, the JavaScript can load an Internet server page instead of the ROM page upon invocation, thereby updating the ROM content with fresher, newer server content. An example of updating of content is described, for example, in U.S. Patent Application No. 09/476,190, entitled A SYSTEM, METHOD AND ARTICLE OF MANUFACTURE FOR UPDATING CONTENT STORED ON A PORTABLE STORAGE MEDIUM. Hereinafter, by the use of disc, disk, DVD or DVD-Video, it is to be understood that all of these disk/disc media or locally cached content are included. The combination of the Internet with DVD-Video creates a richer, more interactive, and personalized entertainment experience for users. Further, the application programming interface (API) provides a common programming interface allowing playback of this combined content on multiple playback platforms simultaneously. While the application programming interface (API) allows customized content and functions tailored for specific platforms, the primary benefit of the application programming interface (API) is that content developers can create content once for multi-platform playback, without the need of becoming an expert programmer on specific platforms, such as Windows, Macintosh, Linux, Java, Sony Playstation, Microsoft XBOX, Nintendo, real-time operating systems, and other platforms. As described above, this is accomplished through the use of events. Internet connectivity is not a requirement for the use of the application programming interface (API). In addition, audio media such as compact disc-digital audio (CD-DA) can also be enhanced by use of the application programming interface (API). This is also described in the document InterActual Usage Guide for Developers.
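A minimal sketch of the title-trap pattern described above follows. It assumes the middleware layer dispatches a DOM-style event when a new title starts; the event name, its detail fields, and the player method are illustrative placeholders, not the actual InterActual/ITX API.

    // Hypothetical title-trap handler: swap in an enhanced HTML menu for a dummy title.
    const ENHANCED_SCENE_MENU_TITLE = 4;  // dummy title used as a trap
    const MAIN_MENU_ENTRY_TITLE = 2;      // dummy title that re-enters the DVD main menu

    document.addEventListener('titleChange', function (event) {
      if (event.detail.title === ENHANCED_SCENE_MENU_TITLE) {
        // Show the HTML Enhanced Scene Selection page instead of the normal AV content.
        // This could equally be a fresher page loaded from an Internet server.
        window.location.href = 'scenes.html';
      }
    });

    // The middleware layer might signal a title change like this:
    // document.dispatchEvent(new CustomEvent('titleChange', { detail: { title: 4 } }));

    // A hyperlink on the enhanced menu returns to the DVD main menu via the entry-point title:
    function returnToMainMenu(player) {
      player.playTitle(MAIN_MENU_ENTRY_TITLE);  // hypothetical playback command
    }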
Personal video recorders (PVRs), such as TiVo, Replay, and digital versatile disk-recordable (DVD-R) devices, allow users to purchase video or audio products (entities or collections) by downloading video or audio products from a satellite, a cable television distribution network, the Internet, another network or other high-bandwidth systems. When so downloaded, the video or audio can be stored to a local disk system or burned onto a recordable media such as DVD-R. In one embodiment, the content stored on the PVR or recordable media can be supplemented with additional content, e.g., from a LAN, the Internet and/or another network, and displayed or played on a presentation device, such as a computer screen, a television, and/or an audio and/or video playback device. The combination of the content with the additional content can be burned together onto a recordable media, or stored together on, for example, a PVR, computer hard drive, or other storage medium.

Referring now to FIG. 1, a diagram is shown illustrating the interaction between a playback subsystem 102, a presentation engine 104, entity decoders 106 and a content services module 108 according to an embodiment. The system shown in FIG. 1 can be utilized in many embodiments. Shown are a hardware platform 100, the playback subsystem 102, the content services module 108, the presentation engine 104, and the entity decoders 106. The hardware platform includes the playback subsystem 102, the content services module 108, the presentation engine 104 and the entity decoders 106. The content services module 108 gathers, searches, and publishes entities and collections in accordance with one embodiment. The content services module 108 additionally manages the access rights for entities and collections, as well as logging the history of access to the entities and collections. These features are described in greater detail herein at least with reference to FIGS. 3 and 4. The presentation engine 104 determines how and where the entities will be displayed on a presentation device (not shown). The presentation engine utilizes the metadata associated with the entities and presentation rules to determine where and when the entities will be displayed. Again, this will be further described herein at least with reference to FIGS. 3 and 4. The playback subsystem 102 maintains the synchronization, timing, ordering and transitions of the various entities. This is done in ITX through the event model (described in greater detail below with reference to FIG. 7) triggering a script event handler. In this system, behavioral metadata specifies what actions will take place based upon a time code or media event during playback, and the playback subsystem 102 will start the actions at the correct time in playback. The playback subsystem 102 also processes any scripts of the collections and has the overall control of the entities, determining when an entity is presented or decoded based upon event synchronization or actions specified in the behavioral metadata. The playback subsystem 102 accepts user input to provide the various playback functions including, but not limited to, play, fast-forward, rewind, pause, stop, slow, skip forward, skip backward, and eject. The user inputs can come from, for example, the remote control depicted in FIG. 30. The playback subsystem 102 receives signals from the remote control and executes a corresponding command such as one of the commands listed above. In one embodiment, the synchronization is done using Events.
An event is generally the result of a change of state or a change in data. Thus, the playback subsystem monitors events and uses the events to trigger an action (e.g., the display of an entity). See, e.g., the event section of FIG. 7 for a DVD-Video example that uses events. In one embodiment, the entity decoder 106 allows entities to be displayed on a presentation device. The entity decoder, as will be described in greater detail with reference to FIGS. 3 and 4, is one or more decoders that read different types of data. For example, the entity decoders can include a video decoder, an audio decoder, and a web browser. The video decoder reads video files and prepares the data within the files for display on a presentation device. The audio decoder reads audio files and prepares the audio for output from the presentation device. There are numerous markup languages that optionally are used in the content management system and that can be interpreted by the browser. The browser optionally supports various markup languages including, but not limited to, HTML, XHTML, MSHTML, MHP, SMIL, etc. While HTML is referenced throughout this document, virtually any markup language or alternative meta-language or script language can be used. In one embodiment, the presentation device is a presentation rendering engine that supports virtual machines, scripts, or executable code. Suitable virtual machines, scripts and executable code include, for example, Java, a Java Virtual Machine (JVM), MHP, PHP, or some other equivalent engine. As described herein, by the use of browser, web browser, presentation device or engine, it is to be understood that all of these presentation devices and rendering engines are included. All of the features of the system in FIG. 1 will be described in greater detail at least with reference to the following description of FIGS. 3 and 4.
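The event-driven synchronization described above might look roughly like the following sketch, in which behavioral metadata lists actions keyed to time codes and a playback subsystem triggers them as time events arrive. The data shape and function names are assumptions made for illustration.

    // A minimal sketch (names assumed) of behavioral metadata driving entity presentation.
    const behavioralMetadata = [
      { atSec: 30,  action: 'show', entityId: 'still-castPhoto' },
      { atSec: 95,  action: 'hide', entityId: 'still-castPhoto' },
      { atSec: 120, action: 'show', entityId: 'html-trivia' }
    ];

    // Called on each time-change event reported by the decoder or navigator.
    function onTimeEvent(currentSec, presentEntity, removeEntity) {
      for (const rule of behavioralMetadata) {
        if (rule.atSec === currentSec) {
          if (rule.action === 'show') presentEntity(rule.entityId);
          else removeEntity(rule.entityId);
        }
      }
    }

    // Example invocation, assuming the decoder reports once per second:
    // onTimeEvent(30, id => console.log('display', id), id => console.log('remove', id));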
Referring to FIG. 2, a diagram is shown illustrating a general overview of a media player connected to the Internet according to one embodiment. Shown are a media player 202, a media subsystem 208, a presentation subsystem 206, a content services module 212, a playback runtime engine 214, a presentation layout engine 216, entity decoders 210, and the Internet 204. In a preferred embodiment, the media player 202 is connected to the Internet 204, for example, through a cable modem, T1 line, DSL or dial-up modem. The media player 202 includes the presentation subsystem 206, the media subsystem 208 and the entity decoders 210. The media subsystem 208 further includes the content services module 212, the playback runtime engine 214 and the presentation layout engine 216. While FIG. 2 shows the content services module 212 as part of the media subsystem 208, alternatively, as shown in FIGS. 3 and 4, the content services module is not part of the media subsystem 208. The playback runtime engine 214 is coupled to the content services module 212 and provides the content services module 212 with a request for a collection. The request can include, e.g., a word search, metatag search, or an entity or a collection ID. The playback runtime engine 214 also provides the content services module 212 with a playback environment description. The playback environment description includes information about the system capabilities, e.g., the display device, Internet connection speed, number of speakers, etc. One example of the playback request described in XML can be as follows:

    <?xml version="1.0" encoding="UTF-8"?>
    <Metadata xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:noNamespaceSchemaLocation="REQ.xsd">
      <Module>
        <collectionList>
          <id>123456789</id>
          <id>223456789</id>
          <id>323456789</id>
        </collectionList>
        <requestedPlayback>
          <videoDisplay>
            <videoDisplaytype>01</videoDisplaytype>
          </videoDisplay>
          <videoResolutions>
            <resolution>
              <videoXResolution>1024</videoXResolution>
              <videoYResolution>768</videoYResolution>
            </resolution>
          </videoResolutions>
          <navigationDevices>
            <device>03</device>
          </navigationDevices>
          <textInputDeviceReqd>01</textInputDeviceReqd>
        </requestedPlayback>
      </Module>
    </Metadata>

One example of the playback environment description described in XML can be as follows:

    <?xml version="1.0" encoding="UTF-8"?>
    <Metadata xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:noNamespaceSchemaLocation="CAP.xsd">
      <Module>
        <Capabilities>
          <platforms>
            <platform>01</platform>
            <platform>02</platform>
          </platforms>
          <products>
            <productID>01</productID>
            <productID>02</productID>
          </products>
          <videoDisplays>
            <videoDisplaytype>01</videoDisplaytype>
            <videoDisplaytype>02</videoDisplaytype>
          </videoDisplays>
          <videoResolutions>
            <resolution>
              <videoXResolution>1024</videoXResolution>
              <videoYResolution>768</videoYResolution>
            </resolution>
            <resolution>
              <videoXResolution>800</videoXResolution>
              <videoYResolution>600</videoYResolution>
            </resolution>
          </videoResolutions>
          <navigationDevices>
            <device>02</device>
            <device>03</device>
          </navigationDevices>
          <textInputDeviceReqd>01</textInputDeviceReqd>
          <viewingDistances>
            <view>01</view>
            <view>02</view>
          </viewingDistances>
        </Capabilities>
      </Module>
    </Metadata>

The presentation layout engine 216 determines where on the presentation device different entities within a collection will be displayed by reading collection metadata and/or entity metadata. As described below, at least with reference to FIGS. 8-10, metadata can be stored, e.g., in an XML file. The presentation layout engine 216 also optionally uses the playback environment description (e.g., the XML example shown above) to determine where on the presentation device the entities will be displayed. The presentation layout engine also reads the playback environment description to determine the type of display device that will be used for displaying the entities or the collection. In one example, multiple entities within a collection will be displayed at the same time (see FIG. 11, for example). The presentation layout engine 216 determines where on the display device each of the entities will be displayed by reading the collection metadata and the presentation environment description. The entity decoders 210 include at least an audio and a video decoder. Preferably, the entity decoders 210 include a decoder for still images, text and any other type of media that can be displayed upon a presentation device. The entity decoders 210 allow the many different types of content (entities) that can be included in a collection to be decoded and displayed. The media player 202 can operate with or without a connection to the Internet 204. When the media player 202 is connected to the Internet 204, entities and collections not locally stored on the media player 202 are available for display. The content services module, as is shown in FIG. 4, includes a content search engine. The content search engine searches the Internet for entities and collections. The entities and collections can be downloaded and stored locally and then displayed on a display device. Alternatively, the entities and collections are streamed to the media player 202 and directly displayed on the presentation device. The searching and locating features will be described in greater detail herein at least with reference to FIGS. 3, 4, and 27. The Internet 204 is shown as a specific example of the offsite content source 106 shown in FIGS. 28-30. Thus, in a preferred embodiment, the media subsystem 208 is capable of retrieving, creating, searching for, publishing and modifying collections in accordance with one embodiment. The media subsystem 208 retrieves and searches for entities and collections through the content search engine and new content acquisition agent (both described in greater detail herein at least with reference to FIGS. 4, 14, and 15). The media subsystem publishes entities and collections through the use of an entity name service and a collection name service, respectively. The entity name service, the collection name service, and the publishing of collections are all described in greater detail at least with reference to FIGS. 4 and 14. The modification of entities and collections will also be described herein in greater detail at least with reference to FIGS. 4, 19 and 20. Additionally, the creation of an entity or collection will be described herein in greater detail with reference to FIGS. 4, 16, and 17. The content services module 212 manages the collections and entities. A content search engine within the content services module 212 acquires new collections and entities. The content services module 212 additionally publishes collections and entities for other media players to acquire.
Additionally, the content services module 212 is responsible for managing the access rights to the collections and entities.
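As an illustration of how the playback request (REQ) and capabilities description (CAP) shown in the XML examples above might be reconciled, the following sketch picks a resolution supported by both documents. The field names mirror the XML examples, but the matching logic itself is an assumption, not a procedure specified by the patent.

    // Hypothetical capability matching by a presentation layout engine.
    function chooseResolution(requestedResolutions, capableResolutions) {
      for (const want of requestedResolutions) {
        const match = capableResolutions.find(
          have => have.x === want.x && have.y === want.y
        );
        if (match) return match;
      }
      // Fall back to the first resolution the system can actually display.
      return capableResolutions[0];
    }

    const requested = [{ x: 1024, y: 768 }];                       // from the REQ.xsd document
    const capable   = [{ x: 1024, y: 768 }, { x: 800, y: 600 }];   // from the CAP.xsd document
    console.log(chooseResolution(requested, capable));             // -> { x: 1024, y: 768 }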
Referring to FIG. 3, a high level diagram is shown of the components that interface with the various parts of a content management system. Shown are a content management system 300, a media subsystem 302, a content services module 304, an entity decoder module 306, a system controller 308, a presentation device 310, a front panel display module 312, an asset distribution and content publishing module 314, a plurality of storage devices 316, a user remote control 318, a front panel input 320, other input devices 322, and system resources 324. The content management system 300 includes the media subsystem 302 (also referred to as the playback engine), the content services module 304, the entity decoder module 306 and the system controller 308. Within the content management system 300, the system controller 308 is coupled to the media subsystem 302. The media subsystem 302 is coupled to the content services module 304 and the entity decoder module 306, and the entity decoder module 306 is coupled to the media subsystem 302 and the content services module 304. The content management system 300 is coupled to the asset distribution and content publishing module 314, the plurality of storage devices 316, the user remote control 318, the front panel input 320, the other input devices 322, and the system resources 324. The user remote control 318 and the other input devices 322, e.g., a mouse, a keyboard, voice recognition, touch screen, etc., are collectively referred to herein as the input devices. The system controller 308 manages the input devices. In some embodiments, multiple input devices exist in the system and the system controller uses a set of rules based on the content type to determine whether an input device can be used and/or which input devices are preferred. For example, content that only has on-screen links and no edit boxes has a rule for the system controller to ignore keyboard input. The system controller 308 optionally has a mapping table that maps input signals from input devices and generates events or simulates other input devices. For example, the arrow keys on a keyboard map to a tab between fields or to up/down/left/right cursor movement. Optionally, remote controls use a mapping table to provide different functionality for the buttons on the remote. Various processes subscribe to input events, such as remote control events, and receive notification when buttons change state. The input devices are, for example, remote controls, keyboards, mice, trackballs, pens (tablet/palm pilot), T9 or numeric keypad input, body sensors, voice recognition, video or digital cameras doing object movement recognition, and any other known or later-developed mechanism for inputting commands into a computer system, e.g., the content management system 300. Furthermore, in some embodiments, the presentation devices 310 can serve as input devices as well. For example, on-screen controls or a touch screen can change based on the presentation of the content. The system controller 308 arbitrates the various input devices and helps determine the functionality of the input devices. Additionally, in one embodiment, arbitration occurs between the operations for playback, the behavioral metadata an entity or collection allows, and the specific immediate request of the user. For example, a user may be inputting a play command while the current entity being acted upon is a still picture. The system controller 308 interprets the command and decides what action to take.
The media subsystem 302, also referred to herein as the playback engine, in one embodiment is a state machine for personalized playback of entities through the decoders in the decoder module 306. The media subsystem 302 can be a virtual machine, such as a Java Virtual Machine, or exist with a browser on the device. Alternatively, the media subsystem 302 can be multiple state machines. Furthermore, the media subsystem can be run on the same processor or with different processors to maintain the one or more state machines. Following is an example hierarchy:

HTML/JavaScript layer
Java VM layer (implementing the Content & Media Services)
DVD Navigator
DVD-Video decoder

The hierarchy demonstrates how different application layers can have their own state machine, and that the layer above will take action having knowledge of the state of the layer below it. When a JavaScript command is issued to change the playback state of the DVD Navigator, the state machine has to ensure the command will be allowed. The level of arbitration of these state machines can be demonstrated in this manner. The playback engine 302 interacts with the content services module 304 to provide scripts and entities for playback on the presentation device 310. The content services module 304 utilizes the plurality of storage devices 316 as well as network accessible entities to provide the input to the playback engine 302. A presentation layout manager, shown in FIG. 4, exists within the playback engine 302 and controls the display of the content on the presentation device 310. The presentation device 310 comes in various formats or forms. In some cases displays can be in wide screen 16:9 and full screen 4:3 formats. Optionally, the display types are of various technologies including TFT, Plasma, LCD, Rear or Front Projection, DLP, and Tube (Flat or Curved), with different content safe areas, resolutions, pixel sizing, physical sizes, colors, font support, NTSC vs. PAL, and different distances from the user. In one embodiment, the media subsystem 302 controls the display of content based upon the presentation device 310 available. For example, a user in front of a computer, as compared to a user that is 10 feet away from a TV screen, needs different text sizing to make something readable. Additionally, the outside environment the presentation device is being viewed in, such as outside in direct sun or in an industrial warehouse, can also affect how the media subsystem will display content on the presentation device. In this example, the contrast or brightness of the presentation device will be adjusted to compensate for the outside light. Multiple presentation devices can be available for displaying different content. For example, the presentation device can be a speaker or headset in the case of audio playback, or can be some other sensory transmitter. Additionally, the presentation device can display a status for the content management system. The entity decoder module 306 decodes any of the different entities available to a user. The entity decoder module 306 sends the decoded entities to the media subsystem, which, as described above, controls the output of the entities to the presentation devices. For example, for markup, scripting, and animation (such as Flash or SVG) content, a browser is used to decode the content, and for a DVD disc a DVD Navigator/Decoder can be used to decode the video stream. The presentation device also has different ways of displaying the entity decoder output.
For example, if the source material is 4:3 and the presentation device is 16:9, the content is displayed with black bars on the right side and left side at 4:3, stretched to 16:9, or is displayed in a panoramic view where a logarithmic scaling of the content is used from center to the sides. In one embodiment, the metadata for the entity will prioritize which of these settings works best for the current entity. As described above, this is accomplished in one embodiment by having a preference defined in an XML file.
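As one possible illustration of how such a preference might be expressed and applied (a sketch only; the mode names, priority ordering, and function names are assumptions rather than the actual file format), the entity metadata could rank the display modes and the presentation layout manager could pick the first mode the presentation device supports:

// Hypothetical sketch: per-entity display-mode preferences and selection.
// The priority order and mode names are assumptions for illustration.
var entityDisplayPrefs = ["panoramic", "pillarbox", "stretch"];   // best first

function chooseDisplayMode(prefs, supportedModes) {
  for (var i = 0; i < prefs.length; i++) {
    if (supportedModes.indexOf(prefs[i]) !== -1) {
      return prefs[i];                 // first preferred mode the device supports
    }
  }
  return supportedModes[0];            // fall back to whatever the device offers
}

// Example: a device that cannot do panoramic scaling
var mode = chooseDisplayMode(entityDisplayPrefs, ["pillarbox", "stretch"]);  // "pillarbox"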
In one embodiment a user makes a request for content. The playback runtime engine constructs the request and provides a user request to the content manager. A user request is a description of the collection or list of collections requested and can include the specific components of the media playback system desired by the consumer for playback (e. g. "display B" if there are multiple displays available). The user request can be described in the form of metadata which the Content Manager can interpret. In one embodiment, the user request will additionally include a user profile that is used to tailor or interpret the request. A user profile is a description of a specific consumer's preferences which can be embodied in the user request. Optionally, the preferences are compiled by the new content acquisition agent over time and usage by the consumer. Preferably, the request also includes a system profile (also referred to herein as system information). The system profile is a description of the capabilities of the media playback system including a complete characterization of the input, output and signal processing components of the playback system. In one embodiment, the system profile is described in the form of metadata which the Content Manager interprets. The content manager will then search for entities that will be preferred for the given system and also that will be compatible within the playback system. In one embodiment, the content manager uses the user request, the user profile and the system profile in order to search for entities or collections. In one embodiment, the metadata associated with an entity is manually entered by the owner of the entity. Optionally, the manually entered metadata is automatically processed by the content management system that adds additional related metadata to the entity metadata. For example, the metadata of "4WD" is expanded to include 'four wheel drive', or further associated with 'sport utility vehicle' or 'SUV' which are similar terms for 4WD vehicles. This process is done while the metadata is created or done during the search process where search keywords are expanded to similar words as in this example. Alternatively, the content management system is utilized to create the metadata for the entity. Users are able to achieve real-time completely automated meta-tagging, indexing, handling and management of any audio and video entities. In one embodiment, this is done by creating dynamic indexes. The dynamically created index consists of a time-ordered set of time-coded statements, describing attributes of the source content. Because the statements are time-ordered and have millisecond-accurate time-codes, the statements are used to manipulate the source material trans-modally, i.e., allowing the editing of the video, by synchronistically manipulating the text, video and audio components. With this indexing a user is able to jump to particular words, edit a clip by selecting text, speaker or image, jump to next speaker, jump to next instance of current speaker, search for named speaker, search on accent or language, view key-frame of shot, extract pans, fades etc, or to find visually similar material. In real-time multimedia production, the system optionally automates the association of hyperlinked documents with real-time multimedia entities, instant cross-referencing of live material with archived material, triggering of events by attribute (e.g. show name when speaker X is talking). 
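The dynamically created index described above can be pictured as a time-ordered list of time-coded statements. The following is a minimal sketch (the statement fields, values, and function names are illustrative assumptions, not the disclosed index format) of how such an index could support jumping to a particular word or speaker:

// Hypothetical sketch of a dynamic index: time-ordered, time-coded statements
// describing attributes of the source content (times in milliseconds).
var index = [
  { timeMs: 1000, type: "speaker", value: "Speaker A" },
  { timeMs: 1500, type: "word",    value: "penguin" },
  { timeMs: 4200, type: "scene",   value: "shot change" },
  { timeMs: 9800, type: "speaker", value: "Speaker B" }
];

// Find the first statement of a given type/value at or after a start time,
// e.g. "jump to next instance of current speaker".
function findNext(index, type, value, afterMs) {
  for (var i = 0; i < index.length; i++) {
    var s = index[i];
    if (s.timeMs >= afterMs && s.type === type && s.value === value) {
      return s.timeMs;     // the playback engine would seek to this time code
    }
  }
  return -1;               // not found
}

var seekTo = findNext(index, "speaker", "Speaker B", 2000);   // 9800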
For entity archives, the system provides automatic categorization of live material, automatically re-categorizes multiple archives, makes archives searchable from any production system, enables advanced concept-based retrieval as well as traditional keyword or Boolean methods, automatically aggregates multiple archives, automatically extracts and appends metadata. One technology that is optionally used is high-precision speech recognition and video analysis to actually understand the content of the broadcast stream and locate a specific segment without searching, logging, time coding or creating metadata. Yet another approach directly addresses the problems associated with manual meta-tagging by adding a layer of intelligence and automation to the management of XML by understanding the content and context of either the tags themselves or the associated information. In effect, this removes the need for meta-tags or explicit metadata. Metadata is implicitly (covertly) inferred through the installed layer of intelligence. However, if metadata is required, intuitive user interfaces may be provided to add reassurance and additional information. In situations where there are already large amounts of existing metadata and/or established taxonomies, more intelligent solutions are used to automatically add new content to these schemes and append the appropriate tags. Another option is to automatically integrate disparate metadata schemes and provide a single, unified view of the content with no manual overhead. In a DVD example, the metadata is optionally the subtitles or close caption text that goes along with the video being played back. Using both the video stream and the textual stream an even greater inference of metadata can be derived from the multimedia data. Thus using audio, video, and text simultaneously can improve the overall context and intelligence of the metadata. Video analysis technology can automatically and seamlessly identify the scene changes within a video stream. These scene changes are ordered by time code and using similar pattern matching technology as described above all clips can be "understood". The detected scene changes can also be used as 'chapter points' if the video stream is to be converted to more of a virtual DVD structure for use with time indexes. In addition by using advanced color and shape analysis algorithms it becomes possible to search the asset database for similar video clips, without relying on either metadata or human intervention. These outputs are completely synchronized with all other outputs to the millisecond on a frame-accurate basis. This means that the images are synchronized with the relevant sentences within an automatically generated transcript, the words spoken are synchronized with the relevant speaker, the audio transcript is synchronized with the appropriate scene changes etc. This unsurpassed level of synchronization enables users to simultaneously and inter-changeably navigate through large amounts of audio visual content by image, word, scene, speaker, offset etc., with no manual integration required to facilitate this. In accordance with an embodiment, the system can gather entities and without using metadata assemble a collection including video, audio and text entities. Audio analysis technology can automatically and seamlessly identifies the changes in speakers along with the speech to text translations of the spoken words. The audio recognition may be speaker dependent or speaker independent technology. 
The audio analysis technology may also utilize the context of the previous words to improve the translations. Referring now to FIG. 4, a block diagram is shown illustrating a system diagram of a collection and entity publishing and distribution system connected to the content management system of FIG. 3. Shown are a plurality of storage devices 400, a content distribution and publishing module 402, a content management system 404, a remote control 406, a plurality of input devices 408, a front panel input 410, system resources 412, a system init 414, a system timer 416, a front panel display module 418, and a plurality of presentation devices 420. In the embodiment shown, the plurality of storage devices 400 includes a portable storage medium 422, local storage medium 424, network accessible storage 426 and a persistent memory 428. The portable storage medium 422 can include, for example, DVD's, CD's, floppy discs, zip drives, HD-DVD's, AOD's, Blu-Ray Discs, flash memory, memory sticks, digital cameras and video recorders. The local storage medium 424 can be any storage medium, for example, the local storage medium 424 can be a hard drive in a computer, a hard drive in a set-top box, RAM, ROM, and any other storage medium located at a display device. The network accessible storage 426 is any type of storage medium that is accessible over a network, such as, for example, a peer-to-peer network, the Internet, a LAN, a wireless LAN, a personal area network (PAN), or Universal Plug and Play (UPnP). All of these storage mediums are in the group of computer readable medium. The persistent memory 428 is a non-volatile storage device used for storing user data, state information, access rights keys, etc. and in one embodiment does not store entities or collections. The user data can be on a per user basis if the system permits a differentiation of users or can group the information for all users together. In one embodiment the information may be high game scores, saved games, current game states or other attributes to be saved from one game session to another. In another embodiment with video or DVD playback entities the information may be bookmarks of where in the current video the user was last playing the content, what audio stream was selected, what layout or format the entity was being played along with. The storage information may also include any entity licenses, decryption keys, passwords, or other information required to access the collections or entities.
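For illustration only (the field names are assumptions, not the disclosed storage layout), the per-user state kept in the persistent memory 428 might resemble the following record:

// Hypothetical sketch of per-user state held in persistent memory 428.
var persistentState = {
  user: "user-01",
  bookmarks: [
    { entityId: "ENT-0001", positionMs: 754000, audioStream: 2, layout: "16:9" }
  ],
  gameScores: { "puzzle-game": 12450 },
  accessKeys: { "ENT-0001": "license-token-placeholder" }   // licenses/keys needed for playback
};

// Saving a bookmark when playback stops so the session can resume later.
function saveBookmark(state, entityId, positionMs) {
  state.bookmarks.push({ entityId: entityId, positionMs: positionMs });
}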
The persistent memory stores may include, but not limited to, Bookmarks, Game Scores, DRM & Keys, User preferences and settings, viewing history, and Experience Memory in Non-Volatile Ram (NVRam), which can be stored locally or on a server that can be accessed by the user or device. The local storage can also act as a cache for networked content as well as archives currently saved by the user. The content distribution and publishers module 402 determines what entities and collections are available and who the entities and collections are available to. For example, the establishment (e. g., the owner) that supplies the content (e. g., entities and collections) may only let people who have paid for the content have access to the content. The content management system 404 controls all of the content that is available and has access to all of the local and network accessible storage along with any portable or removable devices currently inserted, however, the content distribution and publishing module 402 will determine if the proper rights exist to actually allow this content to be used or read by others. In another example, on a peer-to-peer network only files that are in a shared folder will be available to people. In another embodiment a database or XML file contains the list of entities, collections, or content available for distributing or publishing along with the associated access rights for each entity, collection, or content. The content distribution publishing module 402 can also control what other people have access to depending upon the version (e.g., a "G" rating for a child who wants information). The content distribution and publishing module 402 enables people to share entities and collections. One example of entity sharing to create a new collection is for a group of parents whose children are on the same soccer team to be able to share content. All of the parents can be on a trusted peer-to-peer network. In this case the parents can set access rights on their files for other parents to use the entities (i.e. digital pictures, videos, games schedules, etc). With this model others can view a collection of the soccer season and automatically go out and get everyone else's entities and view them as a combined collection. Even though different parents may have different display equipment and may not be able to playback all of someone else's entities, the content manager can intelligently select and gracefully degrade the experience as needed to be displayed on the local presentation equipment. The content management system 404 includes a system controller 430, a media subsystem 432, a content services module 434, and an entity decoder module 436. The system controller 430 includes an initiation module 440, a system manager 442, an arbitration manager 444 and an on screen display option module 446. The media subsystem 432 includes a playback runtime engine 450, a rules manager 452, a state module 454, a status module 456, a user preference manager 458, a user passport module 460, a presentation layout manager 462, a graphics compositing module 464, and an audio/video render module 466. The content services module 434 includes a content manager 470, a transaction and playback module 472, a content search engine 474, a content acquisition agent 476, an entity name service module 478, a network content publishing manager 480, an access rights manager 482, and a collection name service module 484. 
The entity decoder module 436 includes a video decoder 486, an audio decoder 488, a text decoder 490, a web browser 492, an animation 494, a sensory module 496, a media filter 498, and a transcoder (or transrating device) 499. In one embodiment the content services module 434 can run in a Java-Virtual Machine (Java-VM) or within a scriptable language on a platform. The content services module 434 can be part of a PC platform and therefore exist within an executable or within a browser that is scriptable. The Content Manager - There may be various types of entities within a collection and the content manager 470 determines which version to playback based on rules and criteria. The rules or criteria can include: a rating (e.g., G, PG, PG-13, R), a display device format (e.g., 16:9, 320x240 screen size), bit rates for transferring streaming content, and input devices available (e.g., it does not make sense to show interactive content that requires a mouse when only a TV remote control is available to the user). As will be described below, the content manager 470 provides graceful degradation of the entities and the playback of the collection. The content manager 470 uses the collection name service module 484 to request new content for playback. The content manager 470 coordinates all of the rules and search criteria used to find new content. In one embodiment, the content manager utilizes rules and search criteria provided by the user through a series of hierarchical rankings of decision criteria to use. In another embodiment, the content manager uses rules such as acquiring the new content at a low cost, where cost is, e.g., either money spent for the content or based on the location that has the highest bandwidth and will take the shortest amount of time to acquire the content. Alternatively, the search criteria is defined by the entity or collection metadata. Additionally, the content manager 470 is able to build up collections from various entities that meet the criteria as well. In one embodiment, the content manager 470 applies a fuzzy logic to determine which entities to include in a collection and how the entities are displayed on the screen as well as the playback order of the entities. The content manager 470 also delivers to the presentation layout manager 462 the information to display the entities on the screen and controls the positioning, layers, overlays, and overall output of the presentation layout manager 462. The content manager 470 contains algorithms to determine the best-fit user experience based on the rules or user criteria provided to the content manager 470. Unlike other similar systems, the content manager 470 can provide a gracefully degraded user experience and handles errors such as incomplete content, smaller screen dimensions than the content was designed for, or handling slower Internet connections for streaming content. The content manager 470 uses system information and collection information to help determine the best playback options for the collection. For example, a collection may be made for a widescreen TV and the content manager 470 will arbitrate how to display the collection on a regular TV because that is the only TV available on the system. The fact that the system for display included a regular TV is part of the system information. The content manager 470 has system information as to the capabilities (screen size etc.) and also has the preferred presentation information in the collection metadata.
Having these two pieces of info, the content manager 470 can make trade-offs and send the presentation layout manager 462 the results to set up a (gracefully) degraded presentation. This is accomplished by internal rules applied to a strongly correlated set of vocabularies for both the system capabilities and the collection metadata. The content manager 470 has internal rules as to how to optimize the content. The content manager 470 for instance can try to prevent errors in the system playback by correlating the system information with the collection metadata and possibly trying to modify the system or the collection to make sure the collection is gracefully degraded. Optionally, the content manager 470 can modify the content before playback. An example of the decisions the content manager can make about acquiring a video stream is when options for two different formats of an entity are found, such as a Windows Media Player format (WMV file) versus a QuickTime format. The content manager may decide between the two streams based on the playback system having only a decoder for one of the formats. If both decoders are supported then the cost to purchase one format may be different from another and therefore the content manager can minimize the cost if there was not a specific format requirement. In this same example, if one format is in widescreen (16:9) and another is full screen (4:3) then a decision can be based on whether the presentation device is widescreen or full screen. Entity ID numbers may also be coded to assist in finding similar content to the original entity desired. In this way, if there are different entity ID numbers for specific versions, such as the director's cut versus the made-for-TV version of a movie, then while the exact entity ID number may be different, the entity IDs can be cataloged in such a way that only the last digit of the entity ID number is different to indicate the variations of the original feature. This helps in finding similar content as well. In another embodiment, the maximum cost willing to be paid for an entity can be known by the content manager as designated by the user or the preferences. The content manager can search locations that meet this cost criteria to purchase the entity. In addition the content manager can enter into an auction to bid for the entity without bidding above the maximum designated cost. The content manager 470 does personalization through the use of agents and customization based on user criteria.
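To make the trade-offs described above concrete, the following is a rough sketch (the names, weights, and fields are assumptions for illustration, not the disclosed algorithm) of how a content manager might score candidate versions of an entity against a system profile, preferring a compatible decoder and matching aspect ratio while respecting a maximum cost:

// Hypothetical sketch: picking among candidate versions of an entity.
var system = { decoders: ["WMV"], display: "16:9", maxCost: 3.00 };

var candidates = [
  { format: "WMV",       aspect: "16:9", cost: 2.50 },
  { format: "QuickTime", aspect: "4:3",  cost: 1.00 }
];

function pickVersion(system, candidates) {
  var best = null;
  for (var i = 0; i < candidates.length; i++) {
    var c = candidates[i];
    if (system.decoders.indexOf(c.format) === -1) { continue; }  // no decoder: incompatible
    if (c.cost > system.maxCost) { continue; }                   // over the designated cost
    // prefer a version whose aspect ratio matches the display, then the cheaper one
    var score = (c.aspect === system.display ? 10 : 0) - c.cost;
    if (best === null || score > best.score) { best = { candidate: c, score: score }; }
  }
  return best && best.candidate;   // may be null if nothing is compatible
}

var chosen = pickVersion(system, candidates);   // the WMV 16:9 version in this example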
In one embodiment, the content manager 470 adds content searchability along with smart playback. In the case of a presentation, a collection defines the presentation. The collection has both static data that defines unchanging things like title numbers and behavioral data that defines the sequence of playback. Hence, the selection of the collection is one level of personalization ("I go out and find a collection that sounds like what I want to see") and a next level of personalization derives how the playback presentation is customized or personalized to the system and current settings in accordance with the behavioral data. Searching for a collection that meets the personal entertainment desire is like using the GOOGLE search engine for the media experience. As GOOGLE provides a multiplicity of hits on a search argument, a request for a media experience (in the form of a collection) can be sought and acquired with the distributed content management system. Content Manager's Content Filter - The content filter is used to provide both the content that the user desires as well as filter out the content that is undesirable. Along these guidelines, when accessing network accessible content the content filter may contain: lists of websites which will be blocked (known as "block lists"); lists of websites which will be allowed (known as "allow lists"); and rules to block or allow access to websites. Based on the user's usage of various sites the content filter can "learn" which list new sites fall into to improve the content filtering. At another level, within a website a content filter can further narrow down the desired material. In the case of a child user, the considerations can include the content within a site such as chat rooms; the language used on the site; the nudity and sexual content of a site; the violence depicted on the site; and other content such as gambling, drugs and alcohol. The Platform for Internet Content Selection (PICS) specification enables labels (metadata) to be associated with Internet content. The Platform for Internet Content Selection (PICS) specification was originally designed to help parents and teachers control what children access on the Internet, but the PICS specification also facilitates other uses for labels, including code signing and privacy. The PICS platform is one on which other rating services and filtering software have been built. One method of implementation of PICS or similar metadata methods is to embed labels in HTML documents using a META tag. With this method, labels can be sent only with HTML documents, not with images, video, or anything else. It may also be cumbersome to insert the labels into every HTML document. Some browsers, notably Microsoft's Internet Explorer versions 3 and 4, will download the root document for a web server and look for a generic label there. For example, if no labels were embedded in the HTML for this web page (they are), Internet Explorer would look for a generic label embedded in the page at http://www.w3.org/ (generic labels can be found there). The following is an example of a way to embed a PICS label in an HTML document:
<head>
<META http-equiv="PICS-Label" content='
(PICS-1.1 "http://www.gcf.org/v2.5"
labels on "1994.11.05T08:15-0500"
until "1995.12.31T23:59-0000"
for "http://w3.org/PICS/Overview.html"
ratings (suds 0.5 density 0 color/hue 1))
'>
</head>
The content associated with the above label is part of the HTML document. This is used for web-pages. The heading is one example of metadata for an HTML page.
The metadata can be used for filtering out scenes that should not be viewed by children. This is but one example.
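As a rough sketch of the block-list/allow-list behavior described above (the list contents, rule form, and function names are illustrative assumptions, not the disclosed filter), the content filter might be driven by data such as:

// Hypothetical sketch of a content filter built from allow/block lists and rules.
var contentFilter = {
  blockList: ["example-gambling-site.test"],
  allowList: ["example-school-site.test"],
  rules: [
    function (site) { return site.rating === "adult" ? "block" : null; }   // rating-based rule
  ]
};

function checkSite(filter, site) {
  if (filter.allowList.indexOf(site.host) !== -1) { return "allow"; }
  if (filter.blockList.indexOf(site.host) !== -1) { return "block"; }
  for (var i = 0; i < filter.rules.length; i++) {
    var verdict = filter.rules[i](site);
    if (verdict) { return verdict; }
  }
  return "allow";   // default when no list or rule matches
}

checkSite(contentFilter, { host: "example-gambling-site.test", rating: "general" });  // "block"

In such a sketch, "learning" which list a new site falls into amounts to appending the site's host to the block list or allow list based on observed usage.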
Regardless of what actions are taken, mechanisms are needed to label content or identify content of a particular type. For any system of labeling or classifying content, it is important to understand who is performing the classification and what criteria the system is using. Classification may be done by content providers, third-party experts, local administrators (usually parents or teachers), survey or vote, or automated tools. Classification schemes may be designed to identify content that is "good for kids", "bad for kids," or both. The content may also be classified on the basis of age suitability or on the basis of specific characteristics or elements of the content. In addition, content that is deemed bad for kids can still be acquired, but the actual entity will be cleaned up for presentation. This can be done by filtering out tagged parts of the movie that are above a designated age limit, for example. Therefore, a movie seen in the theaters with a higher rating can have designations within the movie for parts not acceptable for a television viewing audience, and the same entity can be used for presentation on both devices, but the filtering of the parts is done to make the two versions. This increases the number of entities that can be used and also reduces the need to create two different entities; instead one entity is created that is annotated with markers or in the entity's metadata as to the two different viewable formats. The playback runtime (RT) engine 450 provides the timing and synchronization of the content that is provided by the content manager 470. The content manager 470 determines the overall collection composition and the playback runtime engine 450 controls the playback. The composition of the collection can be in the form of an XML file, a scripting language such as CGI, JVM Code, HTML/Javascript, SMIL, or any other technologies that can be used to control the playback of one or more entities at a time. One example of multiple-entity playback is a DVD-video entity being played back with an alternate audio track and with an alternate subtitle entity. In this manner the synchronization between the various entities is important to maintain the proper lip-sync timing. The content manager 470 is capable of altering existing collections/entities for use with other entities. For example, DVD-Video has a navigational structure for the DVD. The navigational structure contains menus, various titles, PGCs, and chapters, and the content is stitched together with predefined links between the various pieces. The content manager 470 has the ability (assuming the metadata permits modification of an entity/collection) to do navigation command insertion & replacement to change the stitching (flow) of the content to create a new collection or to add additional entities as well. For example, this can be done by creating traps for the playback at various points of the entity. For example, in the case of a DVD collection with entities, the time, title, PGC, chapter, GPRM value, or a menu number can be used to trap and change the playback engine's state machine to an alternate location or to an alternate entity. In stitching together various entities, a structure that uses time codes, such as the traps or DVD chapter breaks (parts of title or PTTs), can be used. The program or script (or behavioral metadata) can look like the following:
Play DVD Title 1 from 0:13:45 to 0:26:00 ...
then Play local PVR file "XYZ.PVR" from 0:2:30 to 0:4:30 ...
then Play DVD Title 1 Chapter 3
While playing this, overlay "IMAGE1.GIF" at 100,100 at alpha X25
Additionally, an event handler can be used during a presentation to react to clicks of buttons (say during the display of the image) and take an action, e.g., pause and play a different video in a window. The set of instructions can reference the collection & entity metadata and will depend on these traps to break apart and re-stitch segments together to create a new presentation. The set of instructions is behavioral metadata about the collection. The content manager uses the behavioral metadata for playback and can modify the behavioral metadata depending upon the system information as described above. Collection Name Service (CNS) - Keywords go into the collection name service (CNS) module 484 and collections and entities are located that have these keywords. The entity name services (ENS) module 478 is able to locate entities for the new content acquisition agent 476. The entity name services module 478 converts keywords to entity IDs and then is able to locate the entity IDs by using the content search engine 474. This distinguishes keyword searches from collection ID searches and entity ID searches. Entity Name Service (ENS) - One of the functions of the entity name services module 478 is mapping entities or collections to the associated metatag descriptions. In one implementation these metatag descriptions may be in XML files. In another implementation this information can be stored in a database. The entity naming service 478 can use an identifier or an identifier engine to determine an identifier for a given entity. The identifier may vary based on the type of entity. In one embodiment, the entity identifier is assigned and structured the way the Dewey Decimal System is for books in libraries. The principle of the entity ID assignments is that entities have defined categories, well-developed hierarchies, and a network of relationships among topics. Basic classes can be organized by disciplines or fields of study. In the Dewey Decimal Classification (DDC) the ten main classes are Computers, information & general reference, Philosophy & psychology, Religion, Social sciences, Language, Science, Technology, Arts & recreation, Literature, and History & geography. Then each class can be divided into 10 divisions, each of the 10 divisions has 10 sections, and so on. The levels near the bottom of the divisions can include different formats and different variations, such as a made-for-TV version (with parts removed so it is viewable by families) versus an original on-screen version versus the director's cut extended version. This will aid the search engines in finding similar content requested by the user. Just as books in a library are arranged under subjects, which means that books in similar fields are physically close to each other on the shelf, so are the entity IDs. If a book is found that meets certain criteria, nearby books can be browsed to find much related subject matter. Since features in an index tree are organized based on their similarity and an index tree has a hierarchical structure, this structure can be used to guide the user's browsing by restricting the selection to certain levels. The structure can also be used to eliminate branches from further selection if these branches are not direct descendants of the current selection. Parts of entities can also be grouped together as well. So not just the entity may have an ID, but a smaller segment of an entity may be indexed further in this system as well.
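A minimal sketch of the keyword-to-entity-ID resolution just described (the ID scheme, table contents, and function names are assumptions for illustration, not the actual classification) might look like the following, with nearby IDs indicating related variations of the same feature:

// Hypothetical sketch: the entity name service maps keywords to entity IDs,
// and IDs that differ only in the last digit indicate variations of one feature.
var entityIndex = {
  "soccer highlights": ["791.430.1", "791.430.2"],
  "penguin documentary": ["598.470.1"]
};

function lookupEntityIds(keywords) {
  var ids = [];
  for (var i = 0; i < keywords.length; i++) {
    var found = entityIndex[keywords[i]];
    if (found) { ids = ids.concat(found); }
  }
  return ids;          // the content search engine then locates these entity IDs
}

lookupEntityIds(["soccer highlights"]);   // ["791.430.1", "791.430.2"]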
Taxonomy also refers to either a hierarchical classification of things, or the principles underlying the classification. Almost anything -- animate objects, inanimate objects, places, and events -- may be classified according to some taxonomic scheme. Mathematically, a taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification - the root node - that applies to all objects. Nodes below this root are more specific classifications that apply to subsets of the total set of classified objects. A version control system of entities can also be utilized. If an updated version of an entity is created, for example in a screenplay a spelling correction is made, then the version should be updated and then released. The content manager 470 may find multiple versions of an entity and then can try and get a newer version or, if one is not available, go and retrieve a previous version to provide content for the request. The version information is part of the entity or collection metadata. Media Identifiers - In one embodiment, an entity may be identified through the use of a media identifier (MediaID). The media identifier may be computed based on the contents of the entity to create a unique ID for that entity. The unique ID will be referred to as an entity ID. The unique identifier can be used to match an entity's identifier and then the entity's associated metadata to the actual entity if the unique identifier and the entity metadata are stored in separate sources. Various permutations of media IDs or serialization may be employed including, but not limited to, a watermark, a hologram, and any other type in substitution for or combination with the Burst Cut Area (BCA) information. Other technologies can be used for entity identification as well, such as an RFID. An RFID may be used in place of the unique identifier or to correlate with the unique identifier for a database lookup. As RFID technology is beginning to be employed for packaged goods, a packaged media can be considered a Collection and be identified by this RFID. These same technologies can also be used to store all of the entity metadata as well. In one embodiment, a three step process can be utilized. First, a media ID is computed for the given Entity. Second, to find the corresponding entity ID the Media ID can be submitted to a separate centralized server, entity naming service, local server, database, or local location or file, to be looked up and retrieved. The final step is that, with the Entity ID, the corresponding metadata can be found through a similar operation to a separate centralized server, entity service, local server, database, or local location or file, to be looked up and retrieved. When new entities are created the entities go through a similar process where the Media ID, Entity ID, and corresponding metadata are submitted to the respective locations for tracking the entities for future use and lookup. This process can be condensed into several variations where the media ID is the same as the entity ID or the two are interchangeable and the lookups can be in a different order. In this case the media ID can be used to look up the associated metadata as well, or both the media ID and entity ID can be used to find the metadata. The metadata may also contain references, filepaths, hyperlinks, etc. back to the original entity such that for a given entity ID or media ID the entity can be found through the locator.
Again this can be through a separate centralized server, entity service, local server, database, or local location or file. Watermarking Digital video data can be copied repeatedly without loss of quality. Therefore, copyright protection of video data is a more important issue in digital video delivery networks than copyright protection was with analog TV broadcast. One method of copyright protection is the addition of a "watermark" to the video signal which carries information about sender and receiver of the delivered video. Therefore, watermarking enables identification and tracing of different copies of video data. Applications are video distribution over the World-Wide Web (WWW), pay-per-view video broadcast, or labeling of video discs and video tapes. In the mentioned applications, the video data is usually stored in compressed format. Thus, the watermark is embedded in the compressed domain. Holograms MPEG-7 addresses many different applications in many different environments, which means that MPEG-7 needs to provide a flexible and extensible framework for describing audiovisual data. Therefore, MPEG-7 does not define a monolithic system for content description but rather a set of methods and tools for the different viewpoints of the description of audiovisual content. Having this in mind, MPEG-7 is designed to take into account all the viewpoints under consideration by other leading standards such as, among others, TV Anytime, Dublin Core, SMPTE Metadata Dictionary, METS and EBU P/Meta. These standardization activities are focused to more specific applications or application domains, whilst MPEG-7 has been developed as generic as possible. MPEG-7 uses also XML as the language of choice for the textual representation of content description, as XML Schema has been the base for the DDL (Description Definition Language) that is used for the syntactic definition of MPEG-7 Description Tools and for allowing extensibility of Description Tools (either new MPEG-7 ones or application specific). Considering the popularity of XML, usage of XML will facilitate interoperability with other metadata standards in the future. Content Search Engine The content search engine 474 searches various levels for content, for example, local storage, removable storage, trusted peer network, and general Internet access. Many different types of searching and search engines may be used. There are at least three elements to search engines that can be important for helping people to find entities and create new collections: information discovery & the database, the user search, and the presentation and ranking of results.
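Before turning to crawling web search engines, the following is a minimal sketch (the level names, ordering, and callback are assumptions for illustration, not the disclosed search procedure) of how the content search engine 474 might query its storage levels in order of increasing cost:

// Hypothetical sketch: search storage levels in order of increasing cost/latency.
var searchLevels = ["local storage", "removable storage", "trusted peer network", "Internet"];

function searchForEntity(entityId, queryLevel) {
  // queryLevel(level, entityId) is assumed to return a location string or null
  for (var i = 0; i < searchLevels.length; i++) {
    var location = queryLevel(searchLevels[i], entityId);
    if (location) {
      return { level: searchLevels[i], location: location };   // first (cheapest) hit wins
    }
  }
  return null;    // nothing found at any level
}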
Crawling search engines are those that use automated programs, often referred to as "spiders" or "crawlers", to gather information from the Internet. Most crawling search engines consist of five main parts: Crawler: a specialized automated program that follows links found on web pages, and directs the spider by finding new sites for the spider to visit; Spider: an automatic browser-like program that downloads documents found on the web by the crawler; Indexer: a program that "reads" the pages that are downloaded by spiders. This does most of the work deciding what a web site is about; Database (the "index"): simply storage of the pages downloaded and processed; and Results engine: generates search results out of the database, according to a submitted query. There can be some minor variations to this. For instance, ASK JEEVES (www.ask.co.uk) uses a "natural language query processor", which allows a user to enter a question in plain language. The query processor then analyses the submitted question, decides what the question means, and "translates" the question into a query that the results engine will understand. This happens very quickly, and out of sight to users of ASK JEEVES, so it seems as though the computer is able to understand English. Spiders and crawlers are often referred to as "robots", especially in official documents like the robots exclusion standard. Crawler: When a spider downloads pages, the spider is on the lookout for links. The links are easy for the spider to spot, because the links always look the same. The crawler then decides where the spider should go next, based on the links, and the crawler's existing list of URLs. Often, any new links the spider finds when revisiting a site are added to the spider's list. When a URL is added to a Search Engine, it is the crawler that is being requested to visit the site. Spider: A spider is an automated program that downloads the documents that the crawler sends the spider to. The spider works very much as a browser does when the browser connects to a website and downloads pages. Most spiders aren't interested in images though, and don't ask for them to be sent. A user can see what the spiders see by going to a web page, clicking the right-hand button on a mouse, and selecting "view source" in the menu that appears. Indexer: This is the part of the system that decides what a page is about. The indexer reads the words in the web site. Some are thrown away, as the words are so common (and, it, the, etc.). The indexer will also examine the HTML code which makes up a site looking for other clues as to which words are considered to be important. Words in bold, italic or header tags will be given more weight. This is also where the metadata (the keywords and description tags) for a site will be analyzed. Database: The database is where the information gathered by the indexer is stored. GOOGLE claims to have the largest database, with over 3 billion documents; even assuming that the average size of each document is only a few tens of kilobytes, this can easily run to many terabytes of data (1 terabyte = 1,000 gigabytes = 1 million megabytes), which will obviously require vast amounts of storage. Results engine: The results engine is in many ways the most important part of any search engine. The results engine is the customer-facing portion of a search engine, and as such is the focus of most optimization efforts. The results engine's function is to return the pages most relevant to a user's query.
When a user types in a keyword or phrase, the results engine decides which pages are most likely to be useful to the user. The method the results engine uses to decide which pages are most likely to be useful to the user is called the results engine's algorithm. Search engine optimization (SEO) experts discuss "algos" and "breaking the algo" for a particular search engine. This is because when a user knows what criteria are being used (the algorithm) a web page can be developed to take advantage of the algorithm. The search engine markets, and the search engines themselves, have undergone huge changes recently, partially due to advances in technology, and partially due to the evolving economic circumstances in the technology sector. However, most are still using a mixture of the following criteria, with different search engines giving more or less weight to the following various criteria: Title: Is the keyword found in the title tag?; Domain/URL: Is the keyword found in the address of the document?; Page text: Is the keyword being emphasized in some way, such as being made bold or italic? How close to the top of the text does the keyword appear?; Keyword (search term) density: How many times does the keyword occur in the text? The ratio of keywords to the total number of words is called keyword density. While having a high ratio indicates that a word is important, repeating a word or phrase many times, solely to improve the standing with the search engines is frowned on, as repeating a word or phrase many times is considered an attempt to fraudulently manipulate the results pages. This often leads to penalties, including a ban in extreme cases;
Meta information: These tags (keywords and description) are hidden in the head of the page, and not visible on the page while browsing. Due to a long history of abuse, meta information is no longer as important as it used to be. Indeed, some search engines completely ignore the keywords tag. However, many search engines do still index keywords tags, and the keyword tags are usually worth including; Outbound links: Where do the links from the page go to, and what words are used to describe the linked-to page?; Inbound links: Where do the links to the page come from, and what words are used to describe the page? This is what is meant by "off the page" criteria, because the links are not under the direct control of the page author; and Intrasite links: How are the pages in the site linked together? A page that is pointed to by many other separately developed pages is more likely to be important. Internal links are not usually as valuable as links from separately developed pages, as the internal links are controlled by the site owner, so more potential for abuse exists. As stated above, there are some minor variations as each search engine has its own approach, and its own technology, but each of the search engines have more similarities than differences. Additionally, note that this applies only to crawling search engines that use automated programs to gather information. Directories such as Yahoo! or the Open Directory Project work on a completely different principle, as these directories are human reviewed. Once the metadata is present or inferred (as described above with reference to FIG. 3) the metadata can be searched and utilized. Keyword or metadata searches can consist of various levels of complexity and have different shortcomings associated with each. In the "no context" method a user enters a keyword or term into a search box, for example "penguin". The search engine then searches for any entities containing the word "penguin." The fundamental problem is that the search engine is simply looking for that word, regardless of how the word is used or the context in which the user requires the information, i.e., is the user looking for a penguin bird, a publisher or a chocolate brand? Moreover, this approach requires the relevant word to be present and for the content to have been tagged with the word. Any new subjects, names or events will not be present in the system. Manual keyword searches do nothing more complex than look for the occurrence of the searched word or term. These processes require a significant amount of hardware resources, which increases system overheads. In addition keyword search systems require a significant amount of manual intervention so that words and the relationship between similar words can be identified (Penguin = flightless birds = fish eating birds). With no dynamic intelligence, keyword search engines cannot learn through use, nor do keyword search engines have any understanding of queries on specific words. For example when the word "penguin" is entered, keyword search engines cannot learn that the penguin is a flightless black and white bird that eats fish. Significant user refinement is required to boost accuracy. Keyword search engines rely heavily on the expertise of the end user to create queries in such a way that the results are most accurate.
This requires complex and specific Boolean syntaxes, which the ordinary end-user would not be able to complete, e.g., to get an accurate result for penguins, an end user would have to enter the query as follows: "Penguin AND (NOT (Chocolate OR Clothing OR Publishing)) AND Bird". In accordance with one embodiment, a more complex matching technology avoids these problems by matching concepts instead of simple keywords. The search takes into account the context in which the search terms appear, thus excluding many inaccurate hits while including assets that may not necessarily contain the keywords, but do contain their concept. This also allows for new words or phrases to be immediately identified and matched with similar ones, based upon the common ideas the words contain as opposed to being constrained by the presence or absence of an individual word; this equally applies to misspelled words. In addition to the concept matching technology, the search criteria may accept standard Boolean text queries or any combination of Boolean or concept queries. Additionally, a searching algorithm can be used that has a cost associated with where content is received from. This will be described further with reference to FIG. 27. Transaction and Playback History (Logging) - The transaction and playback module 472 uses the local storage facilities to collect and maintain information about access rights transactions and the acquisition of content (in the form of collections and entities). Additionally, this component tracks the history of playback experiences (presentations of content). In one embodiment the history is built by tracking each individual user (denoted by a secure identifier through a login process) and their playback of content from any and all sources. The transactions performed by the individual user are logged and associated with the user, thereby establishing the content rights of that user. In another embodiment the history of playback is associated with the specific collection of content entities that were played back. Additionally, all transactions related to the collection of content entities (acquisition, access rights, usage counters, etc.) are logged. These may be logged in the dynamic metadata of the collection, thus preserving a history of use.
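As a purely illustrative sketch (the record fields and function names are assumptions, not the disclosed log format), a playback or transaction history entry appended to a collection's dynamic metadata might look like:

// Hypothetical sketch of a history entry logged by the transaction and playback module 472.
var historyEntry = {
  user: "user-01",               // secure identifier established at login
  collectionId: "COL-2042",
  action: "playback",            // could also be "acquisition" or "rights-transaction"
  startedAt: "2004-06-01T20:15:00",
  durationMs: 5400000
};

function appendHistory(dynamicMetadata, entry) {
  dynamicMetadata.history = dynamicMetadata.history || [];
  dynamicMetadata.history.push(entry);    // preserves a history of use with the collection
}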
New Content Acquisition Agent (NCAA) - The new content acquisition agent 476 acts as a broker on behalf of a specific user to acquire new content collections and the associated access rights for those collections. This can involve an e-commerce transaction. The content acquisition agent 476 uses the content search engine 474 and a content filter to locate and identify the content collection desired and negotiate the access rights through the access rights manager 482. In one embodiment, the content filter is not part of the playback engine 450 but instead part of the content manager 470 and the new content acquisition agent 476. The new content acquisition agent uses the metadata associated with the entities in helping with acquisition. Access Rights Manager - The access rights manager 482 acts as a file system protection system and protects entities and collections from being accessed by different users or even from being published or distributed. This ensures that the security of the entities and collections is maintained. The access rights may be different for individual parts of an entity or a collection or for the entire entity or collection. An example of this is a movie that has some adult scenes. The adult scenes may have different access rights than the rest of the movie. In one embodiment, the access rights manager 482 contains digital rights management (DRM) technology for files obtained over a network accessible storage device. In most instances, DRM is a system that encrypts digital media content and limits access to only those people who have acquired a proper license to play the content. That is, DRM is a technology that enables the secure distribution, promotion, and sale of digital media content on the Internet. The rights to a file may be for a given period of time. This right specifies the length of time (in hours) a license is valid after the first time the license is stored on the consumer's device. For example, the owner of content can set a license to expire 72 hours after the content is stored. Additionally, the rights to a file may be for a given number of usage counts. For example, each time the file is accessed the allowed usage count is decremented, and when the count reaches zero the file is no longer usable. The rights to a file may also limit redistribution or transferring to a portable device. This right specifies whether the user can transfer the content from the device to a portable device for playback. A related right specifies how many times the user can transfer the content to such portable devices. The access rights manager 482 may be required to obtain or validate licenses for entities before allowing playback each time, or may internally track the license expiration and usage constraints. In another embodiment, owning a particular set of entities or collections can allow access rights to additional entities or collections. An example of this is if a user owns a DVD disc then the user can gain access to additional features on-line. A trusted establishment can charge customers for entities. This allows for a user-billing model for paying for content. This can be, e.g., on a per use basis or a purchase for unlimited usages. The access rights manager can also register new content. For example, content registration can be used for new discs or newly downloaded content. The access rights manager 482 may use DRM to play a file or the access rights manager 482 may have to get rights to the file to even read the file in the first place.
This is similar to hard disk rights. For streaming files, the right to the content is established before downloading the content. Network Content Publishing Manager - The network content publishing manager 480 provides the publishing service to individual users wishing to publish their own collections or entities. The network content publishing manager 480 negotiates with the new content acquisition agent 476 to acquire the collection, ensuring that all the associated access rights are procured as well. The user can then provide unique dynamic metadata extensions or replacements to publish their unique playback presentation of the specific collection. One embodiment is as simple as a personal home video being published for sharing with family where the individual creates all the metadata. Another embodiment is a very specific scene medley of a recorded TV show where the behavioral metadata defines the specific scenes that the user wishes to publish and share with friends. In one embodiment the Publishing Manager may consist of a service that listens to a particular network port on the device that is connected to the network. Requests to this network port can retrieve an XML file that contains the published entities and collections and the associated metadata. This function is similar to the Simple Object Access Protocol (SOAP). SOAP combines the proven Web technology of HTTP with the flexibility and extensibility of XML. SOAP is based on a request/response system and supports interoperation between COM, CORBA, Perl, Tcl, the Java-language, C, Python, or PHP programs running anywhere on the Internet. SOAP is designed more for the interoperability across platforms, but using the same principles SOAP can be extended to expose and publish available entity and collection resources. A system of this nature allows peer-to-peer interoperability of exchanging entities. Content acquisition agents can search a defined set of host machines for available entities. In another embodiment the Publishing Manager is a service that accepts search requests and returns the search results back as the response. In this system the agents contact the publishing manager, which searches its entities and collections and returns the results in a given format (e.g., XML, text, hyperlinks to the given entities found, etc.). In this model the search is distributed among the peer server or client computers and a large centralized location is not required. The search can be further expanded or reduced based on the requester's access rights to content, which is something a public search engine (such as YAHOO or GOOGLE) cannot offer today. In another embodiment the Content Directory Service in UPnP Devices can be used by the Publishing Manager. The Content Directory Service additionally provides a lookup/storage service that allows clients (e.g., UI devices) to locate (and possibly store) individual objects (e.g., songs, movies, pictures, etc.) that the (server) device is capable of providing. For example, this service can be used to enumerate a list of songs stored on an MP3 player, a list of still-images comprising various slide-shows, a list of movies stored in a DVD Jukebox, a list of TV shows currently being broadcast (a.k.a. an EPG), a list of songs stored in a CD Jukebox, a list of programs stored on a PVR (Personal Video Recorder) device, etc. Nearly any type of content can be enumerated via this Content Directory service. For those devices that contain multiple types of content (e.g.
MP3, MPEG2, JPEG, etc), a single instance of the Content Directory Service can be used to enumerate all objects, regardless of their type. In addition the services allow search capabilities. This action allows the caller to search the content directory for objects that match some search criteria. The search criteria are specified as a query string operating on properties with comparison and logical operators. Media Subsystem The playback runtime engine 450 is responsible for maintaining the synchronization, timing, ordering and transitions of the various entities. The playback runtime engine 450 will process any scripts (e. g., behavioral metadata) of the collections and has the overall control of the entities. The playback runtime engine 450 accepts user input to provide the various playback functions including but not limited to, play, fast-forward, rewind, pause, stop, slow, skip forward, skip backward, and eject. The synchronization can be done using events and an event manager, such as described herein with reference to FIG. 11. The playback runtime engine 450 can be implemented as a state machine, a virtual machine, or even within a browser. The playback runtime engine 450 can be hard coded for specific functions in a system with fixed input devices and functionality or programmable using various object oriented languages to scripting languages. There are numerous markup languages that can be used in this system as well. A web browser may support various markup languages including, but not limited to, HTML, XHTML, MSHTML, MHP, etc. While HTML is referenced throughout this document HTML is replaced by any markup language or alternative meta-language or script language having the same functionality in different embodiments. In addition the presentation device may be a presentation rendering engine that supports virtual machines, scripts, or executable code, for example, Java, Java Virtual Machine (JVM), MHP, PHP, or some other equivalent engine. The Presentation Layout Manager The presentation layout manager 462 determines the effect of the input devices 408. For example, when multiple windows are on the screen the position of the cursor is as important as to which window will receive the input devices action. The system controller 430 provides on-screen menus or simply processes commands from the input devices to control the playback and content processing of the system. As the system controller 430 presents these on-screen menus, the system controller 430 also requests context-sensitive overlaid menus from a menu generator based upon metadata so that these menus provide more personalized information and choices to the user. This feature will be discussed below in greater detail with reference to FIG. 11. In addition the system controller 430 manages other system resources, such as timers, and interfaces to other processors. The presentation layout manager not only controls the positioning of the various input sources but also can control the layering and blending/transparency of the various layers. DVD Navigation Command Insertion & Replacement The DVD navigational structure can be controlled by commands that are similar to machine assembler language directives such as: Flow control (GOTO, LINK, JUMP, etc.); Register data operations (LOAD, MOVE, SWAP, etc.); Logical operations (AND, OR, XOR, etc.); Math operations (ADD, SUB, MULT, DIV, MOD, RAND, etc.); and Comparison operations (EQ, NE, GT, GTE, LT, LTE, etc.). 
DVD Navigation Command Insertion & Replacement
The DVD navigational structure can be controlled by commands that are similar to machine assembler language directives, such as: Flow control (GOTO, LINK, JUMP, etc.); Register data operations (LOAD, MOVE, SWAP, etc.); Logical operations (AND, OR, XOR, etc.); Math operations (ADD, SUB, MULT, DIV, MOD, RAND, etc.); and Comparison operations (EQ, NE, GT, GTE, LT, LTE, etc.). These commands are authored into the DVD-Video as pre, post and cell commands in program chains (PGCs). Each PGC can optionally begin with a set of pre-commands, followed by cells which can each have one optional command, followed by an optional set of post-commands. In total, a PGC cannot have more than 128 commands. The commands are stored at the beginning of the IFO file, can be referenced by number, and can be reused. Cell commands are executed after the cell is presented. Normally in an InterActual title, Annex J directives such as TitlePlay(8), which tells the navigator to jump to title #8, or AudioStream(3), which tells the navigator to set the audio stream to #3, are sent after these embedded navigation commands have been loaded from the IFO file for the Navigator to reference, and are executed in addition to the navigation command processing. In one embodiment, new navigation commands can be inserted, or navigation commands can replace existing navigation commands in the embedded video stream. This is done by altering the IFO file. The commands are at a lower level of functionality than the Annex J commands that are executed via JavaScript. The IFO file has all the navigation information and is hard coded. For graceful degradation the IFO file is intercepted and intelligently modified. In one embodiment, the playback runtime engine 1550 executes the replacement or insertion action. One way is for the playback runtime engine 450 to replace the navigation commands in the IFO file before the IFO file is loaded and processed by the DVD Navigator, by using an interim staging area (DRAM or an L2 cache of the file system) or by intercepting the file system directives upon an IFO load. Alternatively, the playback runtime engine 450 can replace the navigation commands in the system memory of the DVD Navigator after the navigation commands have been loaded from the IFO file. The former allows one methodology for many systems/navigators, where the file system memory is managed by the media services code. The latter requires new interfaces to the DVD Navigator allowing the table containing the navigation commands (located within the Navigator's working memory) to be patched or replaced/inserted, somewhat like a program that patches assembler code in the field (a common practice for delivering fixes to deployed software by editing hexadecimal data in the object files and forcing the object files to be reloaded).
Case I - Browser modifies the Commands individually
This case is one where specific navigation commands are modified by a JavaScript command. In this case, the command is constructed in the following fashion:
SetNavCmd(title, PGCNumber, newCmdString, locationOffset);
where, for the specified title (e.g., as specified by "t" in VTS_0t_0), the newCmdString is the hexadecimal command string, and the locationOffset is the hexadecimal offset in the PGC command table for the PGC referenced by the PGCNumber (e.g., as specified by "n" in VTS_PGCn).
Case II - Media Subsystem modifies the Command Table
This case is where the media subsystem acquires the full set of modifications to the navigation command table and applies the modifications similar to a software patch. In one embodiment, the full set of modifications is acquired as follows:
1. By locating the modifications in a specific ROM directory (this enables the DVD-Video to be burned without re-authoring the DVD-Video, by simply placing the "patch" on the ROM).
2. By receiving the modifications from the server after a disc identification exchange that occurs during the startup process. This is where the web server provides the modifications to media services upon verifying the DVD-Video disc (title).
3. By receiving the modifications via a JavaScript command, but as an entire command table, such as ApplyNavCmdTable(title, PGCNumber, newCmdTable);
Additionally, for the above Case I, a command in the media subsystem (exposed to JavaScript) can be employed by the media services to modify individual navigation commands.
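As a non-authoritative illustration of the two cases above, the following JavaScript sketch invokes the SetNavCmd and ApplyNavCmdTable commands described in this document. The title number, PGC number, hexadecimal command strings, offset, and the shape of the command table are placeholder values chosen only for illustration; they are not taken from any real disc.
// Case I: replace a single navigation command in title 1, PGC 3, at the given
// offset in that PGC's command table (all values are placeholders).
SetNavCmd(1, 3, "71A0000400000000", 0x0010);
// Case II: apply an entire replacement command table to the same PGC, as might be
// delivered from a ROM directory, a server exchange after disc identification, or script.
var newCmdTable = [
  "71A0000400000000",   // placeholder command string
  "300200050000000A"    // placeholder command string
];
ApplyNavCmdTable(1, 3, newCmdTable);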
Referring to FIG. 5, a diagram is shown illustrating a media player according to one embodiment. Shown are a media storage device 500, a media player 502, an output 504, a presentation device 506, a browser 508, an ITX API 510, a media services module 512, and a decoder module 514. The ITX API 510 is a programming interface allowing a JavaScript/HTML application to control the playback of DVD video, creating new interactive applications which are distinctly different from watching the feature movie in a linear fashion. The JavaScript is interpreted line-by-line and each ITX instruction is sent to the media subsystem in pseudo real-time. This can create certain timing issues and system latency that adversely affect the media playback. One example of the programming interface is discussed in greater detail with reference to FIGS. 6 and 7. Referring to FIG. 6, a diagram is shown illustrating a media player according to another embodiment. Shown are a media storage device 600, a media player 602, an output 604, a presentation device 606, an on-screen display 608, a media services module 610, a content services module 612, a behavioral metadata component 614, and a decoder module 616. The media player 602 includes the on-screen display 608, the media services module 610 and the decoder module 616. The media services module 610 includes the content services module 612 and the behavioral metadata component 614. The media services module 610 controls the presentation of playback in a declarative fashion that can be fully prepared before playback of an entity or collection. This process involves queuing up files in a playlist for playback on the media player 602 through various entity decoders. Collection metadata is used by the content manager (shown in FIG. 4) to create the playlist, and the content manager will also manage the sequencing when multiple entity decoders are required. In one example, the media services module 610 gathers (i.e., locates in a local memory, or downloads from a remote content source if not locally stored) the necessary entities for a requested collection and fully prepares the collection for playback based upon, e.g., the system requirements (i.e., capabilities) and the properties of the collection (defined by the entity metadata). An example of the media services module 610 fully preparing the collection for playback is described below with reference to the W3C SMIL timing model. The W3C standard can be found at http://www.w3.org/TR/smil20/smil-timing.html. SMIL Timing defines elements and attributes to coordinate and synchronize the presentation of media over time. The term media covers a broad range, including discrete media types such as still images, text, and vector graphics, as well as continuous media types that are intrinsically time-based, such as video, audio and animation. Three synchronization elements support common timing use-cases:
- The <seq> element plays the child elements one after another in a sequence.
- The <excl> element plays one child at a time, but does not impose any order.
- The <par> element plays child elements as a group (allowing "parallel" playback).
These elements are referred to as time containers. The time containers group their contained children together into coordinated timelines. SMIL Timing also provides attributes that can be used to specify an element's timing behavior. Elements have a begin and a simple duration. The begin can be specified in various ways - for example, an element can begin at a given time, or based upon when another element begins, or when some event (such as a mouse click) happens. The simple duration defines the basic presentation duration of an element. Elements can be defined to repeat the simple duration, a number of times or for an amount of time. The simple duration and any effects of repeat are combined to define the active duration. When an element's active duration has ended, the element can either be removed from the presentation or frozen (held in its final state), e.g. to fill any gaps in the presentation. An element becomes active when the element begins its active duration, and becomes inactive when the element ends its active duration. Within the active duration, the element is active, and outside the active duration, the element is inactive.
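Purely as an illustration of this timing model, the sketch below restates a small seq/par structure as a hypothetical JavaScript declaration rather than SMIL's XML syntax; the object shape, field names, and entity names are assumptions and are not part of the SMIL specification or of any defined media services interface.
// Illustrative only: a "seq" container plays its children in order, while a nested
// "par" container plays its children as a group; begin/dur/fill/repeatCount mirror
// the timing attributes discussed above.
const presentation = {
  container: "seq",
  children: [
    { entity: "introClip", begin: "0s", dur: "10s" },
    { container: "par",
      children: [
        { entity: "featureScene", begin: "0s", dur: "120s" },
        { entity: "commentaryAudio", begin: "5s", dur: "115s", fill: "freeze" }
      ] },
    { entity: "creditsSlide", begin: "0s", dur: "8s", repeatCount: 2 }
  ]
};
A playback engine that fully prepares such a declaration before playback can resolve all begin times, durations and transitions up front, which is the reliability advantage over issuing commands in pseudo real-time as in FIG. 5.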
In another example, a timeline is constructed from behavioral metadata and is used by the playback engine. The behavioral metadata attaches entities to the timeline and then, using the timeline like a macro of media service commands, executes them to generate the presentation. A full set of declarations can be given to the media subsystem such that media playback can be set up completely before the start of playback. This allows for a simpler authoring metaphor and also for a more reliable playback experience compared to the system shown in FIG. 5. The actions associated with each declaration can be a subset (with some possible additions) of the ITX commands provided to JavaScript. In JavaScript, methods are actions applied to particular objects, that is, things that the objects can do. For example, document.open("index.htm") or document.write("text here"), where open() and write() are methods and document is an object. Events associate an object with an action. JavaScript uses commands called event handlers to program events. Event handlers place the string "on" before the event. For example, the onMouseover event handler allows the page user to change an image, and the onSubmit event handler can send a form. Page user actions, such as a mouse click handled by onClick, typically trigger events.
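The following fragment is a hypothetical illustration of the method/event pattern just described, in which an event handler (named with the "on" prefix) binds a user action to a media action. The skipToScene function, the MediaServices object, and the button element identifier are assumptions made for this sketch and are not part of any defined ITX or media services API.
// A method that asks the media subsystem to start playback at a given scene
// (MediaServices.play is an assumed call, shown only for illustration).
function skipToScene(sceneNumber) {
  MediaServices.play({ collection: "myMedley", startAt: sceneNumber });
}
// An on-screen button wired to the handler; the user's click triggers the event.
var button = document.getElementById("nextSceneButton");
button.onclick = function () { skipToScene(4); };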
Claims (99)
1. A method comprising: receiving a request for content; searching for a plurality of entities in response to the received request, the plurality of entities each having entity metadata associated therewith; and creating a collection, the collection comprising the plurality of entities and collection metadata.
2. The method of claim 1 further comprising: locating the plurality of entities; analyzing the entity metadata associated with each of the plurality of entities; and downloading only the entities that meet a set of criteria.
3. The method of claim 2 wherein the set of criteria include at least one of a system criteria and a request criteria.
4. A data structure for storing data embodied on a computer readable medium comprising: a plurality of entities; entity metadata associated with each of the plurality of entities; and a collection containing each of the plurality of entities, the collection comprising collection metadata for playback of the plurality of entities.
5. The data structure of claim 4 wherein the computer readable medium is a portable storage medium.
6. The data structure of claim 4 wherein the computer readable medium is a plurality of storage devices.
7. The data structure of claim 6 wherein the plurality of storage devices are local storage devices.
8. The data structure of claim 6 wherein at least one of the plurality of storage devices is a remote storage device.
9. The data structure of claim 4 wherein the computer readable medium is a local storage medium.
10. The data structure of claim 4 wherein the computer readable medium is a remote storage medium.
11. A method comprising: receiving a request for content; creating a collection comprising a plurality of entities meant for display with a first system and at least one entity meant for display on a second system; and outputting the collection comprising the plurality of entities meant for display on the first system and the at least one entity meant for display on the second system to the first system.
12. The method of claim 11 wherein the collection further comprises collection metadata.
13. The method of claim 12 further comprising using presentation rules that are based at least upon the collection metadata.
14. The method of claim 11 wherein the plurality of entities meant for display with the first system and the at least one entity meant for display on the second system each have entity metadata.
15. The method of claim 14 further comprising using presentation rules that are based at least upon the entity metadata.
16. The method of claim 11 further comprising using presentation rules that are based at least upon a set of system criteria.
17. A method for searching for content comprising the steps of: receiving at least one search parameter; translating the search parameter into a media identifier; and locating the content associated with the media identifier.
18. The method of claim 17 wherein the content comprises an entity; wherein the media identifier is an entity identifier.
19. The method of claim 17 wherein the content comprises a collection; wherein the media identifier is a collection identifier.
20. The method of claim 17 further comprising creating a set of presentation rules for the content.
21. The method of claim 20 further comprising playing back the content on a presentation device.
22. The method of claim 20 further comprising acquiring access rights to the content.
23. The method of claim 17 wherein the content is a collection comprising a plurality of entities, the method further comprising: determining one of the plurality of entities can not be viewed; and locating an entity for replacing the one of the plurality of entities that can not be viewed.
24. The method of claim 23 wherein the step of determining one of the plurality of entities can not be viewed comprises determining that there are not access rights for the entity.
25. The method of claim 23 wherein the step of determining one of the plurality of entities can not be viewed comprises determining that one of a set of presentation rules is not compatible with the entity.
26. The method of claim 25 wherein the presentation rules are based upon system information.
27. The method of claim 25 wherein the presentation rules are based upon a user request.
28. The method of claim 25 wherein the presentation rules are based upon a user profile.
29. A system for locating content comprising: a playback runtime engine for constructing a request from a set of search parameters;
a collection name service for translating the request into a collection identifier; and a content search engine for searching for content associated with the collection identifier.
30. The system of claim 29 further comprising an access rights manager for determining if access rights are needed for the content.
31. The system of claim 29 further comprising a presentation layout manager for creating rules for presentation.
32. The method of claim 31 wherein the presentation rules are based upon system information.
33. The method of claim 31 wherein the presentation rules are based upon a user request.
34. The method of claim 31 wherein the presentation rules are based upon a user profile.
35. The system of claim 31 wherein the presentation layout manager additionally sets up a playback subsystem according to the presentation rules.
36. The system of claim 31 wherein the presentation layout manager provides a collection identifier to a playback runtime engine.
37. The system of claim 36 wherein the playback runtime engine outputs the content to a display device.
38. A method comprising:
receiving a request for content; searching for a plurality of entities in response to the received request, the plurality of entities each having entity metadata associated therewith; creating a first group of entities that meet the received request, each entity within the first group of entities having entity metadata associated therewith; comparing the first group of entities that meet the received request or the associated entity metadata to a user profile; and creating a collection comprising at least one entity from the first group of entities.
39. The method of claim 38 wherein the collection further comprises collection metadata.
40. The system of claim 38 further comprising an access rights manager for determining if access rights are needed for any of the first group of entities.
41. The system of claim 38 further comprising a presentation layout manager for creating rules for presentation.
42. The method of claim 41 wherein the presentation rules are based upon system information.
43. The method of claim 41 wherein the presentation rules are based upon the received request.
44. The method of claim 41 wherein the presentation rules are based upon the user profile.
45. The system of claim 41 wherein the presentation layout manager additionally sets up a playback subsystem according to the presentation rules.
46. The system of claim 41 wherein the presentation layout manager provides a collection identifier to a playback runtime engine.
47. The system of claim 46 wherein the playback runtime engine outputs the content to a display device.
48. A system comprising: a plurality of devices connected via a network; a plurality of entities located on at least one of the plurality of devices; and a content management system located on at least one of the plurality of devices for creating a collection using at least two of the plurality of entities.
49. The system of claim 48 wherein the entities are public domain entities.
50. The system of claim 48 wherein the entities are shared within a LAN, a trusted network, a WAN, or an Internet.
51. The system of claim 48 wherein the entities require access privileges.
52. The system of claim 51 wherein the access privileges include a password.
53. The system of claim 51 wherein the access privileges include a key.
54. The system of claim 48 further comprising a content search engine for locating entities.
55. The system of claim 54 wherein the content search engine searches for entities based upon a cost of retrieving the entities.
56. The system of claim 55 wherein the cost of retrieving the entities includes determining a trust level where the entities are stored.
57. A method of modifying a collection comprising: analyzing metadata associated with the collection; and removing at least one entity from the collection based upon a set of presentation rules.
58. The method of claim 57 further comprising adding at least one new entity to the collection, wherein the added entity takes the place of the removed entity.
59. The method of claim 57 wherein the presentation rules include system information.
60. The method of claim 57 wherein the presentation rules include a user profile.
61. The method of claim 57 wherein the presentation rules are based upon a user request.
62. A method of displaying content comprising: providing a request to a content manager, the request including a set of criteria;
searching for a collection that at least partially fulfills the request, the collection including a plurality of entities; determining which of the plurality of entities within the collection do not meet the set of criteria; and searching for a replacement entity to replace one of the plurality of entities within the collection that do not meet the set of criteria.
63. The method of claim 62 wherein the set of criteria include system information.
64. The method of claim 62 wherein the set of criteria include a user profile.
65. The method of claim 62 further comprising determining if access rights exist for each of the plurality of entities within the collection.
66. The method of claim 65 further comprising replacing one of the plurality of entities for which there are not access rights with a second replacement entity.
67. The method of claim 62 further comprising: replacing one of the plurality of entities within the collection that do not meet the set of criteria with the replacement entity; and modifying a set of collection metadata in response to the replacing step.
68. A method of modifying an entity, the entity having entity metadata associated therewith, comprising the steps of: comparing the entity or the entity metadata with a set of criteria;
determining a portion of the entity that does not meet the set of criteria; and removing the portion of the entity that does not meet the set of criteria.
69. The method of claim 68 further comprising modifying the entity metadata.
70. The method of claim 68 further comprising adding a portion of a second entity to replace the portion of the entity that was removed.
71. The method of claim 70 further comprising modifying the entity metadata.
72. The method of claim 68 wherein the set of criteria include a user profile.
73. The method of claim 68 wherein the set of criteria include system information.
74. The method of claim 68 wherein the entity is a video entity.
75. The method of claim 68 wherein the entity is an audio entity.
76. The method of claim 68 wherein the entity is a graphics entity.
77. A collection embodied on a computer readable medium comprising: a digital video entity;
an audio entity, for providing an associated audio for the digital video entity; a menu entity, for providing points within the digital video entity; and collection metadata for defining the playback of the digital video entity, the audio entity, and the menu entity.
78. The collection embodied on a computer readable medium of claim 77 wherein the points correspond to titles or parts of titles within a digital versatile disk.
79. The collection embodied on a computer readable medium of claim 77 further comprising an entity for providing subtitles corresponding to the audio entity.
80. The collection embodied on a computer readable medium of claim 77 wherein the computer readable medium is a portable storage medium.
81. The collection embodied on a computer readable medium of claim 77 wherein the computer readable medium is a plurality of storage devices.
82. The collection embodied on a computer readable medium of claim 81 wherein the plurality of storage devices are local storage devices.
83. The collection embodied on a computer readable medium of claim 81 wherein at least one of the plurality of storage devices is a remote storage device.
84. The collection embodied on a computer readable medium of claim 77 wherein the computer readable medium is a local storage medium.
85. The collection embodied on a computer readable medium of claim 77 wherein the collection metadata includes system information.
86. A method of downloading streaming content comprising: downloading a first portion of the streaming content; downloading a second portion of the streaming content while the first portion of the streaming content is also downloading; outputting the first portion of the streaming content for display on a presentation device; and outputting the second portion of the streaming content for display on a presentation device after outputting the first portion of the streaming content; wherein a third portion of the streaming content originally positioned in between the first portion of the streaming content and the second portion of the streaming content is not output for display on a presentation device.
87. The method of claim 86 wherein the third portion of the streaming content does not meet a set of presentation rules.
88. The method of claim 86 wherein the streaming content is an audio file.
89. The method of claim 86 wherein the streaming content is a video file.
90. The method of claim 86 wherein the third portion of the streaming content does not meet a set of user criteria.
91. The method of claim 86 further comprising receiving a request from a user to skip the third portion of the streaming content.
92. A method of displaying a context sensitive menu comprising the steps of: outputting content to a display device; receiving a request to display a menu; deriving the context sensitive menu from the current content being output; and outputting the context sensitive menu to the display device.
93. The method of claim 92 wherein the context sensitive menu is derived from video content.
94. The method of claim 92 further comprising the step of deriving the context sensitive menu from a user profile.
95. The method of claim 92 further comprising the step of receiving the context sensitive menu from a server.
96. The method of claim 95 wherein the context sensitive menu is an update of video content on a DVD.
97. The method of claim 95 further comprising altering the context sensitive menu received from the server based upon a user profile.
98. The method of claim 92 wherein the context sensitive menu is overlaid on the content.
99. The method of claim 98 wherein the context sensitive menu is overlaid using alpha blending.
CA002550536A | 2003-12-19 | 2004-12-15 | Personalization services for entities from multiple sources | Abandoned | CA2550536A1 (en)