US9021537B2 - Pre-buffering audio streams - Google Patents

Pre-buffering audio streams

Info

Publication number
US9021537B2
Also published as US 9021537 B2; related application identifiers: US12/964,728, US96472810A
Authority
US
United States
Prior art keywords
audio
video stream
stream pair
rate
pair
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US12/964,728
Other versions
US20120151539A1 (en)
Inventor
John Funge
Greg Peters
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Netflix Inc
Original Assignee
Netflix Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US12/964,728 (US9021537B2)
Application filed by Netflix Inc
Assigned to NETFLIX, INC. Assignment of assignors interest (see document for details). Assignors: FUNGE, JOHN; PETERS, GREG
Priority to EP11847680.3A (EP2649792B1)
Priority to PCT/US2011/064112 (WO2012078963A1)
Priority to DK11847680.3T (DK2649792T3)
Publication of US20120151539A1
Priority to US14/697,527 (US9510043B2)
Publication of US9021537B2
Application granted
Priority to US15/293,738 (US10305947B2)
Legal status: Active (current)
Adjusted expiration

Abstract

One embodiment of the present invention sets forth a technique for identifying and pre-buffering audio/video stream pairs. The method includes the steps of predictively identifying for pre-buffering at least one audio/video stream pair that may be selected for playback by a user subsequent to a currently playing audio/video stream pair, computing a first rate for pre-buffering an audio portion of the at least one audio/video stream pair and a second rate for pre-buffering a video portion of the at least one audio/video stream pair, downloading the audio portion at the first rate and downloading the video portion at the second rate, and storing the downloaded audio portion and the downloaded video portion in a content buffer.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
Embodiments of the present invention relate generally to digital media and, more specifically, to pre-buffering audio streams.
2. Description of the Related Art
Digital content distribution systems conventionally include a content server, a content player, and a communications network connecting the content server to the content player. The content server is configured to store digital content files, which can be downloaded from the content server to the content player. Each digital content file corresponds to a specific identifying title, such as “Gone with the Wind,” which is familiar to a user. The digital content file typically includes sequential content data, organized according to playback chronology, and may comprise audio data, video data, or a combination thereof.
The content player is configured to download and play a digital content file, in response to a user request selecting the title for playback. The process of playing the digital content file includes decoding and rendering audio and video data into an audio signal and a video signal, which may drive a display system having a speaker subsystem and a video subsystem. Playback typically involves a technique known in the art as “streaming,” whereby the content server sequentially transmits the digital content file to the content player, and the content player plays the digital content file while content data is received that comprises the digital content file.
In a typical streaming system, a certain amount of the audio and video data associated with the currently selected digital content file needs to be buffered before the digital content file can be played with acceptable quality. In a scenario where a user rapidly switches between digital content files, this buffering requirement results in interrupted playback because the newly selected digital content file must first be buffered.
As the foregoing illustrates, what is needed in the art is an approach for buffering digital content files that may be selected by the user for viewing next.
SUMMARY OF THE INVENTION
One embodiment of the present invention sets forth a computer-implemented method for identifying and pre-buffering audio/video stream pairs. The method includes the steps of predictively identifying for pre-buffering at least one audio/video stream pair that may be selected for playback by a user subsequent to a currently playing audio/video stream pair, computing a first rate for pre-buffering an audio portion of the at least one audio/video stream pair and a second rate for pre-buffering a video portion of the at least one audio/video stream pair, downloading the audio portion at the first rate and downloading the video portion at the second rate, and storing the downloaded audio portion and the downloaded video portion in a content buffer.
Advantageously, pre-buffering audio/video stream pairs having a high probability of being selected for viewing next allows for a seamless transition when a user selects one of the pre-buffered audio/video stream pairs for viewing. In addition, pre-buffering the audio portion of an audio/video stream pair at a higher rate than the video portion of the audio/video stream pair allows for playback to be started faster without compromising audio quality.
BRIEF DESCRIPTION OF THE DRAWINGS
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
FIG. 1 illustrates a content distribution system configured to implement one or more aspects of the present invention;
FIG. 2 is a more detailed view of the content player of FIG. 1, according to one embodiment of the invention;
FIG. 3 is a more detailed view of the content server of FIG. 1, according to one embodiment of the invention;
FIG. 4A is a more detailed view of the sequence header index of FIG. 1, according to one embodiment of the invention;
FIG. 4B illustrates data flow for buffering and playback of digital content associated with a digital content file, according to one embodiment of the invention; and
FIG. 5 is a flow diagram of method steps for identifying and pre-buffering audio/video stream pairs that may be selected for viewing next, according to one embodiment of the invention.
DETAILED DESCRIPTION
In the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one of skill in the art that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention.
FIG. 1 illustrates a content distribution system 100 configured to implement one or more aspects of the present invention. As shown, the content distribution system 100 includes, without limitation, a content player 110, one or more content servers 130, and a communications network 150. The content distribution system 100 may also include a content directory server 120. In one embodiment, the one or more content servers 130 comprise a content distribution network (CDN) 140.
The communications network 150 includes a plurality of network communications systems, such as routers and switches, configured to facilitate data communication between the content player 110 and the one or more content servers 130. Persons skilled in the art will recognize that many technically feasible techniques exist for building the communications network 150, including technologies practiced in deploying the well-known internet communications network.
The content directory server 120 comprises a computer system configured to receive a title lookup request 152 and generate file location data 154. The title lookup request 152 includes, without limitation, a name of a movie or song requested by a user. The content directory server 120 queries a database (not shown) that maps a video stream of a given title encoded at a particular playback bit rate to a digital content file 132, residing within an associated content server 130. The file location data 154 includes, without limitation, a reference to a content server 130 that is configured to provide the digital content file 132 to the content player 110.
The content server 130 is a computer system configured to serve download requests for digital content files 132 from the content player 110. The digital content files may reside on a mass storage system accessible to the computer system. The mass storage system may include, without limitation, direct attached storage, network attached file storage, or network attached block-level storage. The digital content files 132 may be formatted and stored on the mass storage system using any technically feasible technique. A data transfer protocol, such as the well-known hyper-text transfer protocol (HTTP), may be used to download digital content files 132 from the content server 130 to the content player 110.
Each title (a movie, song, or other form of digital media) is associated with one or more digital content files 132. Each digital content file 132 comprises, without limitation, a sequence header index 114, audio data, and an encoded sequence. An encoded sequence comprises a complete version of the video data of the corresponding title encoded to a particular playback bit rate. For example, a given title may be associated with digital content file 132-1 and digital content file 132-2. Digital content file 132-1 may comprise sequence header index 114-1 and an encoded sequence encoded to an average playback bit rate of approximately 250 kilobits per second (Kbps). Digital content file 132-2 may comprise sequence header index 114-2 and an encoded sequence encoded to an average playback bit rate of approximately 1000 Kbps. The 1000 Kbps encoded sequence enables higher quality playback and is therefore more desirable for playback than the 250 Kbps encoded sequence.
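Purely as an illustration, the following sketch shows one way the arrangement described above (a title associated with several digital content files 132, each holding an encoded sequence at a different average playback bit rate) might be modeled. The class names, fields, and the bandwidth-based selection helper are assumptions, not structures defined in the patent.

```python
# Illustrative sketch only: class names, fields, and the selection helper are
# assumptions and are not defined in the patent.
from dataclasses import dataclass
from typing import List

@dataclass
class DigitalContentFile:
    file_id: str                  # e.g. "132-1"
    playback_bit_rate_kbps: int   # average playback bit rate of the encoded sequence
    sequence_header_index: str    # reference to the associated sequence header index, e.g. "114-1"

@dataclass
class Title:
    name: str
    files: List[DigitalContentFile]  # one entry per encoding of the same title

    def best_file_under(self, available_kbps: int) -> DigitalContentFile:
        """Pick the highest-bit-rate encoding that fits the available bandwidth,
        falling back to the lowest-bit-rate encoding if none fits."""
        candidates = [f for f in self.files if f.playback_bit_rate_kbps <= available_kbps]
        if candidates:
            return max(candidates, key=lambda f: f.playback_bit_rate_kbps)
        return min(self.files, key=lambda f: f.playback_bit_rate_kbps)

title = Title("Example Title", [
    DigitalContentFile("132-1", 250, "114-1"),
    DigitalContentFile("132-2", 1000, "114-2"),
])
print(title.best_file_under(800).file_id)   # -> "132-1"
print(title.best_file_under(1500).file_id)  # -> "132-2"
```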
An encoded sequence within a digital content file 132 is organized as units of video data representing a fixed span of playback time. Overall playback time is organized into sequential time slots, each corresponding to one fixed span of playback time. For a given time slot, one unit of video data is represented within the digital content file 132 for the playback bit rate associated with the digital content file 132.
Persons skilled in the art will readily recognize that each encoded sequence, as defined above, comprises a digital content “stream.” Furthermore, the process of downloading a particular encoded sequence from the content server 130 to the content player 110 comprises “streaming” the digital content to the content player 110 for playback at a particular playback bit rate.
The content player 110 may comprise a computer system, a set top box, a mobile device such as a mobile phone, or any other technically feasible computing platform that has network connectivity and is coupled to or includes a display device and speaker device for presenting video frames and generating acoustic output, respectively.
Although, in the above description, the content distribution system 100 is shown with one content player 110 and one CDN 140, persons skilled in the art will recognize that the architecture of FIG. 1 contemplates only an exemplary embodiment of the invention. Other embodiments may include any number of content players 110 and/or CDNs 140. Thus, FIG. 1 is in no way intended to limit the scope of the present invention.
FIG. 2 is a more detailed view of the content player 110 of FIG. 1, according to one embodiment of the invention. As shown, the content player 110 includes, without limitation, a central processing unit (CPU) 210, a graphics subsystem 212, an input/output (I/O) device interface 214, a network interface 218, an interconnect 220, and a memory subsystem 230. The content player 110 may also include a mass storage unit 216.
The CPU 210 is configured to retrieve and execute programming instructions stored in the memory subsystem 230. Similarly, the CPU 210 is configured to store and retrieve application data residing in the memory subsystem 230. The interconnect 220 is configured to facilitate transmission of data, such as programming instructions and application data, between the CPU 210, graphics subsystem 212, I/O devices interface 214, mass storage 216, network interface 218, and memory subsystem 230.
The graphics subsystem 212 is configured to generate frames of video data and transmit the frames of video data to display device 250. In one embodiment, the graphics subsystem 212 may be integrated into an integrated circuit, along with the CPU 210. The display device 250 may comprise any technically feasible means for generating an image for display. For example, the display device 250 may be fabricated using liquid crystal display (LCD) technology, cathode-ray technology, or light-emitting diode (LED) display technology (either organic or inorganic). An input/output (I/O) device interface 214 is configured to receive input data from user I/O devices 252 and transmit the input data to the CPU 210 via the interconnect 220. For example, user I/O devices 252 may comprise one or more buttons, a keyboard, and a mouse or other pointing device. The I/O device interface 214 also includes an audio output unit configured to generate an electrical audio output signal. User I/O devices 252 include a speaker configured to generate an acoustic output in response to the electrical audio output signal. In alternative embodiments, the display device 250 may include the speaker. A television is an example of a device known in the art that can display video frames and generate an acoustic output. A mass storage unit 216, such as a hard disk drive or flash memory storage drive, is configured to store non-volatile data. A network interface 218 is configured to transmit and receive packets of data via the communications network 150. In one embodiment, the network interface 218 is configured to communicate using the well-known Ethernet standard. The network interface 218 is coupled to the CPU 210 via the interconnect 220.
The memory subsystem 230 includes programming instructions and data that comprise an operating system 232, user interface 234, and playback application 236. The operating system 232 performs system management functions such as managing hardware devices including the network interface 218, mass storage unit 216, I/O device interface 214, and graphics subsystem 212. The operating system 232 also provides process and memory management models for the user interface 234 and the playback application 236. The user interface 234 provides a specific structure, such as a window and object metaphor, for user interaction with content player 110. Persons skilled in the art will recognize the various operating systems and user interfaces that are well-known in the art and suitable for incorporation into the content player 110.
The playback application 236 is configured to retrieve a digital content file 132 from a content server 130 via the network interface 218 and play the digital content file 132 through the graphics subsystem 212. The graphics subsystem 212 is configured to transmit a rendered video signal to the display device 250. In normal operation, the playback application 236 receives a request from a user to play a specific title. The playback application 236 then locates the digital content files 132 associated with the requested title, where each digital content file 132 associated with the requested title includes an encoded sequence encoded to a different playback bit rate. In one embodiment, the playback application 236 locates the digital content files 132 by posting title lookup request 152 to the content directory server 120. The content directory server 120 replies to the title lookup request 152 with file location data 154 for each digital content file 132 associated with the requested title. Each file location data 154 includes a reference to the associated content server 130, in which the requested digital content file 132 resides. The title lookup request 152 may include the name of the requested title, or other identifying information with respect to the title. After the playback application 236 has located the digital content file 132 associated with the requested title, the playback application 236 downloads the sequence header index 114 associated with that digital content file 132 from the content server 130. A sequence header index 114 associated with digital content file 132, described in greater detail in FIG. 4A, includes information related to the encoded sequence included in the digital content file 132.
In one embodiment, the playback application 236 begins downloading the digital content file 132 associated with the requested title. The requested digital content file 132 is downloaded into the content buffer 112, configured to serve as a first-in, first-out queue. In one embodiment, each unit of downloaded data comprises a unit of video data or a unit of audio data. As units of video data associated with the requested digital content file 132 are downloaded to the content player 110, the units of video data are pushed into the content buffer 112. Similarly, as units of audio data associated with the requested digital content file 132 are downloaded to the content player 110, the units of audio data are pushed into the content buffer 112. In one embodiment, the units of video data are stored in video buffer 246 within the content buffer 112, and units of audio data are stored in audio buffer 244, also within the content buffer 112.
A video decoder 248 reads units of video data from the video buffer 246, and renders the units of video data into a sequence of video frames corresponding in duration to the fixed span of playback time. Reading a unit of video data from the video buffer 246 effectively de-queues the unit of video data from the video buffer 246 (and from the content buffer 112). The sequence of video frames is processed by graphics subsystem 212 and transmitted to the display device 250.
An audio decoder 242 reads units of audio data from the audio buffer 244, and renders the units of audio data into a sequence of audio samples, generally synchronized in time with the sequence of video frames. In one embodiment, the sequence of audio samples is transmitted to the I/O device interface 214, which converts the sequence of audio samples into the electrical audio signal. The electrical audio signal is transmitted to the speaker within the user I/O devices 252, which, in response, generates an acoustic output.
FIG. 3 is a more detailed view of the content server 130 of FIG. 1, according to one embodiment of the invention. The content server 130 includes, without limitation, a central processing unit (CPU) 310, a network interface 318, an interconnect 320, a memory subsystem 330, and a mass storage unit 316. The content server 130 may also include an I/O devices interface 314.
The CPU 310 is configured to retrieve and execute programming instructions stored in the memory subsystem 330. Similarly, the CPU 310 is configured to store and retrieve application data residing in the memory subsystem 330. The interconnect 320 is configured to facilitate transmission of data, such as programming instructions and application data, between the CPU 310, I/O devices interface 314, mass storage unit 316, network interface 318, and memory subsystem 330.
The mass storage unit 316 stores digital content files 132-1 through 132-N. The digital content files 132 may be stored using any technically feasible file system on any technically feasible media. For example, the mass storage unit 316 may comprise a redundant array of independent disks (RAID) system incorporating a conventional file system.
The memory subsystem 330 includes programming instructions and data that comprise an operating system 332, a user interface 334, and a file download application 336. The operating system 332 performs system management functions such as managing hardware devices including the network interface 318, mass storage unit 316, and I/O devices interface 314. The operating system 332 also provides process and memory management models for the user interface 334 and the file download application 336. The user interface 334 provides a specific structure, such as a window and an object metaphor or a command line interface, for user interaction with content server 130. A user may employ the user interface 334 to manage functions of the content server. In one embodiment, the user interface 334 presents a management web page for managing operation of the content server 130. Persons skilled in the art will recognize the various operating systems and user interfaces that are well-known in the art and suitable for incorporation into the content server 130.
The file download application 336 is configured to facilitate transfer of digital content files 132-1 to 132-N, to the content player 110, via a file download operation or set of operations. The downloaded digital content file 132 is transmitted through network interface 318 to the content player 110 via the communications network 150. In one embodiment, file contents of a digital content file 132 may be accessed in an arbitrary sequence (known in the art as “random access”). As previously described herein, each digital content file 132 includes a sequence header index 114 and an encoded sequence. An encoded sequence comprises a full version of a given movie or song encoded to a particular bit rate, and video data associated with the encoded sequence is divided into units of video data. Each unit of video data corresponds to a specific span of playback time and begins with a frame including a sequence header specifying the size and the resolution of the video data stored in the unit of video data.
FIG. 4A is a more detailed view of the sequence header index 114 of FIG. 1, according to one embodiment of the invention. The sequence header index 114 is a data structure that includes a video bit rate profile 452 and can be populated in any technically feasible fashion.
The sequence header index 114 included in the digital content file 132 specifies information related to the encoded sequence also included in the digital content file 132. The video bit rate profile 452 includes a corresponding set of entries 464 that specifies the locations and the timestamp offsets of the different sequence headers associated with the units of video data of the encoded sequence. Typically, the sequence headers in the encoded sequence are located at predictable timestamp offsets within the encoded sequence (e.g., every 3 seconds). A given entry 464 indicates a timestamp offset and the location of a specific sequence header included in a unit of video data of the encoded sequence associated with video bit rate profile 452. For example, entry 464-1 indicates the timestamp offset and the location of the sequence header associated with a first unit of video data of the encoded sequence. Entry 464-2 indicates the timestamp offset and the location of the sequence header associated with a second unit of video data of the same encoded sequence. Importantly, a total byte count characterizing how many bytes comprise a given encoded sequence from a current playback position, associated with entry 464-K, through completion of playback may be computed based on the timestamp offsets included in the set of entries 464.
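To make the entry structure concrete, here is a minimal sketch of a video bit rate profile 452 as a list of entries 464, each pairing a timestamp offset with a sequence header location, together with the two lookups the paragraph implies. The exact byte-count formula, the total-size value, and all names are interpretations for illustration, not the patent's definitions.

```python
# Illustrative sketch only: entry values, the total size, and the byte-count
# formula are assumptions, not taken from the patent.
from bisect import bisect_right

entries_464 = [(0.0, 0), (3.0, 95_000), (6.0, 190_000), (9.0, 288_000)]  # (timestamp offset s, byte location)
total_encoded_size = 380_000  # bytes in the whole encoded sequence (assumed known)

def entry_for(playback_position_s: float):
    """Return the entry whose sequence header covers the given playback position."""
    offsets = [t for t, _ in entries_464]
    i = bisect_right(offsets, playback_position_s) - 1
    return entries_464[max(i, 0)]

def bytes_remaining(playback_position_s: float) -> int:
    """Rough byte count from the current playback position through completion of playback."""
    _, location = entry_for(playback_position_s)
    return total_encoded_size - location

print(entry_for(7.2))        # -> (6.0, 190000)
print(bytes_remaining(7.2))  # -> 190000
```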
The audio data associated with the encoded sequence is also stored in the digital content file 132. In one embodiment, the audio data has a fixed bit rate encoding. In alternative embodiments, a variable bit rate encoding technique is applied to the audio data, and an audio bit rate profile 472 is included in the sequence header index 114. The audio bit rate profile 472 includes entries 484 configured to store a timestamp offset and a sequence header location for each respective unit of audio data at a respective time of playback.
FIG. 4B illustrates a data flow for buffering and playback of digital content 494 associated with a digital content file 132, according to one embodiment of the invention. The content server 130 of FIG. 1 provides content data 494, comprising units of audio data and units of video data, of the digital content file 132 to a buffering process 490. The buffering process 490 may be implemented as a thread executing within the content player 110. The buffering process 490 is configured to download the content data 494 and write the content data 494 to the content buffer 112. The buffering process 490 writes units of audio data to the audio buffer 244 within the content buffer 112, and units of video data to the video buffer 246, also within the content buffer 112. In one embodiment, the content buffer 112 is structured as a first-in first-out (FIFO) queue. A playback process 492, also executing within the content player 110, de-queues units of audio data and units of video data from the content buffer 112 for playback. In order to maintain uninterrupted playback of content data 494, the content buffer 112 should always have at least one unit of audio data and one unit of video data available when the playback process 492 needs to perform a read on the content buffer 112.
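A minimal sketch of this data flow, assuming Python's standard queue module, is shown below: the content buffer 112 holds a FIFO audio buffer 244 and a FIFO video buffer 246, the buffering side pushes downloaded units, and the playback side de-queues one unit of each type. The class layout and unit contents are placeholders of my own, not structures specified by the patent.

```python
# Illustrative sketch only: the class layout and unit contents are placeholders.
from queue import Queue

class ContentBuffer:
    """Content buffer 112, split into an audio buffer 244 and a video buffer 246."""
    def __init__(self):
        self.audio_buffer = Queue()  # FIFO of units of audio data (buffer 244)
        self.video_buffer = Queue()  # FIFO of units of video data (buffer 246)

    # Buffering process 490: write downloaded units into the content buffer.
    def push_audio(self, unit):
        self.audio_buffer.put(unit)

    def push_video(self, unit):
        self.video_buffer.put(unit)

    # Playback process 492: de-queue one unit of each type for decoding.
    def pop_for_playback(self):
        return self.audio_buffer.get(), self.video_buffer.get()

    def can_play(self) -> bool:
        """Playback stays uninterrupted only while both buffers are non-empty."""
        return not self.audio_buffer.empty() and not self.video_buffer.empty()

buf = ContentBuffer()
buf.push_audio(b"audio-unit-0")
buf.push_video(b"video-unit-0")
if buf.can_play():
    audio_unit, video_unit = buf.pop_for_playback()
```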
During the playback of a digital content file 132 (referred to herein as “the currently playing digital content file 132”) associated with a particular title, the predictive pre-buffering engine 254 identifies one or more other digital content files 132 associated with different titles that may be selected for viewing next and, thus, should be pre-buffered. In operation, the predictive pre-buffering engine 254 first determines a subset of digital content files 132 that may be selected for viewing next. In one embodiment, the subset of digital content files 132 may be determined based on the close proximity, in a user interface, of different identifiers associated with the digital content files 132 included in the subset of digital content files 132 and the currently playing digital content file 132.
Once the subset of digital content files 132 that may be selected for viewing next is determined, the predictive pre-buffering engine 254 computes an ordering of the subset of digital content files 132 to indicate which digital content file 132 is most likely to be played next. In one embodiment, the ordering can be used to determine the amount to pre-buffer of each digital content file 132. The allocation could be arbitrary, such as allocating 50% to the most likely digital content file 132 and 25% each to the second and third most likely. In an alternative embodiment, a numerical measure that induces an ordering is computed for the subset of digital content files. The numerical measure is then used to determine the amount to pre-buffer. For example, digital content file A is given a numerical measure of x and digital content file B is given a measure of y. The amount to pre-buffer can then be allocated proportionally, such that 2x can be allocated to digital content file A and the remainder to digital content file B.
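As a rough illustration of the two allocation schemes just described, the sketch below splits a hypothetical pre-buffer budget first by rank (50%/25%/25%) and then in proportion to numerical measures. The budget size, file names, and share values are invented for illustration and are not taken from the patent.

```python
# Illustrative sketch only: budget, file names, and shares are invented values.
def allocate_by_rank(files_most_likely_first, budget_bytes, shares=(0.50, 0.25, 0.25)):
    """Arbitrary rank-based split, e.g. 50% to the most likely file, 25% to each of the next two."""
    return {f: int(budget_bytes * s) for f, s in zip(files_most_likely_first, shares)}

def allocate_by_measure(measures, budget_bytes):
    """Proportional split: a file with measure x receives x / sum(measures) of the budget."""
    total = sum(measures.values())
    return {f: int(budget_bytes * m / total) for f, m in measures.items()}

print(allocate_by_rank(["file_A", "file_B", "file_C"], 12_000_000))
print(allocate_by_measure({"file_A": 2.0, "file_B": 1.0}, 12_000_000))
# file_A receives twice the pre-buffer allocation of file_B, mirroring the 2x example above.
```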
In one embodiment, for each digital content file 132 in the subset of digital content files 132, a probability indicating the likelihood of the digital content file 132 being selected for viewing next is computed. In one embodiment, the probability (P) of a digital content file 132 (file i) being selected for viewing next can be computed as follows:
calculate P(“file i”|“various information”).
The “various information” may include information such as the “currently playing digital content file 132,” “the digital content files 132 that would be selected as the result of various UI actions” (for example, which digital content file 132 would start playing if a user pressed up/down/left/right on their controller), “titles that the user previously watched,” “ratings from titles the user previously watched,” “what UI actions other users performed in similar situations,” and “history of UI actions from the current and previous sessions.” These examples of “various information” are not meant to be limiting in any way, and those skilled in the art will recognize that any other relevant information can be used when computing the probability of a digital content file 132 being selected for viewing next.
There are many techniques known to those skilled in the art for computing the above probability based on the supplied information. In one embodiment, the predictive pre-buffering engine 254 includes one or more machine learning techniques, including, for example, decision trees, hidden Markov models, Bayesian learning techniques, and other alternatives. Several machine learning techniques are known in the arts of artificial intelligence and machine learning. Among the many alternatives are techniques related to evolution strategies, genetic algorithms, genetic programming, multidimensional clustering, neural networks, and weighted majority techniques. In addition, the predictive pre-buffering engine 254 may compute a weighted average of a set of relatively simpler elements, updated in real time during actual user interaction using an exponential gradient technique or some other machine learning technique.
An exemplary computation of a probability is illustrated below. The example is provided for pedagogical purposes only and is not intended to be limiting in any way. In particular, Naïve Bayes is the method described, but a more sophisticated technique would almost always be used in practice. To simplify even further, it is assumed that a digital content file is currently being played and that the user may employ a channel surfing metaphor to either move up to select a new digital content file or move down to select a new digital content file. Some indication of which digital content file the user would view next as they move up or down is also provided.
Based on these assumptions, the following probabilities are computed:
  • P(“file above selected” | “various information”)
  • P(“file below selected” | “various information”)
To simplify further, the information that the computation is conditioned on includes:
  • Is the title associated with the file above more popular, or is the title associated with the file below more popular?
  • For the current user, does the title associated with the file above have a higher predicted rating from a recommendation engine, or does the title associated with the file below have a higher predicted rating?
To compute the probabilities, the following is determined:
  • P(above | “most popular”, “highest rated”), versus
  • P(below | “most popular”, “highest rated”), where the possible values for “most popular” and “highest rated” are “above” or “below”.
Suppose that from previous historical records for the information that the probabilities are conditioned on, the following table can be constructed.
TABLE 1

file selected?    most popular    highest rated
above             above           below
above             above           above
below             below           above
above             below           above
below             above           below
above             below           below
above             above           above
From the Bayes rule, it can then be determined that P(above | “most popular”, “highest rated”) = k P(“most popular”, “highest rated” | above) P(above), where k is a constant that is factored out, as shown below. Applying the assumption of conditional independence gives P(above | “most popular”, “highest rated”) = k P(“most popular” | above) P(“highest rated” | above) P(above). Based on Table 1, P(above | “most popular”, “highest rated”) = k * 2/5 * 2/5 * 5/7 = k * 4/35, and P(below | “most popular”, “highest rated”) = k * 1/14.
Since k * 4/35 > k * 1/14, it can be concluded, given this data, that the user is more likely to select the digital content file from above. Furthermore, the probabilities can be calculated to be P(above | “most popular”, “highest rated”) = 56/91 and P(below | “most popular”, “highest rated”) = 35/91. These probabilities can potentially be used to allocate memory for pre-buffering the digital content files 132 in proportion to the estimated probability with which those files will be selected.
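For concreteness, the short sketch below reproduces this Naïve Bayes arithmetic directly from the counts in Table 1. Matching the values k * 4/35 and k * 1/14 appears to require treating the observed evidence as “the file below is both the most popular and the highest rated”; that reading, along with all identifier names, is an assumption of the sketch rather than something stated in the patent.

```python
# Illustrative sketch only: the evidence values ("below", "below") and the
# naming are assumptions inferred from the arithmetic in the text.
from fractions import Fraction

table_1 = [  # (file selected, most popular, highest rated)
    ("above", "above", "below"),
    ("above", "above", "above"),
    ("below", "below", "above"),
    ("above", "below", "above"),
    ("below", "above", "below"),
    ("above", "below", "below"),
    ("above", "above", "above"),
]

def naive_bayes_score(selected, most_popular, highest_rated):
    rows = [r for r in table_1 if r[0] == selected]
    prior = Fraction(len(rows), len(table_1))                              # P(selected)
    p_pop = Fraction(sum(r[1] == most_popular for r in rows), len(rows))   # P(most popular | selected)
    p_rat = Fraction(sum(r[2] == highest_rated for r in rows), len(rows))  # P(highest rated | selected)
    return p_pop * p_rat * prior  # the constant k is dropped; it cancels when normalizing

score_above = naive_bayes_score("above", "below", "below")  # 4/35
score_below = naive_bayes_score("below", "below", "below")  # 1/14
total = score_above + score_below
print(score_above / total, score_below / total)  # 8/13 and 5/13
```

Normalizing the two scores yields 8/13 and 5/13, which equal the 56/91 and 35/91 computed above.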
Again, the example above is illustrated purely to show how a probability for a particular digital content file 132 can be computed. Persons skilled in the art will recognize that any other mathematical approach, as well as other types of data, can be used to compute the probability.
Based on the probabilities computed for the digital content files 132 in the subset of digital content files 132, the predictive pre-buffering engine 254 selects one or more digital content files 132 from the subset of digital content files 132 that should be pre-buffered. For each of the one or more digital content files 132 that should be pre-buffered, the predictive pre-buffering engine 254 determines a rate for pre-buffering the units of video data associated with the digital content file 132 and a rate for pre-buffering the units of audio data associated with the digital content file 132.
For a particular digital content file 132 selected for pre-buffering, the rates for pre-buffering the units of audio and video data are determined based on two factors. First, because audio data is typically much smaller than video data, the audio data is pre-buffered at a higher rate than the video data. Pre-buffering audio data at a higher rate than video data allows for a quick start to playing the digital content file 132, if the digital content file 132 is selected for playback, without compromising audio delivery quality. Second, the rates for pre-buffering the units of audio and video data are proportional to the probability that the digital content file 132 will be selected for viewing next. If the probability is high, then the rates for pre-buffering the units of audio and video data are higher than the rates for pre-buffering units of audio and video data associated with a different digital content file 132 with a lower probability.
In one embodiment, if there are five digital content files 132 that are to be pre-buffered, the playback application 236 may download five seconds of audio data from the beginning of each of the five digital content files 132. In contrast, the playback application 236 may download only one second of video data from the beginning of each of the five digital content files 132, only two seconds of video data from the beginnings of two of the five digital content files 132, or no video data at all.
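Purely as an illustration of how such amounts might be derived, the sketch below scales per-file audio and video pre-buffer durations by each file's selection probability, with audio favored over video. The base durations echo the five-second/one-second example above, while the scaling rule, probabilities, and identifiers are assumptions of the sketch.

```python
# Illustrative sketch only: the scaling rule, probabilities, and identifiers
# are assumptions; the base durations follow the example in the text.
def pre_buffer_plan(probabilities, audio_base_s=5.0, video_base_s=1.0):
    """Return seconds of audio and video to pre-buffer for each candidate file."""
    top = max(probabilities.values())
    plan = {}
    for file_id, p in probabilities.items():
        scale = p / top  # the most likely file receives the full base amounts
        plan[file_id] = {
            "audio_seconds": audio_base_s * scale,  # audio pre-buffered at the higher rate
            "video_seconds": video_base_s * scale,  # video pre-buffered at the lower rate
        }
    return plan

probs = {"132-a": 0.40, "132-b": 0.25, "132-c": 0.20, "132-d": 0.10, "132-e": 0.05}
for file_id, amounts in pre_buffer_plan(probs).items():
    print(file_id, amounts)
```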
FIG. 5 is a flow diagram of method steps for identifying and pre-buffering audio/video stream pairs that may be selected for viewing next, according to one embodiment of the invention. Although the method steps are described in conjunction with the systems of FIGS. 1, 2, and 3, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention.
At step 502, the predictive pre-buffering engine 254 computes the probability of each of a set of audio/video stream pairs being selected for viewing next. The set of audio/video stream pairs is determined based on the currently playing audio/video stream pair. Each probability is computed based on various information, as described above.
At step 504, the predictive pre-buffering engine 254 selects, based on the respective probabilities, a subset of the audio/video stream pairs that should be pre-buffered. At step 506, the predictive pre-buffering engine 254 computes, for each selected audio/video stream pair, a rate for pre-buffering the audio stream and a rate for pre-buffering the video stream. For a particular audio/video stream pair, the rates for pre-buffering the audio stream and the video stream are determined based on two factors. First, because audio data is typically much smaller than video data, the audio stream is pre-buffered at a higher rate than the video stream. Pre-buffering audio data at a higher rate than video data allows for a quick start to playing the audio/video stream pair, if the audio/video stream pair is selected for playback, without compromising audio delivery quality. Second, the rates for pre-buffering the audio stream and the video stream are proportional to the probability that the audio/video stream pair will be selected for viewing next. If the probability is high, then the rates for pre-buffering the audio stream and the video stream are higher than the rates for pre-buffering a different audio/video stream pair with a lower probability.
At step 508, the predictive pre-buffering engine 254 causes each of the selected audio/video stream pairs to be downloaded for pre-buffering at the rates computed in step 506.
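The following condensed sketch strings steps 502 through 508 together in the order just described. Every helper, threshold, and rate constant is a hypothetical stand-in (the patent defines no such API), and the download step is simulated rather than performed.

```python
# Illustrative, self-contained sketch of steps 502-508: all names, thresholds,
# and rate constants are hypothetical stand-ins, and no real download occurs.
def compute_probability(pair, info):
    # Step 502 (stub): in practice this would use the "various information" described above.
    return info.get(pair, 0.0)

def select_candidates(probabilities, threshold=0.15):
    # Step 504: pre-buffer only pairs whose probability clears an assumed threshold.
    return [p for p, prob in probabilities.items() if prob >= threshold]

def compute_rates(probability, base_kbps=400):
    # Step 506: audio is pre-buffered at a higher rate than video, and both
    # rates are proportional to the selection probability.
    return 1.5 * base_kbps * probability, 0.5 * base_kbps * probability

def pre_buffer_next_titles(candidate_pairs, various_information, content_buffer):
    probabilities = {p: compute_probability(p, various_information) for p in candidate_pairs}
    for pair in select_candidates(probabilities):
        audio_rate, video_rate = compute_rates(probabilities[pair])
        # Step 508: "download" at the computed rates and store in the content buffer.
        content_buffer[pair] = {"audio_rate_kbps": audio_rate, "video_rate_kbps": video_rate}

buffer = {}
pre_buffer_next_titles(["pair_A", "pair_B", "pair_C"],
                       {"pair_A": 0.5, "pair_B": 0.3, "pair_C": 0.1}, buffer)
print(buffer)  # pair_C falls below the threshold and is not pre-buffered
```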
Advantageously, pre-buffering audio/video stream pairs having a high probability of being selected for viewing next allows for a seamless transition when a user selects one of the pre-buffered audio/video stream pairs for viewing. In addition, pre-buffering the audio portion of an audio/video stream pair at a higher rate than the video portion of the audio/video stream pair allows for playback to be started faster without compromising audio quality.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. For example, aspects of the present invention may be implemented in hardware or software or in a combination of hardware and software. One embodiment of the invention may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, flash memory, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. Such computer-readable storage media, when carrying computer-readable instructions that direct the functions of the present invention, are embodiments of the present invention.
In view of the foregoing, the scope of the present invention is determined by the claims that follow.

Claims (21)

We claim:
1. A computer-implemented method for identifying and pre-buffering audio/video stream pairs, the method comprising:
outputting, for display, identification information for each of a plurality of audio/video stream pairs, each instance of identification information being directly selectable to play the corresponding audio/video stream;
predictively identifying for pre-buffering at least one audio/video stream pair, of the plurality of audio/video stream pairs, that may be selected for playback by a user subsequent to a currently playing audio/video stream pair;
computing a first rate for pre-buffering an audio portion of the predictively identified at least one audio/video stream pair and a second rate for pre-buffering a video portion of the predictively identified at least one audio/video stream pair, wherein the first rate and the second rate are proportional to a probability that the at least one audio/video stream pair may be selected for playback by the user subsequent to the currently playing audio/video stream pair;
initiating a download of the audio portion of the predictively identified at least one audio/video stream pair at the first rate and a download of the video portion of the predictively identified at least one audio/video stream pair at the second rate; and
storing the downloaded audio portion of the predictively identified at least one audio/video stream pair and the downloaded video portion of the predictively identified at least one audio/video stream pair in a content buffer, thereby pre-buffering the predictively identified at least one audio/video stream pair while playing the currently playing audio/video stream pair.
2. The method of claim 1, wherein the second rate is less than the first rate, wherein the download of the predictively identified at least one audio/video stream pair has not started prior to initiation of the download, wherein the identification information comprises: (i) a title, and (ii) an image representing each respective at least one audio/video stream pair.
3. The method of claim 2, wherein predictively identifying the at least one audio/video stream pair comprises computing an ordering of the audio/video stream pairs to indicate which audio/video stream pair is most likely to be played next.
4. The method of claim 3, wherein computing an ordering comprises computing a probability indicating the likelihood of the at least one audio/video stream pair being selected for playback by the user subsequent to the currently playing audio/video stream pair.
5. The method of claim 4, wherein the probability computed for the at least one audio/video stream pair is greater than a pre-determined threshold.
6. The method of claim 5, wherein computing the first rate and the second rate is based on the probability computed for the predictively identified at least one audio/video stream pair.
7. The method of claim 6, wherein the probability computed for the predictively identified at least one audio/video stream pair is greater than a probability computed for a second audio/video stream pair, wherein the first rate is greater than a third rate computed for pre-buffering an audio portion of the second audio/video stream pair, and wherein the second rate is greater than a fourth rate computed for pre-buffering a video portion of the second audio/video stream pair, wherein the probability computed for the second audio/video stream pair is greater than a probability computed for a third audio/video stream pair, wherein the third rate is greater than a fifth rate computed for pre-buffering an audio portion of the third audio/video stream pair, wherein the fourth rate is greater than a sixth rate computed for pre-buffering a video portion of the third audio/video stream pair, wherein the third, fourth, fifth, and sixth rates are based on the probabilities for each respective audio/video stream pair.
8. The method of claim 7, wherein the probability computed for the predictively identified at least one audio/video stream pair is further based on a rating associated with the currently playing audio/video stream pair.
9. The method of claim 8, wherein the probability computed for the at least one audio/video stream pair is further based on a rating associated with the predictively identified at least one audio/video stream pair.
10. The method of claim 9, wherein the probability computed for the predictively identified at least one audio/video stream pair is further based on: (i) an interaction with a user-interface performed by the user to select the currently playing audio/video stream pair for playback, and (ii) a number of other users having viewed the currently playing audio/video stream pair and subsequently selecting the predictively identified at least one audio/video stream pair, wherein the method is configured to allow playback of the predictively identified at least one audio/video stream pair to start faster without compromising audio quality.
11. A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to identify and pre-buffer audio/video stream pairs, by performing the steps of:
outputting, for display, identification information for each of a plurality of audio/video stream pairs, each instance of identification information being directly selectable to play the corresponding audio/video stream;
predictively identifying for pre-buffering at least one audio/video stream pair, of the plurality of audio/video stream pairs, that may be selected for playback by a user subsequent to a currently playing audio/video stream pair;
computing a first rate for pre-buffering an audio portion of the predictively identified at least one audio/video stream pair and a second rate for pre-buffering a video portion of the predictively identified at least one audio/video stream pair, wherein the first rate and the second rate are proportional to a probability that the at least one audio/video stream pair may be selected for playback by the user subsequent to the currently playing audio/video stream pair;
initiating a download of the audio portion of the predictively identified at least one audio/video stream pair at the first rate and a download of the video portion of the predictively identified at least one audio/video stream pair at the second rate; and
storing the downloaded audio portion of the predictively identified at least one audio/video stream pair and the downloaded video portion of the predictively identified at least one audio/video stream pair in a content buffer, thereby pre-buffering the predictively identified at least one audio/video stream pair while playing the currently playing audio/video stream pair.
12. The non-transitory computer-readable medium of claim 11, wherein the second rate is less than the first rate, wherein the download of the at least one audio/video stream pair has not started prior to initiation of the download, wherein the identification information comprises: (i) a title, and (ii) an image representing each respective at least one audio/video stream pair.
13. The non-transitory computer-readable medium of claim 12, wherein predictively identifying the at least one audio/video stream pair comprises computing an ordering of the audio/video stream pairs to indicate which audio/video stream pair is most likely to be played next.
14. The non-transitory computer-readable medium of claim 13, wherein computing an ordering comprises computing a probability indicating the likelihood of the at least one audio/video stream pair being selected for playback by the user subsequent to the currently playing audio/video stream pair.
15. The non-transitory computer-readable medium of claim 14, wherein the probability computed for the at least one audio/video stream pair is greater than a pre-determined threshold.
16. The non-transitory computer-readable medium of claim 15, wherein computing the first rate and the second rate is based on the probability computed for the predictively identified at least one audio/video stream pair.
17. The non-transitory computer-readable medium of claim 16, wherein the probability computed for the predictively identified at least one audio/video stream pair is greater than a probability computed for a second audio/video stream pair, wherein the first rate is greater than a third rate computed for pre-buffering an audio portion of the second audio/video stream pair, and wherein the second rate is greater than a fourth rate computed for pre-buffering a video portion of the second audio/video stream pair, wherein the probability computed for the second audio/video stream pair is greater than a probability computed for a third audio/video stream pair, wherein the third rate is greater than a fifth rate computed for pre-buffering an audio portion of the third audio/video stream pair, wherein the fourth rate is greater than a sixth rate computed for pre-buffering a video portion of the third audio/video stream pair, wherein the first, second, third, fourth, fifth, and sixth rates are based on the probabilities for each respective audio/video stream pair.
18. The non-transitory computer-readable medium of claim 17, wherein the probability computed for the predictively identified at least one audio/video stream pair is further based on a rating associated with the currently playing audio/video stream pair.
19. The non-transitory computer-readable medium of claim 18, wherein the probability computed for the predictively identified at least one audio/video stream pair is further based on a rating associated with the predictively identified at least one audio/video stream pair.
20. The non-transitory computer-readable medium of claim 19, wherein the probability computed for the predictively identified at least one audio/video stream pair is further based on: (i) an interaction with a user-interface performed by the user to select the currently playing audio/video stream pair for playback, and (ii) a number of other users having viewed the currently playing audio/video stream pair and subsequently selecting the predictively identified at least one audio/video stream pair, wherein the instructions are configured to allow playback of the predictively identified at least one audio/video stream pair to start faster without compromising audio quality.
21. A system, comprising:
a processor; and
a memory configured to store instructions that, when executed by the processor, cause the processor to:
output, for display, identification information for each of a plurality of audio/video stream pairs, each instance of identification information being directly selectable to play the corresponding audio/video stream;
predictively identify for pre-buffering at least one audio/video stream pair, of the plurality of audio/video stream pairs, that may be selected for playback by a user subsequent to a currently playing audio/video stream pair;
compute a first rate for pre-buffering an audio portion of the predictively identified at least one audio/video stream pair and a second rate for pre-buffering a video portion of the predictively identified at least one audio/video stream pair, wherein the first rate and the second rate are proportional to a probability that the at least one audio/video stream pair may be selected for playback by the user subsequent to the currently playing audio/video stream pair;
initiate a download of the audio portion of the predictively identified at least one audio/video stream pair at the first rate and a download of the video portion of the predictively identified at least one audio/video stream pair at the second rate; and
store the downloaded audio portion of the predictively identified at least one audio/video stream pair and the downloaded video portion of the predictively identified at least one audio/video stream pair in a content buffer, thereby pre-buffering the predictively identified at least one audio/video stream pair while playing the currently playing audio/video stream pair.
US12/964,728, filed 2010-12-09 (priority 2010-12-09): Pre-buffering audio streams. Status: Active, expires 2032-04-11. Published as US9021537B2 (en).

Priority Applications (6)

Application Number (Publication)      Priority Date   Filing Date   Title
US12/964,728 (US9021537B2, en)        2010-12-09      2010-12-09    Pre-buffering audio streams
EP11847680.3A (EP2649792B1, en)       2010-12-09      2011-12-09    Pre-buffering audio/video stream pairs
PCT/US2011/064112 (WO2012078963A1)    2010-12-09      2011-12-09    Pre-buffering audio streams
DK11847680.3T (DK2649792T3, en)       2010-12-09      2011-12-09    AUDIO STORAGE OF AUDIO / VIDEO STREAMING COUPLES
US14/697,527 (US9510043B2, en)        2010-12-09      2015-04-27    Pre-buffering audio streams
US15/293,738 (US10305947B2, en)       2010-12-09      2016-10-14    Pre-buffering audio streams

Applications Claiming Priority (1)

Application Number (Publication)      Priority Date   Filing Date   Title
US12/964,728 (US9021537B2, en)        2010-12-09      2010-12-09    Pre-buffering audio streams

Related Child Applications (1)

Application Number   Relation       Publication        Priority Date   Filing Date   Title
US14/697,527         Continuation   US9510043B2 (en)   2010-12-09      2015-04-27    Pre-buffering audio streams

Publications (2)

Publication Number      Publication Date
US20120151539A1 (en)    2012-06-14
US9021537B2 (en)        2015-04-28

Family

ID=46200847

Family Applications (3)

Application Number   Status                       Publication         Priority Date   Filing Date   Title
US12/964,728         Active, expires 2032-04-11   US9021537B2 (en)    2010-12-09      2010-12-09    Pre-buffering audio streams
US14/697,527         Active                       US9510043B2 (en)    2010-12-09      2015-04-27    Pre-buffering audio streams
US15/293,738         Active                       US10305947B2 (en)   2010-12-09      2016-10-14    Pre-buffering audio streams

Family Applications After (2)

Application Number   Status   Publication         Priority Date   Filing Date   Title
US14/697,527         Active   US9510043B2 (en)    2010-12-09      2015-04-27    Pre-buffering audio streams
US15/293,738         Active   US10305947B2 (en)   2010-12-09      2016-10-14    Pre-buffering audio streams

Country Status (4)

Country   Link
US (3)    US9021537B2 (en)
EP (1)    EP2649792B1 (en)
DK (1)    DK2649792T3 (en)
WO (1)    WO2012078963A1 (en)

Cited By (35)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20110202562A1 (en)*2010-02-172011-08-18JBF Interlude 2009 LTDSystem and method for data mining within interactive multimedia
US9190110B2 (en) | 2009-05-12 | 2015-11-17 | JBF Interlude 2009 LTD | System and method for assembling a recorded composition
US9257148B2 (en) | 2013-03-15 | 2016-02-09 | JBF Interlude 2009 LTD | System and method for synchronization of selectably presentable media streams
US9271015B2 (en) | 2012-04-02 | 2016-02-23 | JBF Interlude 2009 LTD | Systems and methods for loading more than one video content at a time
US9520155B2 (en) | 2013-12-24 | 2016-12-13 | JBF Interlude 2009 LTD | Methods and systems for seeking to non-key frames
US9530454B2 (en) | 2013-10-10 | 2016-12-27 | JBF Interlude 2009 LTD | Systems and methods for real-time pixel switching
US9607655B2 (en) | 2010-02-17 | 2017-03-28 | JBF Interlude 2009 LTD | System and method for seamless multimedia assembly
US9641898B2 (en) | 2013-12-24 | 2017-05-02 | JBF Interlude 2009 LTD | Methods and systems for in-video library
US9653115B2 (en) | 2014-04-10 | 2017-05-16 | JBF Interlude 2009 LTD | Systems and methods for creating linear video from branched video
US9672868B2 (en) | 2015-04-30 | 2017-06-06 | JBF Interlude 2009 LTD | Systems and methods for seamless media creation
US9792026B2 (en) | 2014-04-10 | 2017-10-17 | JBF Interlude 2009 LTD | Dynamic timeline for branched video
US9792957B2 (en) | 2014-10-08 | 2017-10-17 | JBF Interlude 2009 LTD | Systems and methods for dynamic video bookmarking
US9832516B2 (en) | 2013-06-19 | 2017-11-28 | JBF Interlude 2009 LTD | Systems and methods for multiple device interaction with selectably presentable media streams
US10218760B2 (en) | 2016-06-22 | 2019-02-26 | JBF Interlude 2009 LTD | Dynamic summary generation for real-time switchable videos
US10257578B1 (en) | 2018-01-05 | 2019-04-09 | JBF Interlude 2009 LTD | Dynamic library display for interactive videos
US10448119B2 (en) | 2013-08-30 | 2019-10-15 | JBF Interlude 2009 LTD | Methods and systems for unfolding video pre-roll
US10462202B2 (en) | 2016-03-30 | 2019-10-29 | JBF Interlude 2009 LTD | Media stream rate synchronization
US10460765B2 (en) | 2015-08-26 | 2019-10-29 | JBF Interlude 2009 LTD | Systems and methods for adaptive and responsive video
US10474334B2 (en) | 2012-09-19 | 2019-11-12 | JBF Interlude 2009 LTD | Progress bar for branched videos
US10582265B2 (en) | 2015-04-30 | 2020-03-03 | JBF Interlude 2009 LTD | Systems and methods for nonlinear video playback using linear real-time video players
US10694221B2 (en) | 2018-03-06 | 2020-06-23 | At&T Intellectual Property I, L.P. | Method for intelligent buffering for over the top (OTT) video delivery
US11050809B2 (en) | 2016-12-30 | 2021-06-29 | JBF Interlude 2009 LTD | Systems and methods for dynamic weighting of branched video paths
US11128853B2 (en) | 2015-12-22 | 2021-09-21 | JBF Interlude 2009 LTD | Seamless transitions in large-scale video
US11164548B2 (en) | 2015-12-22 | 2021-11-02 | JBF Interlude 2009 LTD | Intelligent buffering of large-scale video
US11245961B2 (en) | 2020-02-18 | 2022-02-08 | JBF Interlude 2009 LTD | System and methods for detecting anomalous activities for interactive videos
US11412276B2 (en) | 2014-10-10 | 2022-08-09 | JBF Interlude 2009 LTD | Systems and methods for parallel track transitions
US11429891B2 (en) | 2018-03-07 | 2022-08-30 | At&T Intellectual Property I, L.P. | Method to identify video applications from encrypted over-the-top (OTT) data
US11490047B2 (en) | 2019-10-02 | 2022-11-01 | JBF Interlude 2009 LTD | Systems and methods for dynamically adjusting video aspect ratios
US11601721B2 (en) | 2018-06-04 | 2023-03-07 | JBF Interlude 2009 LTD | Interactive video dynamic adaptation and user profiling
US11856271B2 (en) | 2016-04-12 | 2023-12-26 | JBF Interlude 2009 LTD | Symbiotic interactive video
US11882337B2 (en) | 2021-05-28 | 2024-01-23 | JBF Interlude 2009 LTD | Automated platform for generating interactive videos
US11934477B2 (en) | 2021-09-24 | 2024-03-19 | JBF Interlude 2009 LTD | Video player integration within websites
US12047637B2 (en) | 2020-07-07 | 2024-07-23 | JBF Interlude 2009 LTD | Systems and methods for seamless audio and video endpoint transitions
US12096081B2 (en) | 2020-02-18 | 2024-09-17 | JBF Interlude 2009 LTD | Dynamic adaptation of interactive video players using behavioral analytics
US12155897B2 (en) | 2021-08-31 | 2024-11-26 | JBF Interlude 2009 LTD | Shader-based dynamic video manipulation

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP6142488B2 (en) * | 2012-09-13 | 2017-06-07 | 株式会社Jvcケンウッド | Content playback apparatus, content playback method, and content playback program
US9928209B2 (en) * | 2012-11-07 | 2018-03-27 | Telefonaktiebolaget Lm Ericsson (Publ) | Pre-buffering of content data items to be rendered at a mobile terminal
TWI507022B (en) * | 2012-12-05 | 2015-11-01 | Ind Tech Res Inst | Buffer output method for multimedia stream and multimedia stream buffer module
US9374438B2 (en) | 2013-07-29 | 2016-06-21 | Aol Advertising Inc. | Systems and methods for caching augmented reality target data at user devices
US20150039593A1 (en) * | 2013-07-31 | 2015-02-05 | Opanga Networks, Inc. | Pre-delivery of content to a user device
US10368110B1 (en) * | 2013-08-21 | 2019-07-30 | Visualon, Inc. | Smooth media data switching for media players
EP3072258B1 (en) * | 2013-11-20 | 2019-11-06 | Opanga Networks, Inc. | Fractional pre-delivery of content to user devices
US20150334204A1 (en) * | 2014-05-15 | 2015-11-19 | Google Inc. | Intelligent auto-caching of media
US10581943B2 (en) * | 2016-04-22 | 2020-03-03 | Home Box Office, Inc. | Streaming media state machine
CN106850629B (en) * | 2017-02-09 | 2020-05-12 | Oppo广东移动通信有限公司 | Method for processing streaming media data and mobile terminal
TWI859009B (en) | 2017-09-08 | 2024-10-11 | 美商開放電視股份有限公司 | Bitrate and pipeline preservation for content presentation
CN108337553A (en) * | 2018-02-08 | 2018-07-27 | 深圳市兆驰股份有限公司 | A kind of multi-medium data pre-download method
CN110324680B (en) * | 2018-03-30 | 2021-09-28 | 腾讯科技(深圳)有限公司 | Video pushing method and device, server, client and storage medium
CN109803179A (en) * | 2018-12-25 | 2019-05-24 | 北京凯视达科技有限公司 | Video automatic broadcasting method, device, storage medium and electronic equipment
US11388471B2 (en) * | 2019-09-27 | 2022-07-12 | At&T Intellectual Property I, L.P. | Pre-fetching of information to facilitate channel switching
US11115688B2 (en) * | 2019-12-16 | 2021-09-07 | Netflix, Inc. | Global approach to buffering media content
US11350150B2 (en) * | 2019-12-26 | 2022-05-31 | Hughes Network Systems, Llc | Method for estimation of quality of experience (QoE) metrics for video streaming using passive measurements
CN117082296A (en) * | 2022-05-09 | 2023-11-17 | 北京字节跳动网络技术有限公司 | Video playing method and device
WO2024035804A1 (en) * | 2022-08-11 | 2024-02-15 | Block, Inc. | Predictive media caching
US12307159B2 (en) | 2022-08-11 | 2025-05-20 | Block, Inc. | Predictive media caching

Citations (30)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5586264A (en) * | 1994-09-08 | 1996-12-17 | Ibm Corporation | Video optimized media streamer with cache management
US20020089587A1 (en) * | 2000-05-18 | 2002-07-11 | Imove Inc. | Intelligent buffering and reporting in a multiple camera data streaming video system
US6519011B1 (en) * | 2000-03-23 | 2003-02-11 | Intel Corporation | Digital television with more than one tuner
US20030172134A1 (en) * | 2000-09-11 | 2003-09-11 | Konstantin Zervas | Method for dynamic caching
US20040001500A1 (en) * | 2002-07-01 | 2004-01-01 | Castillo Michael J. | Predictive tuning to avoid tuning delay
US20040111741A1 (en) * | 2002-12-06 | 2004-06-10 | Depietro Mark | Method and apparatus for predictive tuning in digital content receivers
US20040128694A1 (en) * | 2002-12-30 | 2004-07-01 | International Business Machines Corporation | Fast selection of media streams
US6766376B2 (en) | 2000-09-12 | 2004-07-20 | Sn Acquisition, L.L.C | Streaming media buffering system
US20040181813A1 (en) * | 2003-02-13 | 2004-09-16 | Takaaki Ota | Methods and systems for rapid channel change within a digital system
US20050138658A1 (en) * | 2003-12-17 | 2005-06-23 | Bryan David A. | Digital audio/video recorders with user specific predictive buffering
US20050149975A1 (en) * | 2003-12-24 | 2005-07-07 | Curtis Jutzi | Method and system for predicting and streaming content utilizing multiple stream capacity
US20050216948A1 (en) * | 2004-03-26 | 2005-09-29 | Macinnis Alexander G | Fast channel change
US20060085828A1 (en) * | 2004-10-15 | 2006-04-20 | Vincent Dureau | Speeding up channel change
US7240162B2 (en) | 2004-10-22 | 2007-07-03 | Stream Theory, Inc. | System and method for predictive streaming
US20070204320A1 (en) * | 2006-02-27 | 2007-08-30 | Fang Wu | Method and apparatus for immediate display of multicast IPTV over a bandwidth constrained network
US7430222B2 (en) * | 2004-02-27 | 2008-09-30 | Microsoft Corporation | Media stream splicer
US7474359B2 (en) * | 2004-12-06 | 2009-01-06 | At&T Intellectual Properties I, L.P. | System and method of displaying a video stream
US7530090B2 (en) * | 2001-06-15 | 2009-05-05 | Koninklijke Philips Electronics N.V. | System for transmitting programs to client terminals
US20100146569A1 (en) * | 2007-06-28 | 2010-06-10 | The Trustees Of Columbia University In The City Of New York | Set-top box peer-assisted video-on-demand
US7817557B2 (en) | 2006-08-29 | 2010-10-19 | Telesector Resources Group, Inc. | Method and system for buffering audio/video data
US7996872B2 (en) * | 2006-12-20 | 2011-08-09 | Intel Corporation | Method and apparatus for switching program streams using a variable speed program stream buffer coupled to a variable speed decoder
US20110214148A1 (en) * | 2007-03-30 | 2011-09-01 | Gossweiler Iii Richard C | Interactive Media Display Across Devices
US20120131627A1 (en) * | 2010-11-22 | 2012-05-24 | Sling Media Pvt Ltd | Systems, methods and devices to reduce change latency in placeshifted media streams using predictive secondary streaming
US20120144438A1 (en) * | 2010-12-02 | 2012-06-07 | Alcatel-Lucent USA Inc. via the Electronic Patent Assignment System (EPAS) | Method and apparatus for distributing content via a network to user terminals
US8239909B2 (en) * | 2007-01-24 | 2012-08-07 | Nec Corporation | Method of securing resources in a video and audio streaming delivery system
US20120222065A1 (en) * | 2009-09-03 | 2012-08-30 | Nederlandse Organisatie Voor Toegepast- Natuurwetenschappelijk Onderzoek Tno | Pre-loading follow-up content
US20130042288A1 (en) * | 2010-04-26 | 2013-02-14 | Telefonaktiebolaget Lm | Method and arrangement for playing out a media object
US20130191511A1 (en) * | 2012-01-20 | 2013-07-25 | Nokia Corporation | Method and apparatus for enabling pre-fetching of media
US8577993B2 (en) * | 2011-05-20 | 2013-11-05 | International Business Machines Corporation | Caching provenance information
US20140149532A1 (en) * | 2012-11-26 | 2014-05-29 | Samsung Electronics Co., Ltd. | Method of packet transmission from node and content owner in content-centric networking

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5758257A (en) * | 1994-11-29 | 1998-05-26 | Herz; Frederick | System and method for scheduling broadcast of and access to video programs and other data using customer profiles
US6128712A (en) * | 1997-01-31 | 2000-10-03 | Macromedia, Inc. | Method and apparatus for improving playback of interactive multimedia works
US20020138630A1 (en) * | 2000-12-27 | 2002-09-26 | Solomon Barry M. | Music scheduling algorithm
US20020188956A1 (en) * | 2001-06-07 | 2002-12-12 | Michael Ficco | Method and system for electronic program guide temporal content organization
US20060037037A1 (en) * | 2004-06-14 | 2006-02-16 | Tony Miranz | System and method for providing virtual video on demand
WO2006041784A2 (en) * | 2004-10-04 | 2006-04-20 | Wave7 Optics, Inc. | Minimizing channel change time for ip video
US20070234395A1 (en) * | 2004-10-15 | 2007-10-04 | Vincent Dureau | Speeding up channel change
US7788698B2 (en) * | 2005-08-31 | 2010-08-31 | Microsoft Corporation | Pre-negotiation and pre-caching media policy
JP2007115293A (en) * | 2005-10-17 | 2007-05-10 | Toshiba Corp | Information storage medium, program, information reproducing method, information reproducing apparatus, data transfer method, and data processing method
US20070112973A1 (en) * | 2005-11-16 | 2007-05-17 | Harris John M | Pre-cached streaming content method and apparatus
JP2007272451A (en) * | 2006-03-30 | 2007-10-18 | Toshiba Corp | RECOMMENDED PROGRAM INFORMATION PROVIDING DEVICE, RECOMMENDED PROGRAM INFORMATION PROVIDING METHOD, AND PROGRAM
US8611285B2 (en) * | 2006-04-18 | 2013-12-17 | Sony Corporation | Method and system for managing video data based on a predicted next channel selection
EP1879376A3 (en) * | 2006-06-13 | 2011-04-06 | Samsung Electronics Co., Ltd. | Fast channel switching method and apparatus for digital broadcast receiver
US8522291B2 (en) * | 2006-12-29 | 2013-08-27 | Avermedia Technologies, Inc. | Video playback device for channel browsing
EP2113155A4 (en) * | 2007-02-21 | 2010-12-22 | Nds Ltd | Method for content presentation
US20080244665A1 (en) * | 2007-04-02 | 2008-10-02 | At&T Knowledge Ventures, Lp | System and method of providing video content
EP1978704A1 (en) * | 2007-04-02 | 2008-10-08 | British Telecommunications Public Limited Company | Content delivery
US20080313674A1 (en) * | 2007-06-12 | 2008-12-18 | Dunton Randy R | User interface for fast channel browsing
US20090190582A1 (en) | 2008-01-30 | 2009-07-30 | Texas Instruments Incorporated | System and method for streaming media in master or slave mode with ease of user channel configuration
US8739196B2 (en) * | 2010-06-15 | 2014-05-27 | Echostar Broadcasting Corporation | Apparatus, systems and methods for pre-tuning a second tuner in anticipation of a channel surfing activity
US9178633B2 (en) * | 2010-10-28 | 2015-11-03 | Avvasi Inc. | Delivery quality of experience (QoE) in a computer network
US9398347B2 (en) * | 2011-05-30 | 2016-07-19 | Sandvine Incorporated Ulc | Systems and methods for measuring quality of experience for media streaming
US9479562B2 (en) * | 2011-12-16 | 2016-10-25 | Netflix, Inc. | Measuring user quality of experience for a streaming media service
US8804042B2 (en) * | 2013-01-14 | 2014-08-12 | International Business Machines Corporation | Preemptive preloading of television program data

Patent Citations (31)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5586264A (en) * | 1994-09-08 | 1996-12-17 | Ibm Corporation | Video optimized media streamer with cache management
US6519011B1 (en) * | 2000-03-23 | 2003-02-11 | Intel Corporation | Digital television with more than one tuner
US20020089587A1 (en) * | 2000-05-18 | 2002-07-11 | Imove Inc. | Intelligent buffering and reporting in a multiple camera data streaming video system
US20030172134A1 (en) * | 2000-09-11 | 2003-09-11 | Konstantin Zervas | Method for dynamic caching
US7613792B2 (en) * | 2000-09-11 | 2009-11-03 | Handmark, Inc. | Method for dynamic caching
US6766376B2 (en) | 2000-09-12 | 2004-07-20 | Sn Acquisition, L.L.C | Streaming media buffering system
US7530090B2 (en) * | 2001-06-15 | 2009-05-05 | Koninklijke Philips Electronics N.V. | System for transmitting programs to client terminals
US20040001500A1 (en) * | 2002-07-01 | 2004-01-01 | Castillo Michael J. | Predictive tuning to avoid tuning delay
US20040111741A1 (en) * | 2002-12-06 | 2004-06-10 | Depietro Mark | Method and apparatus for predictive tuning in digital content receivers
US20040128694A1 (en) * | 2002-12-30 | 2004-07-01 | International Business Machines Corporation | Fast selection of media streams
US20040181813A1 (en) * | 2003-02-13 | 2004-09-16 | Takaaki Ota | Methods and systems for rapid channel change within a digital system
US20050138658A1 (en) * | 2003-12-17 | 2005-06-23 | Bryan David A. | Digital audio/video recorders with user specific predictive buffering
US20050149975A1 (en) * | 2003-12-24 | 2005-07-07 | Curtis Jutzi | Method and system for predicting and streaming content utilizing multiple stream capacity
US7430222B2 (en) * | 2004-02-27 | 2008-09-30 | Microsoft Corporation | Media stream splicer
US20050216948A1 (en) * | 2004-03-26 | 2005-09-29 | Macinnis Alexander G | Fast channel change
US20060085828A1 (en) * | 2004-10-15 | 2006-04-20 | Vincent Dureau | Speeding up channel change
US7240162B2 (en) | 2004-10-22 | 2007-07-03 | Stream Theory, Inc. | System and method for predictive streaming
US7474359B2 (en) * | 2004-12-06 | 2009-01-06 | At&T Intellectual Properties I, L.P. | System and method of displaying a video stream
US20070204320A1 (en) * | 2006-02-27 | 2007-08-30 | Fang Wu | Method and apparatus for immediate display of multicast IPTV over a bandwidth constrained network
US7817557B2 (en) | 2006-08-29 | 2010-10-19 | Telesector Resources Group, Inc. | Method and system for buffering audio/video data
US7996872B2 (en) * | 2006-12-20 | 2011-08-09 | Intel Corporation | Method and apparatus for switching program streams using a variable speed program stream buffer coupled to a variable speed decoder
US8239909B2 (en) * | 2007-01-24 | 2012-08-07 | Nec Corporation | Method of securing resources in a video and audio streaming delivery system
US20110214148A1 (en) * | 2007-03-30 | 2011-09-01 | Gossweiler Iii Richard C | Interactive Media Display Across Devices
US20100146569A1 (en) * | 2007-06-28 | 2010-06-10 | The Trustees Of Columbia University In The City Of New York | Set-top box peer-assisted video-on-demand
US20120222065A1 (en) * | 2009-09-03 | 2012-08-30 | Nederlandse Organisatie Voor Toegepast- Natuurwetenschappelijk Onderzoek Tno | Pre-loading follow-up content
US20130042288A1 (en) * | 2010-04-26 | 2013-02-14 | Telefonaktiebolaget Lm | Method and arrangement for playing out a media object
US20120131627A1 (en) * | 2010-11-22 | 2012-05-24 | Sling Media Pvt Ltd | Systems, methods and devices to reduce change latency in placeshifted media streams using predictive secondary streaming
US20120144438A1 (en) * | 2010-12-02 | 2012-06-07 | Alcatel-Lucent USA Inc. via the Electronic Patent Assignment System (EPAS) | Method and apparatus for distributing content via a network to user terminals
US8577993B2 (en) * | 2011-05-20 | 2013-11-05 | International Business Machines Corporation | Caching provenance information
US20130191511A1 (en) * | 2012-01-20 | 2013-07-25 | Nokia Corporation | Method and apparatus for enabling pre-fetching of media
US20140149532A1 (en) * | 2012-11-26 | 2014-05-29 | Samsung Electronics Co., Ltd. | Method of packet transmission from node and content owner in content-centric networking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
International search report and written opinion for application No. PCT/US2011/064112 dated Feb. 29, 2012.

Cited By (56)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US11314936B2 (en) | 2009-05-12 | 2022-04-26 | JBF Interlude 2009 LTD | System and method for assembling a recorded composition
US9190110B2 (en) | 2009-05-12 | 2015-11-17 | JBF Interlude 2009 LTD | System and method for assembling a recorded composition
US11232458B2 (en) * | 2010-02-17 | 2022-01-25 | JBF Interlude 2009 LTD | System and method for data mining within interactive multimedia
US9607655B2 (en) | 2010-02-17 | 2017-03-28 | JBF Interlude 2009 LTD | System and method for seamless multimedia assembly
US20110202562A1 (en) * | 2010-02-17 | 2011-08-18 | JBF Interlude 2009 LTD | System and method for data mining within interactive multimedia
US12265975B2 (en) | 2010-02-17 | 2025-04-01 | JBF Interlude 2009 LTD | System and method for data mining within interactive multimedia
US9271015B2 (en) | 2012-04-02 | 2016-02-23 | JBF Interlude 2009 LTD | Systems and methods for loading more than one video content at a time
US10474334B2 (en) | 2012-09-19 | 2019-11-12 | JBF Interlude 2009 LTD | Progress bar for branched videos
US9257148B2 (en) | 2013-03-15 | 2016-02-09 | JBF Interlude 2009 LTD | System and method for synchronization of selectably presentable media streams
US10418066B2 (en) | 2013-03-15 | 2019-09-17 | JBF Interlude 2009 LTD | System and method for synchronization of selectably presentable media streams
US9832516B2 (en) | 2013-06-19 | 2017-11-28 | JBF Interlude 2009 LTD | Systems and methods for multiple device interaction with selectably presentable media streams
US10448119B2 (en) | 2013-08-30 | 2019-10-15 | JBF Interlude 2009 LTD | Methods and systems for unfolding video pre-roll
US9530454B2 (en) | 2013-10-10 | 2016-12-27 | JBF Interlude 2009 LTD | Systems and methods for real-time pixel switching
US9520155B2 (en) | 2013-12-24 | 2016-12-13 | JBF Interlude 2009 LTD | Methods and systems for seeking to non-key frames
US9641898B2 (en) | 2013-12-24 | 2017-05-02 | JBF Interlude 2009 LTD | Methods and systems for in-video library
US9653115B2 (en) | 2014-04-10 | 2017-05-16 | JBF Interlude 2009 LTD | Systems and methods for creating linear video from branched video
US9792026B2 (en) | 2014-04-10 | 2017-10-17 | JBF Interlude 2009 LTD | Dynamic timeline for branched video
US10755747B2 (en) | 2014-04-10 | 2020-08-25 | JBF Interlude 2009 LTD | Systems and methods for creating linear video from branched video
US11501802B2 (en) | 2014-04-10 | 2022-11-15 | JBF Interlude 2009 LTD | Systems and methods for creating linear video from branched video
US11348618B2 (en) | 2014-10-08 | 2022-05-31 | JBF Interlude 2009 LTD | Systems and methods for dynamic video bookmarking
US10692540B2 (en) | 2014-10-08 | 2020-06-23 | JBF Interlude 2009 LTD | Systems and methods for dynamic video bookmarking
US9792957B2 (en) | 2014-10-08 | 2017-10-17 | JBF Interlude 2009 LTD | Systems and methods for dynamic video bookmarking
US10885944B2 (en) | 2014-10-08 | 2021-01-05 | JBF Interlude 2009 LTD | Systems and methods for dynamic video bookmarking
US11900968B2 (en) | 2014-10-08 | 2024-02-13 | JBF Interlude 2009 LTD | Systems and methods for dynamic video bookmarking
US11412276B2 (en) | 2014-10-10 | 2022-08-09 | JBF Interlude 2009 LTD | Systems and methods for parallel track transitions
US9672868B2 (en) | 2015-04-30 | 2017-06-06 | JBF Interlude 2009 LTD | Systems and methods for seamless media creation
US10582265B2 (en) | 2015-04-30 | 2020-03-03 | JBF Interlude 2009 LTD | Systems and methods for nonlinear video playback using linear real-time video players
US12132962B2 (en) | 2015-04-30 | 2024-10-29 | JBF Interlude 2009 LTD | Systems and methods for nonlinear video playback using linear real-time video players
US12119030B2 (en) | 2015-08-26 | 2024-10-15 | JBF Interlude 2009 LTD | Systems and methods for adaptive and responsive video
US11804249B2 (en) | 2015-08-26 | 2023-10-31 | JBF Interlude 2009 LTD | Systems and methods for adaptive and responsive video
US10460765B2 (en) | 2015-08-26 | 2019-10-29 | JBF Interlude 2009 LTD | Systems and methods for adaptive and responsive video
US11164548B2 (en) | 2015-12-22 | 2021-11-02 | JBF Interlude 2009 LTD | Intelligent buffering of large-scale video
US11128853B2 (en) | 2015-12-22 | 2021-09-21 | JBF Interlude 2009 LTD | Seamless transitions in large-scale video
US10462202B2 (en) | 2016-03-30 | 2019-10-29 | JBF Interlude 2009 LTD | Media stream rate synchronization
US11856271B2 (en) | 2016-04-12 | 2023-12-26 | JBF Interlude 2009 LTD | Symbiotic interactive video
US10218760B2 (en) | 2016-06-22 | 2019-02-26 | JBF Interlude 2009 LTD | Dynamic summary generation for real-time switchable videos
US11050809B2 (en) | 2016-12-30 | 2021-06-29 | JBF Interlude 2009 LTD | Systems and methods for dynamic weighting of branched video paths
US11553024B2 (en) | 2016-12-30 | 2023-01-10 | JBF Interlude 2009 LTD | Systems and methods for dynamic weighting of branched video paths
US10257578B1 (en) | 2018-01-05 | 2019-04-09 | JBF Interlude 2009 LTD | Dynamic library display for interactive videos
US11528534B2 (en) | 2018-01-05 | 2022-12-13 | JBF Interlude 2009 LTD | Dynamic library display for interactive videos
US10856049B2 (en) | 2018-01-05 | 2020-12-01 | Jbf Interlude 2009 Ltd. | Dynamic library display for interactive videos
US11606584B2 (en) | 2018-03-06 | 2023-03-14 | At&T Intellectual Property I, L.P. | Method for intelligent buffering for over the top (OTT) video delivery
US11166053B2 (en) | 2018-03-06 | 2021-11-02 | At&T Intellectual Property I, L.P. | Method for intelligent buffering for over the top (OTT) video delivery
US10694221B2 (en) | 2018-03-06 | 2020-06-23 | At&T Intellectual Property I, L.P. | Method for intelligent buffering for over the top (OTT) video delivery
US11429891B2 (en) | 2018-03-07 | 2022-08-30 | At&T Intellectual Property I, L.P. | Method to identify video applications from encrypted over-the-top (OTT) data
US11699103B2 (en) | 2018-03-07 | 2023-07-11 | At&T Intellectual Property I, L.P. | Method to identify video applications from encrypted over-the-top (OTT) data
US11601721B2 (en) | 2018-06-04 | 2023-03-07 | JBF Interlude 2009 LTD | Interactive video dynamic adaptation and user profiling
US11490047B2 (en) | 2019-10-02 | 2022-11-01 | JBF Interlude 2009 LTD | Systems and methods for dynamically adjusting video aspect ratios
US11245961B2 (en) | 2020-02-18 | 2022-02-08 | JBF Interlude 2009 LTD | System and methods for detecting anomalous activities for interactive videos
US12096081B2 (en) | 2020-02-18 | 2024-09-17 | JBF Interlude 2009 LTD | Dynamic adaptation of interactive video players using behavioral analytics
US12047637B2 (en) | 2020-07-07 | 2024-07-23 | JBF Interlude 2009 LTD | Systems and methods for seamless audio and video endpoint transitions
US12316905B2 (en) | 2020-07-07 | 2025-05-27 | JBF Interlude 2009 LTD | Systems and methods for seamless audio and video endpoint transitions
US11882337B2 (en) | 2021-05-28 | 2024-01-23 | JBF Interlude 2009 LTD | Automated platform for generating interactive videos
US12284425B2 (en) | 2021-05-28 | 2025-04-22 | JBF Interlude 2009 LTD | Automated platform for generating interactive videos
US12155897B2 (en) | 2021-08-31 | 2024-11-26 | JBF Interlude 2009 LTD | Shader-based dynamic video manipulation
US11934477B2 (en) | 2021-09-24 | 2024-03-19 | JBF Interlude 2009 LTD | Video player integration within websites

Also Published As

Publication number | Publication date
US20170034233A1 (en) | 2017-02-02
US9510043B2 (en) | 2016-11-29
US10305947B2 (en) | 2019-05-28
EP2649792B1 (en) | 2020-04-08
DK2649792T3 (en) | 2020-06-08
WO2012078963A1 (en) | 2012-06-14
US20150245093A1 (en) | 2015-08-27
EP2649792A4 (en) | 2016-01-20
EP2649792A1 (en) | 2013-10-16
US20120151539A1 (en) | 2012-06-14

Similar Documents

Publication | Publication date | Title
US10305947B2 (en) | Pre-buffering audio streams
US10972772B2 (en) | Variable bit video streams for adaptive streaming
US9769505B2 (en) | Adaptive streaming for digital content distribution
US9781183B2 (en) | Accelerated playback of streaming media
US8689267B2 (en) | Variable bit video streams for adaptive streaming
US9648385B2 (en) | Adaptive streaming for digital content distribution
AU2012207151A1 (en) | Variable bit video streams for adaptive streaming
US9712580B2 (en) | Pipelining for parallel network connections to transmit a digital content stream

Legal Events

Date | Code | Title | Description
AS | Assignment

Owner name: NETFLIX, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FUNGE, JOHN;PETERS, GREG;REEL/FRAME:025766/0544

Effective date: 20101123

STCF | Information on status: patent grant

Free format text: PATENTED CASE

MAFP | Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP | Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

