US11727201B2 - Annotation framework for video - Google Patents

Annotation framework for video

Info

Publication number
US11727201B2
Authority
US
United States
Prior art keywords
video
annotation
annotations
client
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US17/892,480
Other versions
US20220398375A1 (en)
Inventor
Mayur Datar
Ashutosh Garg
Vibhu Mittal
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Priority to US17/892,480
Publication of US20220398375A1
Application granted
Publication of US11727201B2
Active
Anticipated expiration


Abstract

A system and method for transferring annotations associated with a media file. An annotation associated with a media file is indexed to a first instance of that media file. By comparing features of the two instances, a mapping is created between the first instance of the media file and a second instance of the media file. The annotation can be indexed to the second instance using the mapping between the first and second instances. The annotation can be processed (displayed, stored, or modified) based on the index to the second instance.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of U.S. patent application Ser. No. 17/107,018, filed Nov. 30, 2020, which is a continuation of U.S. patent application Ser. No. 16/384,289, filed Apr. 15, 2019, which is a continuation of U.S. patent application Ser. No. 15/795,635, filed Oct. 27, 2017, which is a continuation of U.S. patent application Ser. No. 14/145,641 filed Dec. 31, 2013, which is a continuation of U.S. patent application Ser. No. 13/414,675, filed Mar. 7, 2012, which is a continuation of U.S. patent application Ser. No. 12/477,762, filed Jun. 3, 2009, which is a continuation of U.S. patent application Ser. No. 11/615,771, filed Dec. 22, 2006, each of which is hereby incorporated by reference herein in its entirety.
TECHNICAL FIELD
The disclosed embodiments relate generally to the authoring and display of annotations for video, and to the collaborative sharing and editing of annotations over a network.
BACKGROUND
Annotations provide a mechanism for supplementing video with useful information. Annotations can contain, for example, metadata describing the content of the video, subtitles, or additional audio tracks. Annotations can be of various data types, including text, audio, graphics, or other forms. To make their content meaningful, annotations are typically associated with a particular video, or with a particular portion of a video.
One method by which the useful information contained in annotations can be exchanged is by transferring annotated video over a network. However, transferring video content over a network introduces several obstacles. First, video files are generally quite large, and transferring video requires substantial amounts of bandwidth, as well as host and recipient computers that can support the required bandwidth and storage needs. Second, many video files are likely to be copyrighted, or to be otherwise prohibited from distribution without payment of a fee. Compliance with copyright restrictions requires additional software and hardware investments to prevent unauthorized copying. Third, as the recipient of an annotated video may already have an unannotated copy of the video, from a data efficiency perspective the transfer of an annotated copy of the video to such a recipient unnecessarily consumes both bandwidth and storage.
Thus, exchanging annotated video by transferring a complete copy of the video is an inadequate solution.
SUMMARY
Annotations associated with a media file are transferred between devices independently of the associated media file, while maintaining the appropriate temporal or spatial relationship of the annotation with any segment of the media file. An annotation associated with a media file is indexed to a first instance of that media file. A mapping is created between the first instance of the media file and a second instance of the media file by comparing features of the two instances. The annotation can be indexed to the second instance using the mapping between the first and second instances. The annotation can be displayed, stored, or modified based on the index to the second instance.
Comparing features of instances allows the annotations to be consistently indexed to a plurality of independently acquired instances of a media file. Consistent indexing of annotations supports sharing of annotations and allows for a collaborative community of annotation authors, editors, and consumers. Annotations can include advertisements or premium for-pay content. Privileges for submitting, editing or viewing annotations can be offered for sale on a subscription basis, free of charge, or can be bundled with purchase of media files.
According to one embodiment, a first user submits to an annotation server annotations that are indexed to his instance of a media file. The annotation server maps the first user's instance of the media file to a canonical instance of the media file and stores the submitted annotation indexed to the canonical instance of the media file. A second user requests annotations, and the annotation server maps the second user's instance of the media file to the canonical instance of the media file. The annotation server sends the annotation to the second user indexed to the second user's instance of the media file.
The features and advantages described in this summary and the following detailed description are not all-inclusive. Many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims hereof.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a network connecting a community of video providers and consumers.
FIG. 2 illustrates frames of a video, and the indexing of annotations to one or more frames.
FIG. 3 illustrates frames of two instances of a video.
FIG. 4(a) illustrates annotations indexed to a canonical instance of video.
FIG. 4(b) illustrates mapping a client instance of video to a canonical instance of video.
FIG. 5 illustrates one embodiment for storing video and annotations.
FIG. 6 is an event trace of the display and modification of annotations associated with a video.
FIG. 7(a) illustrates a user interface for viewing, creating, and editing annotations.
FIG. 7(b) illustrates a user interface for creating a new annotation.
FIG. 8 illustrates a method for determining which annotations to display.
The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
DESCRIPTION OF EMBODIMENTS
FIG. 1 shows a network connecting a community of video providers and consumers, and illustrates one embodiment by which a plurality of users can exchange videos and annotations. Video is used herein as an example of a media file with which annotation can be associated. This example is chosen for the purposes of illustration and is not limiting. Other types of media files with which annotations can be associated include, but are not limited to, audio programs, Flash, movies (in any encoding format), slide presentations, photo collections, animated programs, and other documents. Other examples will be apparent to one of skill in the art without departing from the scope of the present invention.
A user views, authors, and edits annotations using a client 104. An annotation is any data which can usefully supplement a media file. For example, an annotation can be an audio or textual commentary, translation, advertisement or summary, rating on a predetermined scale (1-5 stars), metadata, or a command for how the media file should be displayed. An annotation can also include video content. The clients 104 include software and hardware for displaying video. For example, a client 104 can be implemented as a television, a personal computer, a digital video recorder (DVR), a personal digital assistant (PDA), a cellular telephone, or another device having or connected to a display device; software includes any video player adapted to decode video files, such as MPEG-2, MPEG-4, QuickTime, VCD, or any other current or future video format. Other examples of clients will be apparent to one of skill in the art without departing from the scope of the present invention. A graphical user interface used by the client 104 according to one embodiment is described herein with reference to FIGS. 7(a) and 7(b).
The clients 104 are connected to a network 105. The network 105 can be implemented as any electronic medium by which annotation content can be transferred. Through the network 105, the clients 104 can send and receive data from other clients 104. The network 105 can be a global (e.g., the Internet), regional, wide-area, or local area network.
A video server 106 stores a collection of videos on an electronic medium. Responsive to a request by a client 104 for a particular video (or a set of videos matching certain criteria), the video server 106 transfers a video over the network 105 to the client 104. The video server 106 may be configured to charge a fee for the service of providing the video to the client, or it may provide the video free of charge. The video server 106 can be implemented, for example, as an on-demand content service, an online store, or a streaming video server. Other examples of video servers will be apparent to one of skill in the art without departing from the scope of the present invention.
Some of the clients 104 are also connected to video sources 102. A video source 102 is a device providing video to the client. For example, a video source 102 could be a cable box, a television antenna, a digital video recorder, a video cassette player, a camera, a game console, a digital video disk (DVD) unit, or any other device capable of producing a video output in a format readable by the client 104. Other examples of video sources 102 will be apparent to one of skill in the art without departing from the scope of the present invention.
According to one embodiment of the present invention, clients 104 can send video over the network 105. For example, the client 104B can receive video from the video source 102B and transfer it through the network to another client, such as the client 104D. Clients 104 can also send video through the network 105 to the video server 106. Video sent from a client 104 to the video server 106 is stored on an electronic medium and is available to other clients 104.
Annotation server 110 is connected to the network 105. The annotation server 110 stores annotations on an electronic medium. Responsive to a request from a client 104 for an annotation associated with a particular media file, the annotation server 110 sends one or more annotations associated with the media file to the client 104 through the network 105. Responsive to a submission by the client 104 of one or more annotations associated with a media file, the annotation server 110 stores the one or more annotations in association with the media file. The annotation server 110 stores annotations indexed to instances of one or more media files or portions thereof. A method used by the annotation server 110, according to various embodiments of the present invention, is described herein with reference to FIGS. 4-6.
Optionally, a video server 108 is communicatively connected to the annotation server 110, either locally or over the network 105. The video server 108 can have many of the same capabilities as described herein with reference to the video server 106. The video server 108 can transfer video to the clients 104 over the network 105. In one embodiment, the annotation server 110 and video server 108 in combination transfer annotated video to a client 104. In another embodiment, the video server 108 stores a canonical instance of a video, as described herein with reference to FIG. 5.
As shown in the figure, any given client may have access to video from a variety of sources. For example, the client 104A can receive video directly from the video source 102A or from the video server 106 via the network 105. Different clients sometimes have access to different video sources. For example, like the client 104A, the client 104B can receive video from the video server 106 via the network 105, but, in contrast to the client 104A, has direct access to the video source 102B instead of the video source 102A.
Although a client can obtain video from a potentially wide range of video sources, the present invention allows annotations sent from the annotation server 110 to the client to be consistently associated with a particular media file and portion thereof, regardless of the source from which the client's copy of the video was obtained. The consistent association of annotations with media files facilitates the exchange of annotations between users having different instances (or copies) of a given media file. The present invention enables the sharing and exchange of annotations among a plurality of clients by reindexing annotations for various instances of client media files. For example, the annotation server 110 sends annotations indexed to the client 104A's instance of a video and sends annotations indexed to the client 104B's instance of the video, despite the fact that the two clients may have acquired their copies of the video from different sources. The annotation server 110 beneficially provides annotations that are not only appropriate for the video displayed by the client 104, but for the particular instance of the video which the client 104 is displaying, as described herein with reference to FIG. 4.
Referring now to FIG. 2, there is shown a conceptual diagram illustrating how annotations are associated temporally and/or spatially with a video file and one or more frames thereof. FIG. 2 shows a series of video frames, running from frame 200 to frame 251. The client 104 displays these frames, and can also pause, rewind, fast-forward, skip, or otherwise adjust the order or speed with which the frames are displayed.
For the purposes of illustration, the following discussion refers to a video as being composed of frames. Video is sometimes stored or transmitted as blocks of frames, fields, macroblocks, or in sections of incomplete frames. When reference is made herein to video being composed of frames, it should be understood that during intermediate steps video may in fact be stored as any one of various other forms. The term “frame” is used herein for the sake of clarity, and is not limiting to any particular format or convention for the storage or display of video.
Some of the frames have annotations associated with them as provided by a particular user. In the example illustrated, frame 201 is drawn in greater detail to illustrate some of its associated annotations. As shown in the figure, annotations can be associated with a particular spatial location of a frame, or they can be associated with an entire frame. For example, annotation 1 is associated with a rectangular box in the upper-left corner of frame 201. In contrast, annotation 4 is associated with the entire frame.
Annotations can also be associated with overlapping spatial locations. For example, annotation 1 is associated with a rectangular box overlapping a different rectangular box associated with annotation 2. In one embodiment, annotations can be associated with a spatial location defined by any closed form shape. For example, as shown in FIG. 2, annotation 3 is associated with spatial locations defined by an elliptical shape.
Annotation list 280 maintains associations between the spatial definition of annotations and the content of annotations. Annotation 1, associated with a rectangular box in frame 201, includes the text "Vice President." Annotation 1 is an example of an annotation useful for highlighting or adding supplemental information to particular portions of a frame. Annotation 4 is associated with the entire frame 201 and contains the text "State of the Union." Annotation 4 is an example of an annotation used to summarize the content of a frame. Annotation 5 is associated with the entire frame 201 and contains some audio, which, in this case, is a French audio translation. Annotation 5 is an example of an annotation used to provide supplemental audio content.
Annotations can also have temporal associations with a media file or any portion thereof. For example, an annotation can be associated with a specific frame, or a specific range of frames. In FIG. 2, for example, annotation 2 could be associated with frame 200 to frame 251, while annotation 5 is associated only with frame 201. The spatial definition associated with an annotation can also change over time. For example, annotation 1 can be associated with a first region in frame 201, and with a second region in frame 202. Time- and spatially-dependent annotation associations are particularly useful for providing supplemental information regarding objects in motion, and can accommodate, as in the example shown in the figure, the movement of the Vice-President of the United States. The temporal associations can be defined in terms of frame numbers, timecodes, or any other indexing basis. The illustration of the annotation list 280 as a table is not meant to limit the underlying storage format used; any format or organization of the annotation information may be employed, including optimized formats that reduce storage requirements and/or increase retrieval speed.
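As an illustration, the sketch below shows one way a client might represent an entry of the annotation list 280 in code. The field names and the Python representation are assumptions for the purposes of illustration; the specification does not prescribe a storage format.

```python
# A minimal sketch of an annotation record, assuming frame-number indexing and
# an optional rectangular spatial definition. All names are hypothetical.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Annotation:
    content: str                 # e.g., text; could also reference audio or video
    start_frame: int             # first frame of the temporal association
    end_frame: int               # last frame (equal to start_frame for one frame)
    region: Optional[Tuple[int, int, int, int]] = None  # (x, y, w, h); None = whole frame

# Annotation 1 covers a box in frame 201 only; an annotation spanning a scene
# might instead cover frames 200 through 251 with no spatial restriction.
ann1 = Annotation("Vice President", 201, 201, region=(10, 10, 160, 90))
ann2 = Annotation("scene-level commentary", 200, 251)
```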
During playback of a media file, the client 104 is adapted to display the annotations associated with the frames of the file. Annotations can be displayed, for example, as text superimposed on the video frame, as graphics shown alongside the frame, or as audio reproduced simultaneously with video; annotations may also appear in a separate window or frame proximate to the video. Annotations can also include commands for how the media file with which they are associated is to be displayed. Displaying command annotations can include displaying video as instructed by the annotation. For example, responsive to an annotation, the client 104 might skip to a different place in a video, display a portion of the video in slow motion, or jump to a different video altogether.
The client 104 is capable of displaying a subset of the available annotations. For example, a user watching the video of FIG. 2 can select which annotations should be displayed by the client 104 by designation of various criteria. The user can choose to receive only certain types of annotations (e.g., commentary, text, graphic, audio), or only annotations that are defined by a particular region of the display. The user can choose to receive only annotations in a particular language, matching certain search criteria (such as keywords), or authored by a particular user. As another example, when annotations are written and edited in a collaborative community of users, a user can choose to receive only annotations authored by users with reputations above a certain threshold, or to receive only annotations with ratings above a certain threshold. Users can also search for annotations, and retrieve associated video based on the results of the annotation search.
Certain annotations can be given a priority that does not allow a user to prevent them from being displayed. For example, annotations can include advertisements, which may be configured so that no other annotations are displayed unless the advertisement annotations are also displayed. Such a configuration would prevent users from viewing certain annotations while avoiding paid advertisement annotations. A method for determining which annotations to display is described herein with reference to FIG. 8.
Users can also edit annotations using the client 104. For example, a user viewing the annotations shown in FIG. 2 may be dissatisfied with annotation 1. The user changes the annotation text "Vice President" to "Vice President of the United States" using an input device connected to the client 104. Future display of the annotation (to this user or possibly other users) would include the modified text "Vice President of the United States." As another option, a user can change the temporal or spatial definition with which annotations are associated. For example, the astute user may recognize that the documents shown on the right side of the frame are actually excerpts from 15 USC §§ 78dd-1, and that the Constitution (despite being almost completely obscured by the position of the President) is just barely visible on the left side of the frame. The user can change the spatial definition with which annotation 3 is associated accordingly, for example, by dragging (for example, in a direct manipulation user interface illustrating frames of the video) the spatial definition to a different location using an input device connected to the client 104.
The annotation list 280 is shown in FIG. 2 for the purposes of illustration as one example of how a client can organize annotations and their associated frames. The annotation list 280 is useful for managing and displaying annotations associated with a frame or range of frames, but various clients can organize annotations differently without departing from the scope of the present invention.
As shown in FIG. 1, a client sometimes has access to multiple instances of the same video, and different clients frequently have access to various different instances. FIG. 3 illustrates sequences of the frames making up two instances of the same video. For example, video instance 302 could be a copy of a video received from a cable channel, while video instance 304 is a copy of the same video received from an online video store. As another example, video instance 302 could be a copy of a video recorded by a first user's digital video recorder receiving a signal from a first broadcast station, while video instance 304 is a copy of the same video recorded by a second user's digital video recorder receiving a signal from a second broadcast station.
As video instance 302 is acquired independently of video instance 304, it is likely that the two copies are not time-synchronized, and/or are of different lengths. For example, video instance 302 might have been recorded from The Zurich Channel, a television affiliate known for its punctuality and good taste. Video instance 304, on the other hand, might have been recorded from TV Tulsa, a television affiliate known for its slipshod programming and haphazard timing. Thus, as shown in FIG. 3, the frames of the first instance might not necessarily correspond to the frames of the second instance. In addition, there are numerous other types of differences that can arise between different instances of a given program or broadcast. These include, but are not limited to, differences in encoding parameters (e.g., resolution, frame rate) and differences in file formats.
In the example illustrated, the frames 306 of video instance 302 are time-shifted with respect to the frames 308 of the video instance 304. The first frame of the frames 308 contains the same content as the third frame of the frames 306. When annotations are associated with specific frames of a video by one user, it is desirable that they be displayed with those frames when shown to another user in spite of the possibility of time shifting between various instances of the video. Notice as well that video instance 302 has 6 frames, whereas video instance 304 has 4 frames.
The annotation server 110 accounts for this time shifting of frames so that annotations can be properly displayed with various instances of the video. For example, suppose an annotation describes the driver who enters the third frame of the frames 306. If this annotation is indexed with respect to the frames 306, the annotation server 110 translates this index to an index with respect to the frames 308 so that the annotation can be properly displayed with the video instance 304. The annotation server 110 translates the annotation indexes by mapping one video instance to another.
Referring now to FIG. 4(a), annotations 404 are indexed to a canonical instance of video 406. For the purposes of illustration, the instance of video having annotations indexed to it is referred to as the canonical instance, and the instance of video that will be displayed at the client is referred to as the client instance. According to one embodiment, annotations can be shared in multiple directions between two or more client peers. As such, it is possible that there is no definitively canonical instance of video. It should be understood that the term "canonical instance" refers to a role that an instance of video plays in one case of annotation exchange, and not necessarily to the status of that copy of the video in the video distribution system or in the annotation framework as a whole.
The video server 108 may store video content in chunks. One system and method for storing video in chunks is disclosed in U.S. patent application Ser. No. 11/428,319, titled "Dynamic Media Serving Infrastructure" to Manish Gupta, et al., filed Jun. 30, 2006, and U.S. Provisional Patent Application Ser. No. 60/756,787, titled "Discontinuous Download of Media Articles" to Michael Yu, et al., filed Jan. 6, 2006, both of which are incorporated herein by reference in their entirety. FIG. 4(a) shows a canonical instance of video 406 stored as chunk 402A and chunk 402B. A chunk is a data element for storing video. Storing video in chunks is beneficial for the efficient indexing and transfer of video, and allows video data to be manipulated in more manageable sizes.
As described herein with reference to FIG. 2, an annotation can be associated with a specific frame in a video. The association between the annotation and the specific frame is stored by indexing the annotation to a frame in a particular instance of the video. Annotation 404A, for example, is indexed to a frame of the canonical instance of video 406, in this case to a frame in the chunk 402A.
As also described herein with reference to FIG. 2, an annotation can be associated with a range of frames in a video. A set of one or more frames of video is sometimes referred to as a segment of video. Annotation 404D, for example, is indexed to a segment of video of the canonical instance of video 406, in this case the segment including one or more frames of the chunk 402B.
The client receives a video from a video source or server (such as one of those described herein with reference to FIG. 1) and stores a copy as the client instance of video 408. As the client displays the video, the client periodically requests, from the annotation server, annotations associated with frames of video about to be displayed. To ensure that annotations are requested, retrieved, transmitted, and received in sufficient time for display with their associated frames, the client requests annotations associated with a frame some time before that frame is to be displayed.
For increased efficiency, the client can combine requests for annotations associated with particular frames into a request for annotations associated with a segment of video. A request could, for example, seek to retrieve all of the annotations associated with a given video. In the example shown, the client requests annotations associated with the segment of video 409. The request for annotations will return annotations associated with individual frames of the segment, or annotations associated with a superset or subset of the frames of the segment. For example, the client can request annotations associated with exactly the segment of video 409, associated with individual frames of the segment of video 409, or associated with the entire video.
Referring now to FIG. 4(b), the annotation server 110 maps the client instance of video 408 to a canonical instance of video 406. The mapping 412 describes the correspondence between frames of the client instance of video 408 and frames in the canonical instance of video 406. The annotation server 110 can map the client instance of the video 408 to the canonical instance of video 406 using a variety of techniques. According to one embodiment of the present invention, the client's request for annotations includes a feature of the client instance of video 408. A feature is a succinct representation of the content of one or more frames of video that are similar. For example, the annotation server 110 may group the frames into logical units, such as scenes or shots. The annotation server 110 may use scene detection algorithms to group the frames automatically. One scene detection algorithm is described in Naphade, M. R., et al., "A High-Performance Shot Boundary Detection Algorithm Using Multiple Cues", 1998 International Conference on Image Processing (Oct. 4-7, 1998), vol. 1, pp. 884-887, which is incorporated by reference herein.
Thus, the annotation server 110 can compute one feature set for all frames that belong to the same scene. The feature can be, for example, a description of a characteristic in the time, spatial, or frequency domains. For example, a client can request annotations associated with a specific frame, and can describe that frame by its time, position, and frequency domain characteristics. The client can use any technique for determining features of video, such as those described in Zabih, R., Miller, J., and Mai, K., "Feature-Based Algorithms for Detecting and Classifying Scene Breaks", Proc. ACM Multimedia 95, San Francisco, Calif. (November 1995), pp. 189-200; Arman, F., Hsu, A., and Chiu, M-Y., "Image Processing on Encoded Video Sequences", Multimedia Systems (1994), vol. 1, no. 5, pp. 211-219; and Ford, R. M., et al., "Metrics for Shot Boundary Detection in Digital Video Sequences", Multimedia Systems (2000), vol. 8, pp. 37-46, all of the foregoing being incorporated by reference herein. One of ordinary skill in the art would recognize various techniques for determining features of video.
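The sketch below shows one simple per-frame feature of the kind described above, a coarse luminance histogram with an L1 distance over it. This is an illustrative stand-in only: the cited shot-boundary papers describe the actual published techniques, and the choice of histogram and metric here is an assumption.

```python
# A minimal sketch of a frame feature, assuming frames arrive as H x W x 3
# uint8 arrays. A coarse histogram is used purely for illustration.
import numpy as np

def frame_feature(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Return a normalized luminance histogram for one frame."""
    luminance = frame.mean(axis=2)                    # crude luminance proxy
    hist, _ = np.histogram(luminance, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)                  # normalize for comparison

def feature_distance(f1: np.ndarray, f2: np.ndarray) -> float:
    """L1 distance; any metric capturing closeness of frames would serve."""
    return float(np.abs(f1 - f2).sum())
```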
Generally, a distance function is defined over the universe of features that captures the closeness of the underlying sets of frames. When the annotation server 110 receives a request for annotations for a frame, along with its feature set, the server first attempts to map the frame in the request to the closest frame in the canonical instance of video 406. The annotation server 110 uses the temporal position of the frame in the client instance of video 408 (one of the features in the feature set) to narrow down the set of frames in the canonical video 406 that may potentially map to this frame, e.g., by limiting the candidate set to frames within a fixed amount of time or a fixed number of frames before and after the selected frame. For all of the frames in the candidate set, the annotation server 110 computes the distance between the feature set of the frame from the client 408 and the feature set of the frame from the canonical video 406. The frame from the canonical video 406 with the shortest distance is termed the matching frame. The client frame is then mapped to the matching frame. If the distance to the closest frame is greater than a certain threshold, indicating absence of a good match, no annotations are returned. The components described by a feature used to create the mapping can reside in the segment of video for which annotations are being requested, but need not. Similarly, the components described by a feature may or may not reside in the segment of video to which an annotation is indexed.
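A sketch of the matching step just described, assuming per-frame features exist for both instances and using the temporal position to narrow the candidate set; `feature_distance` is from the previous sketch. The window size and threshold are illustrative assumptions. Applying this function once per client frame yields the per-frame mapping list described below.

```python
def map_client_frame(client_idx, client_feat, canonical_feats,
                     window=120, threshold=0.5):
    """Return the canonical frame index best matching one client frame, or
    None when even the closest candidate is too far away (no good match)."""
    lo = max(0, client_idx - window)                         # temporal position
    hi = min(len(canonical_feats), client_idx + window + 1)  # narrows candidates
    best_idx, best_dist = None, float("inf")
    for i in range(lo, hi):
        d = feature_distance(client_feat, canonical_feats[i])
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx if best_dist <= threshold else None

# One entry per client frame, as in the mapping list the server maintains:
# mapping = [map_client_frame(i, f, canonical_feats)
#            for i, f in enumerate(client_feats)]
```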
Features may be represented as strings, allowing the annotation server 110 to search for features using an inverted index from feature strings to frames, for example. The annotation server 110 may also search for features by defining a distance metric over the feature set and selecting the candidate frame with the smallest distance. Such mapping could take place at the time the server 110 receives the client request, or the annotation server 110 can pre-compute and maintain the distances in an offline process.
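The inverted-index option can be sketched as follows, under the assumption that a feature vector is quantized into a short string key; the quantization scheme here is invented for illustration and is not specified by the patent.

```python
from collections import defaultdict

def feature_key(feat, levels: int = 8) -> str:
    """Coarsely quantize a normalized feature vector into a lookup string."""
    return ",".join(str(int(v * levels)) for v in feat)

# Offline: index canonical frames by their feature strings.
index = defaultdict(list)            # feature key -> canonical frame numbers
canonical_feats = []                 # filled elsewhere with per-frame features
for i, feat in enumerate(canonical_feats):
    index[feature_key(feat)].append(i)

# Online: candidate canonical frames for a client feature are a direct lookup:
# candidates = index[feature_key(client_feat)]
```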
Using the mapping 412, the annotation server 110 determines a corresponding segment of video 414 in the canonical instance of video. The corresponding segment of video 414 has content that closely matches the content of the segment of video 409, as described above. Under ideal conditions, the corresponding segment of video 414 contains instances of the same frames as the segment of video 409. The annotation server 110 associates each frame in the client video 408 that maps to a frame in the canonical instance of video with a frame number and maintains a list of frame numbers for each frame mapping. In one example, the length of the list of frame numbers is equal to the number of frames in the client instance of video 408, where each entry maps the corresponding frame to the frame in the canonical instance of video 406.
The annotation server determines the annotations that are indexed to the corresponding segment of video 414 (or to a superset or subset of the corresponding segment of video 414). As the example of FIG. 4(b) illustrates, the annotation 404D is indexed to a segment of video that falls in the corresponding segment of video 414. In response to the request for annotations for the segment 409, the annotation server 110 transmits the annotation 404D to the client.
Optionally, the annotation server can also transmit information describing the segment of the video that the annotation is associated with. For example, using a feature as a reference point, the annotation server can describe a frame (or range of frames) with respect to that reference point.
FIG. 5 illustrates the organization of video and annotations, showing how annotations can be indexed to a canonical instance of video in an annotation server.
According to one embodiment, annotations are stored in an annotation repository. Canonical instances of video are stored in a video repository. The annotation and video repositories can be included in the same server, or they can be included in different servers. For example, the annotations can be stored in the annotation server 110 and video can be stored in the video server 108.
An annotation includes a reference to a segment of video. For example, the annotation 404D includes a temporal definition 501D. A temporal definition specifies one or more frames of a canonical instance of video. In the example illustrated, the temporal definition 501D refers to one of the frames 504 of the canonical instance of video 406. As another example, the annotation 404F includes temporal definition 510F. Temporal definition 510F refers to a range of the frames of the canonical instance of video 406. A temporal definition can be described using a variety of metrics including, but not limited to, document identifiers, frame identifiers, timecodes, length in frames, length in milliseconds, and various other combinations.
The temporal definition is one example of how annotations can be associated with segments of video. Other methods for associating annotations with segments of video will be apparent to one of skill in the art without departing from the scope of the present invention.
An annotation also includes annotation content 511. Annotation content can include, for example, audio, text, metadata, commands, or any other data useful to be associated with a media file. An annotation can optionally include a spatial definition 509, which specifies the area of the frame (or frames) with which that annotation is associated. Use of a spatial definition 509 is an example of one method for associating an annotation with a specific spatial location on a frame.
As an example, suppose the corresponding segment of video 414 includes the frames 504. The corresponding segment of video 414 can be defined as a range of timecodes. The annotation server retrieves annotations by searching for annotations with references to timecodes that are within or overlapping with the range of timecodes defining the corresponding segment of video 414. The annotation server retrieves annotation 404D, including the annotation content 511D. The annotation server transmits the annotation content 511D (or the annotation 404D, which includes the annotation content 511D) to the client, which displays the annotation content 511D.
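Retrieval by temporal overlap, as just described, reduces to a range-intersection test. The sketch below assumes annotation records carrying inclusive start/end indexes (frame numbers or timecodes), as in the record sketch earlier; the store layout is hypothetical.

```python
def annotations_for_segment(annotations, seg_start, seg_end):
    """Yield annotations whose temporal definition overlaps the corresponding
    segment [seg_start, seg_end]; both ranges are inclusive."""
    for ann in annotations:
        if ann.start_frame <= seg_end and ann.end_frame >= seg_start:
            yield ann
```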
FIG. 6 is an event trace of the display and modification of annotations associated with a video, according to one embodiment of the present invention. The client 104 receives a segment of video from a video server 106 or a video source 102, and stores a copy as the client instance of video. The client processes the segment using a feature detection algorithm and determines 602 a feature based on a first segment of video. The client sends a request for annotations associated with a second segment of video, the request including the feature, to the annotation server 110.
The first segment of video may contain some frames in common with the second segment of video, but need not. The feature included in the request for annotations associated with the second segment of video may additionally include features from adjacent segments to the second segment of video.
The request can also include metadata describing the content or title of the video so that the annotation server can retrieve the appropriate annotations. For example, video purchased from an online store may have a video title that can be used to filter the set of available annotations. As another example, the metadata sent to the annotation server for video acquired from broadcast television or cable can include a description of the time and channel at which the video was acquired. The annotation server can use this time and channel information to determine the appropriate video and retrieve annotations associated with that video.
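Putting the pieces of the request together, a client message might look like the sketch below. The shape and field names are assumptions for illustration; the patent does not specify a wire format.

```python
# Hypothetical request body: metadata to identify the video, the segment for
# which annotations are wanted, and features supporting the instance mapping.
request = {
    "metadata": {
        "title": "State of the Union",        # e.g., from an online store
        "channel": "TV Tulsa",                # or time/channel for broadcast capture
        "recorded_at": "2006-01-31T21:00:00Z",
    },
    "segment": {"start_frame": 200, "end_frame": 251},
    "features": [[0.12, 0.07, 0.81], [0.10, 0.09, 0.81]],  # per-frame features
}
```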
The annotation server 110 receives the request for annotations. The annotation server 110 searches 604 for the feature included in the request in a canonical instance of the video and creates a mapping between the client instance of the video and the canonical instance of the video. In one embodiment, the request for annotations includes metadata indicating a particular video for which to retrieve annotations, and the annotation server 110 searches 604 for the feature in a canonical instance of the video indicated by this metadata.
The annotation server 110 searches 608 an annotation repository for annotations associated with the video and returns an annotation. For example, the annotation server 110 can search for annotations indexed to the canonical instance of the video. Using the mapping between the two instances, the annotation server 110 can translate an index to the canonical instance of the video into an index to the client instance of the video.
The annotation server 110 transmits an annotation associated with the video to the client. According to one embodiment, the annotation also includes index information defining the set of one or more frames associated with the annotation. The annotation server 110 can define frames associated with the annotation, for example, by indexing the association with respect to the feature.
The client 104 receives and displays 610 the annotation. The client 104 can also process index information for the annotation so that the annotation is displayed appropriately along with the client instance of the video.
Optionally, the client receives 612 changes to the annotation from the user. For example, a user can edit text, re-record audio, modify metadata included in the annotation content, or change an annotation command. The client 104 transmits the modified annotation to the annotation server 110, or, alternatively, transmits a description of the modifications to the annotation server 110.
The annotation server 110 receives the modified annotation. The annotation server 110 stores 614 the modified annotation and indexes the modified annotation to the canonical instance of the video. The annotation server 110 can index the modified annotation with the canonical instance of the video using a variety of methods. For example, the annotation server 110 can translate an index to the client instance of the video using a previously established mapping. As another example, the client 104 can include a feature with the modified annotation, and the annotation server 110 can establish a new mapping between the client instance of the video and the canonical instance of the video.
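Re-indexing through a previously established mapping can be sketched as below, where `mapping[i]` is assumed to hold the canonical frame matched to client frame `i`, or None where no good match was found. The names are illustrative only.

```python
def reindex(start_frame: int, end_frame: int, mapping):
    """Translate a client-instance frame range into canonical-instance frames."""
    start, end = mapping[start_frame], mapping[end_frame]
    if start is None or end is None:      # absence of a good match: do not index
        raise ValueError("segment has no match in the canonical instance")
    return start, end
```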
For the purposes of illustration, features have been shown as flowing from the client 104 to the annotation server 110. However, for the purpose of establishing a mapping between the client instance of the video and the canonical instance of the video, features can flow in either direction. The example of the annotation server 110 maintaining this mapping on the basis of features sent by the client 104 is given for the purposes of illustration and is not limiting. In another embodiment, the client maintains the mapping between the client instance of the video and the canonical instance of the video, for example, on the basis of features of the canonical instance of the video sent by the annotation server 110 to the client 104. In yet another embodiment, a third party maintains the mapping between the client instance of the video and the canonical instance of the video by receiving features from both the annotation server 110 and the client 104.
The client 104 can also be used to submit a new annotation. For example, a user can create annotation content and associate it with a video. The user can also specify a spatial definition for the new annotation and choose a range of frames of the client instance of the video to which the annotation will be indexed. The client 104 transmits the new annotation to the annotation server 110 for storage.
Referring now to FIG. 7(a), a user can search, create, or edit annotations using a graphical user interface. In the example illustrated, the graphical user interface for annotations is integrated into a video player graphical user interface 702. The video player graphical user interface 702 is an example of an interface that might be shown on the display device of a client 104. The video player graphical user interface 702 includes a display area for presenting the media file (in the example illustrated, a video), as well as control buttons for selecting, playing, pausing, fast forwarding, and rewinding the media file. The video player graphical user interface 702 can also include advertisements, such as the advertisement for the National Archives and Records Administration shown in FIG. 7(a).
The video player graphical user interface 702 presents a frame of video. Shown along with the frame of video is an annotation definition 704. The annotation definition 704 graphically illustrates the spatial definition and/or the temporal definition of an annotation. For example, the annotation definition 704 shown in FIG. 7(a) delineates a subset of the frame with which an annotation is associated. As another example, an annotation definition 704 can delineate a range of frames with which an annotation is associated. While a single annotation definition 704 is shown in FIG. 7(a), the video player graphical user interface 702 can include a plurality of annotation definitions 704 without departing from the scope of the invention.
The annotation definition 704 can be displayed in response to a user selection, or as part of the display of an existing annotation. For example, the user can use an input device to select a region of the frame with which a new annotation will be associated, and in response to that selection the video player graphical user interface 702 displays the annotation definition 704 created by the user. As another example, the video player graphical user interface 702 can display video and associated annotations, and can display the annotation definition 704 in conjunction with displaying an associated annotation.
The video player graphical user interface 702 also includes annotation control buttons 706, which allow the user to control the content and display of annotations. For example, the video player graphical user interface 702 can include a button for searching annotations. In response to the selection of the search annotations button, the client searches for annotations associated with the annotation definition 704 (or a similar definition), or for annotations associated with a keyword. The results of the search can then be displayed on the video player graphical user interface 702. As another example, the video player graphical user interface 702 can include a button for editing annotations. In response to the selection of the edit annotations button, the video player graphical user interface 702 displays one or more annotations associated with the annotation definition 704 and allows the user to modify the one or more annotations. As yet another example, the video player graphical user interface 702 can include a button for creating a new annotation. In response to the selection of the create new annotation button, the video player graphical user interface 702 displays options such as those shown in FIG. 7(b).
Referring now to FIG. 7(b), the annotation control buttons 706 indicate that the create new annotation button has been selected. The video player graphical user interface 702 includes a display area for receiving user input of the new annotation content. In the example illustrated, the new annotation content includes some new annotation text 708. As shown in FIG. 7(b), as the user enters the description "General MacArthur", the new annotation text 708 is displayed. In response to a further user selection indicating the authoring of annotation content is complete, the new annotation is submitted, for example, to the annotation server 110, and displayed in the video player graphical user interface 702.
The entering of new annotation text 708 has been shown as an example of the authoring of annotation content. The video player graphical user interface 702 can be adapted to receive other types of annotation content as well. For example, annotation content can include audio, and the video player graphical user interface 702 can include a button for starting recording of audio through a microphone, or for selecting an audio file from a location on a storage medium. Other types of annotations and similar methods for receiving their submission by a user will be apparent to one of skill in the art without departing from the scope of the invention.
FIG. 8 illustrates a method for determining which annotations to display. In one embodiment, the client 104 displays only some of the received annotations. The client 104 performs a method such as the one illustrated in FIG. 8 to determine which annotations should be displayed and which should not.
The client 104 receives 802 an annotation. The client determines 804 if the annotation is high-priority. A high-priority annotation is displayed regardless of user settings for the display of annotations. High-priority annotations can include, for example, advertisements, emergency broadcast messages, or other communications whose importance should supersede local user settings.
If the client 104 determines 804 that the annotation is high-priority, the client displays 812 the annotation. If the client 104 determines 804 that the annotation is not high-priority, the client determines 806 if annotations are enabled. Annotations can be enabled or disabled, for example, by a user selection of an annotation display mode. If the user has selected to disable annotations, the client 104 does not display 810 the annotation. If the user has selected to enable annotations, the client 104 determines 808 if the annotation matches user-defined criteria.
As described herein, the client 104 allows the user to select annotations for display based on various criteria. In one embodiment, the user-defined criteria can be described in the request for annotations, limiting the annotations sent by the annotation server 110. In another embodiment, the user-defined criteria can be used to limit which annotations to display once annotations have been received at the client 104. User-defined criteria can specify which annotations to display, for example, on the basis of language, annotation content, particular authors or groups of authors, or other annotation properties.
If the client 104 determines 808 that the annotation satisfies the user-defined criteria, the client 104 displays 812 the annotation. If the client 104 determines 808 that the annotation does not satisfy the user-defined criteria, the client 104 does not display 810 the annotation.
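The decision steps of FIG. 8 transcribe directly into short-circuit logic, sketched below. The predicate names are hypothetical stand-ins for the properties the text describes; the numbered comments refer to the steps above.

```python
def should_display(annotation, is_high_priority, annotations_enabled,
                   matches_user_criteria) -> bool:
    if is_high_priority(annotation):   # 804: ads, emergency broadcasts, etc.
        return True                    # 812: shown regardless of user settings
    if not annotations_enabled:        # 806: annotations disabled by the user
        return False                   # 810: do not display
    return matches_user_criteria(annotation)   # 808: language, author, keywords
```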
FIG. 8 illustrates one example of how the client 104 may determine which annotations to display. Other methods for arbitrating annotation priorities established by the annotation provider and the annotation consumer will be apparent to one of skill in the art without departing from the scope of the present invention.
Turning now to the canonical instance of video disclosed herein, the canonical instance of video can be implemented in a variety of ways according to various embodiments. In some cases, the annotation server 110 has selected a canonical instance of the video prior to the submission of the new annotation. The client 104 can send a feature to facilitate the indexing of the new annotation to the canonical instance of the video. In other cases, for example, when the annotation is the first to be associated with a particular video, the annotation server 110 may not yet have identified a canonical instance of the video. The annotation server 110 stores the annotation indexed to the client instance of the video, and establishes the client instance of the video as the canonical instance of the video for future annotation transactions.
According to one embodiment of the present invention, annotations are stored indexed to features of the instance of video used by the client that submitted that annotation. Annotations can be stored and retrieved without any underlying canonical instance of video. For example, each annotation can be indexed to its own "canonical instance of video", which refers to the instance of video of the submitter. Such an approach is particularly beneficial for situations in which the annotation server 110 does not maintain or have access to copies of the video itself. Essentially, the annotation server 110 can serve as a blind broker of annotations, passing annotations from authors to consumers without its own copy of the video with which those annotations are associated.
A content-blind annotation server can be beneficial, for example, when the video content is copyrighted, private, or otherwise confidential. For example, a proud mother may want to annotate a film of her son's first bath, but might be reticent to submit even a reference instance of the video to a central annotation server. The content-blind annotation server stores annotations indexed to the mother's instance of the video, without access to an instance of its own. When an aunt, uncle, or other trusted user with an instance of the video requests annotations, his instance is mapped to the mother's instance by comparison of features of his instance to features of the mother's instance received with the submission of the annotation. Features can be determined in such a way that they cannot be easily reversed to find the content of a frame, thus preserving the privacy of the video.
The case of an annotation server and a client is but one example in which the present invention may be usefully employed for the sharing and distribution of annotations for video. It will be apparent to one of skill in the art that the methods described herein for transmitting annotations without the need to transmit associated video will have a variety of other uses without departing from the scope of the present invention. For example, the features described herein could be used in an online community in which users can author, edit, review, publish, and view annotations collaboratively, without the burdens of transferring or hosting video directly. Such a community would allow for open-source style production of annotations without infringing the copyright protections of the video with which those annotations are associated.
As an added feature, a user in such a community could also accumulate a reputation, for example based on other users' review of the quality of that user's previous authoring or editing. A user who wants to view annotations could have the option of ignoring annotations from users with reputations below a certain threshold, or of searching for annotations by users with reputations of an exceedingly high caliber. As another example, a user could select to view annotations only from a specific user, or from a specific group of users.
As described herein, annotations can also include commands describing how video should be displayed, for example, commands that instruct a display device to skip forward in that video, or to jump to another video entirely. A user could author a string of jump-to command annotations, effectively providing a suggestion for the combination of video segments into a larger piece. As an example, command annotations can be used to create a new movie from component parts of one or more other movies. The annotation server provides the annotations to the client, which acquires the various segments specified by the annotations and assembles the pieces for display to the user.
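A string of jump-to command annotations amounts to an edit list that the client resolves against its video sources, as in the sketch below. The command fields and the fetch/render callables are assumptions for illustration; the patent only says that such commands exist.

```python
# Hypothetical jump-to commands: each names a video and a segment to play next.
commands = [
    {"video_id": "speech-2006", "start_frame": 0,   "end_frame": 450},
    {"video_id": "archive-041", "start_frame": 900, "end_frame": 1200},
]

def play_sequence(commands, fetch_segment, render):
    """Assemble a larger piece from segments of one or more other videos."""
    for cmd in commands:
        frames = fetch_segment(cmd["video_id"],
                               cmd["start_frame"], cmd["end_frame"])
        render(frames)               # display each acquired segment in order
```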
The present invention has applicability to any of a variety of hosting models, including but not limited to peer-to-peer, distributed hosting, wiki-style hosting, centralized serving, or other known methods for sharing data over a network.
The annotation framework described herein presents the opportunity for a plurality of revenue models. As an example, the owner of the annotation server can charge a fee for including advertisements in annotations. The annotation server can target advertisement annotations to the user based on a variety of factors. For example, the annotation server could select advertisements for transmission to the client based on the title or category of the video that the client is displaying, known facts about the user, recent annotation search requests (such as keyword searches), other annotations previously submitted for the video, the geographic location of the client, or other criteria useful for effectively targeting advertising.
Access to annotations could be provided on a subscription basis, or annotations could be sold in a package with the video content itself. For example, a user who purchases a video from an online video store might be given permission for viewing, editing, or authoring annotations, either associated with that video or with other videos. An online video store might have a promotion, for example, in which the purchase of a certain number of videos in a month gives the user privileges on an annotation server for that month.
Alternatively, the purchase of a video from an online video store might be coupled to privileges to author, edit, or view annotations associated with that video. If a particular annotation server becomes particularly popular with users, controlled access to the annotation server could assist with the protection of the copyrights of the video. For example, a user might have to prove that he has a certified legitimately acquired copy of a video before being allowed to view, edit, or author annotations. Such a requirement could reduce the usefulness or desirability of illegally acquired copies of video.
These examples of revenue models have been given for the purposes of illustration and are not limiting. Other applications and potentially profitable uses will be apparent to one of skill in the art without departing from the scope of the present invention.
Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
It should be noted that the process steps and instructions of the present invention can be embodied in software, firmware or hardware, and when embodied in software, can be downloaded to reside on and be operated from different platforms used by a variety of operating systems.
The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references below to specific languages are provided for disclosure of enablement and best mode of the present invention.
While the invention has been particularly shown and described with reference to a preferred embodiment and several alternate embodiments, it will be understood by persons skilled in the relevant art that various changes in form and details can be made therein without departing from the spirit and scope of the invention.
Finally, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.

Claims (18)

What is claimed is:
1. A method for providing annotations to client devices, the method comprising:
causing a video to be presented on a display associated with a client device;
causing a first annotation to be presented, wherein the first annotation corresponds to a first frame of the video being presented on the display;
in response to receiving a request to modify annotations presented in connection with the video, identifying a second annotation; and
causing the second annotation to be presented in connection with presentation of a second frame of the video on the display.
2. The method of claim 1, wherein the first annotation is text in a first language, and wherein the second annotation is text in a second language.
3. The method of claim 1, wherein the second annotation is identified in response to receiving a selection of an annotation from a list of annotations that is presented on the display.
4. The method of claim 1, further comprising:
receiving an indication of a change in resolution of presentation of the video on the display from a first resolution to a second resolution; and
identifying a portion of the second frame in which the second annotation is to be presented based on the second resolution.
5. The method of claim 1, wherein the second frame of the video is identified based on a frame rate of the video.
6. The method of claim 1, wherein the first annotation and the second annotation are identified based on an annotation definition associated with the video.
7. A system for providing annotations to client devices, the system comprising:
a hardware processor that is configured to:
cause a video to be presented on a display associated with a client device;
cause a first annotation to be presented, wherein the first annotation corresponds to a first frame of the video being presented on the display;
in response to receiving a request to modify annotations presented in connection with the video, identify a second annotation; and
cause the second annotation to be presented in connection with presentation of a second frame of the video on the display.
8. The system of claim 7, wherein the first annotation is text in a first language, and wherein the second annotation is text in a second language.
9. The system of claim 7, wherein the second annotation is identified in response to receiving a selection of an annotation from a list of annotations that is presented on the display.
10. The system of claim 7, wherein the hardware processor is configured to:
receive an indication of a change in resolution of presentation of the video on the display from a first resolution to a second resolution; and
identify a portion of the second frame in which the second annotation is to be presented based on the second resolution.
11. The system of claim 7, wherein the second frame of the video is identified based on a frame rate of the video.
12. The system of claim 7, wherein the first annotation and the second annotation are identified based on an annotation definition associated with the video.
13. A non-transitory computer-readable medium containing computer executable instructions that, when executed by a processor, cause the processor to perform a method for providing annotations to client devices, the method comprising:
causing a video to be presented on a display associated with a client device;
causing a first annotation to be presented, wherein the first annotation corresponds to a first frame of the video being presented on the display;
in response to receiving a request to modify annotations presented in connection with the video, identifying a second annotation; and
causing the second annotation to be presented in connection with presentation of a second frame of the video on the display.
14. The non-transitory computer-readable medium of claim 13, wherein the first annotation is text in a first language, and wherein the second annotation is text in a second language.
15. The non-transitory computer-readable medium of claim 13, wherein the second annotation is identified in response to receiving a selection of an annotation from a list of annotations that is presented on the display.
16. The non-transitory computer-readable medium of claim 13, the method comprising:
receiving an indication of a change in resolution of presentation of the video on the display from a first resolution to a second resolution; and
identifying a portion of the second frame in which the second annotation is to be presented based on the second resolution.
17. The non-transitory computer-readable medium of claim 13, wherein the second frame of the video is identified based on a frame rate of the video.
18. The non-transitory computer-readable medium of claim 13, wherein the first annotation and the second annotation are identified based on an annotation definition associated with the video.
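For readers who prefer code to claim language, the sketch below restates the flow of claims 1 and 4 in executable form; every name, data shape, and the bottom-strip placement policy are assumptions made for illustration, not the claimed implementation.

```python
# Illustrative restatement of claims 1 and 4; all names and the
# placement policy are assumptions.
class AnnotationPresenter:
    def __init__(self, annotations_by_frame, resolution=(1280, 720)):
        self.annotations = dict(annotations_by_frame)  # frame index -> text
        self.resolution = resolution

    def annotation_for(self, frame):
        """Claim 1: the annotation to present with a given frame."""
        return self.annotations.get(frame)

    def modify(self, frame, second_annotation):
        """Claim 1: on a request to modify annotations, identify a
        second annotation to present with a second frame."""
        self.annotations[frame] = second_annotation

    def region_after_resolution_change(self, second_resolution):
        """Claim 4: identify the portion of the frame in which the
        annotation is presented, based on the second resolution."""
        self.resolution = second_resolution
        width, height = second_resolution
        margin = height // 20
        # (x, y, w, h) of a bottom strip scaled to the new resolution
        return (margin, height - 3 * margin, width - 2 * margin, 2 * margin)
```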
US17/892,480 | 2006-12-22 (priority) | 2022-08-22 (filed) | Annotation framework for video | Active | US11727201B2 (en)

Priority Applications (1)

Application Number | Publication | Priority Date | Filing Date | Title
US17/892,480 | US11727201B2 (en) | 2006-12-22 | 2022-08-22 | Annotation framework for video

Applications Claiming Priority (8)

Application Number | Publication | Priority Date | Filing Date | Title
US11/615,771 | US7559017B2 (en) | 2006-12-22 | 2006-12-22 | Annotation framework for video
US12/477,762 | US8151182B2 (en) | 2006-12-22 | 2009-06-03 | Annotation framework for video
US13/414,675 | US8775922B2 (en) | 2006-12-22 | 2012-03-07 | Annotation framework for video
US14/145,641 | US9805012B2 (en) | 2006-12-22 | 2013-12-31 | Annotation framework for video
US15/795,635 | US10261986B2 (en) | 2006-12-22 | 2017-10-27 | Annotation framework for video
US16/384,289 | US10853562B2 (en) | 2006-12-22 | 2019-04-15 | Annotation framework for video
US17/107,018 | US11423213B2 (en) | 2006-12-22 | 2020-11-30 | Annotation framework for video
US17/892,480 | US11727201B2 (en) | 2006-12-22 | 2022-08-22 | Annotation framework for video

Related Parent Applications (1)

Application Number | Relation | Publication | Priority Date | Filing Date | Title
US17/107,018 | Continuation | US11423213B2 (en) | 2006-12-22 | 2020-11-30 | Annotation framework for video

Publications (2)

Publication Number | Publication Date
US20220398375A1 (en) | 2022-12-15
US11727201B2 (en) | 2023-08-15

Family

ID=39544384

Family Applications (8)

Application Number | Status | Publication | Priority Date | Filing Date | Title
US11/615,771 | Active (2027-05-10) | US7559017B2 (en) | 2006-12-22 | 2006-12-22 | Annotation framework for video
US12/477,762 | Active (2027-09-04) | US8151182B2 (en) | 2006-12-22 | 2009-06-03 | Annotation framework for video
US13/414,675 | Active (2027-10-18) | US8775922B2 (en) | 2006-12-22 | 2012-03-07 | Annotation framework for video
US14/145,641 | Active (2027-11-17) | US9805012B2 (en) | 2006-12-22 | 2013-12-31 | Annotation framework for video
US15/795,635 | Active | US10261986B2 (en) | 2006-12-22 | 2017-10-27 | Annotation framework for video
US16/384,289 | Active | US10853562B2 (en) | 2006-12-22 | 2019-04-15 | Annotation framework for video
US17/107,018 | Active | US11423213B2 (en) | 2006-12-22 | 2020-11-30 | Annotation framework for video
US17/892,480 | Active | US11727201B2 (en) | 2006-12-22 | 2022-08-22 | Annotation framework for video

Family Applications Before (7)

Application Number | Status | Publication | Priority Date | Filing Date | Title
US11/615,771 | Active (2027-05-10) | US7559017B2 (en) | 2006-12-22 | 2006-12-22 | Annotation framework for video
US12/477,762 | Active (2027-09-04) | US8151182B2 (en) | 2006-12-22 | 2009-06-03 | Annotation framework for video
US13/414,675 | Active (2027-10-18) | US8775922B2 (en) | 2006-12-22 | 2012-03-07 | Annotation framework for video
US14/145,641 | Active (2027-11-17) | US9805012B2 (en) | 2006-12-22 | 2013-12-31 | Annotation framework for video
US15/795,635 | Active | US10261986B2 (en) | 2006-12-22 | 2017-10-27 | Annotation framework for video
US16/384,289 | Active | US10853562B2 (en) | 2006-12-22 | 2019-04-15 | Annotation framework for video
US17/107,018 | Active | US11423213B2 (en) | 2006-12-22 | 2020-11-30 | Annotation framework for video

Country Status (9)

Country | Link
US (8) | US7559017B2 (en)
EP (2) | EP3324315A1 (en)
JP (1) | JP4774461B2 (en)
KR (1) | KR100963179B1 (en)
CN (1) | CN101589383B (en)
AU (2) | AU2007336947B2 (en)
BR (1) | BRPI0720366B1 (en)
CA (2) | CA2866548C (en)
WO (1) | WO2008079850A2 (en)

Citations (176)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US5339393A (en)1993-04-151994-08-16Sony Electronics, Inc.Graphical user interface for displaying available source material for editing
US5388197A (en)1991-08-021995-02-07The Grass Valley Group, Inc.Video editing system operator inter-face for visualization and interactive control of video material
US5414806A (en)1992-08-291995-05-09International Business Machines CorporationPalette and parts view of a composite object in an object oriented computer system
US5465353A (en)1994-04-011995-11-07Ricoh Company, Ltd.Image matching and retrieval by multi-access redundant hashing
US5530861A (en)1991-08-261996-06-25Hewlett-Packard CompanyProcess enaction and tool integration via a task oriented paradigm
US5600775A (en)1994-08-261997-02-04Emotion, Inc.Method and apparatus for annotating full motion video and other indexed data structures
US5664216A (en)1994-03-221997-09-02Blumenau; TrevorIconic audiovisual data editing environment
US5708845A (en)1995-09-291998-01-13Wistendahl; Douglass A.System for mapping hot spots in media content for interactive digital media program
US5732184A (en)1995-10-201998-03-24Digital Processing Systems, Inc.Video and audio cursor video editing system
US5812642A (en)1995-07-121998-09-22Leroy; David J.Audience response monitor and analysis system and method
US5996121A (en)1993-07-281999-12-07Harris; EuniceConvertible coat
US6006241A (en)1997-03-141999-12-21Microsoft CorporationProduction of a video stream with synchronized annotations over a computer network
US6144375A (en)1998-08-142000-11-07Praja Inc.Multi-perspective viewer for content-based interactivity
WO2001069438A2 (en)2000-03-142001-09-20Starlab Nv/SaMethods and apparatus for encoding multimedia annotations using time-synchronized description streams
US20010023436A1 (en)1998-09-162001-09-20Anand SrinivasanMethod and apparatus for multiplexing seperately-authored metadata for insertion into a video data stream
US6295092B1 (en)1998-07-302001-09-25Cbs CorporationSystem for analyzing television programs
US6311189B1 (en)*1998-03-112001-10-30Altavista CompanyTechnique for matching a query to a portion of media
CN1332556A (en)2001-04-272002-01-23清华大学Channel transmission method for ground digital multimeldia television broadcast system
US20020049983A1 (en)2000-02-292002-04-25Bove V. MichaelMethod and apparatus for switching between multiple programs by interacting with a hyperlinked television broadcast
US20020054138A1 (en)1999-12-172002-05-09Erik HennumWeb-based instruction
US20020059342A1 (en)1997-10-232002-05-16Anoop GuptaAnnotating temporally-dimensioned multimedia content
US20020059218A1 (en)1999-01-262002-05-16Katherine Grace AugustSystem and method for obtaining real time survey information for media programming using input device
US20020059584A1 (en)2000-09-142002-05-16Ferman Ahmet MufitAudiovisual management system
US20020065678A1 (en)2000-08-252002-05-30Steven PeliotisiSelect video
US20020069218A1 (en)2000-07-242002-06-06Sanghoon SullSystem and method for indexing, searching, identifying, and editing portions of electronic multimedia files
US20020078092A1 (en)2000-07-282002-06-20Minhoe KimSystem and method for reformatting contents in wireless internet site
US20020078446A1 (en)2000-08-302002-06-20Jon DakssMethod and apparatus for hyperlinking in a television broadcast
US6415438B1 (en)1999-10-052002-07-02Webtv Networks, Inc.Trigger having a time attribute
US20020108112A1 (en)2001-02-022002-08-08Ensequence, Inc.System and method for thematically analyzing and annotating an audio-visual sequence
US20020120925A1 (en)2000-03-282002-08-29Logan James D.Audio and video program recording, editing and playback systems using metadata
US20020152082A1 (en)*2000-04-052002-10-17Harradine Vincent CarlAudio/video reproducing apparatus and method
US20020188630A1 (en)2001-05-212002-12-12Autodesk, Inc.Method and apparatus for annotating a sequence of frames
US20030002851A1 (en)2001-06-282003-01-02Kenny HsiaoVideo editing method and device for editing a video project
US20030018668A1 (en)2001-07-202003-01-23International Business Machines CorporationEnhanced transcoding of structured documents through use of annotation techniques
US20030020743A1 (en)2000-09-082003-01-30Mauro BarbieriApparatus for reproducing an information signal stored on a storage medium
US20030039469A1 (en)1999-05-192003-02-27Kwang Su KimMethod for creating caption-based search information of moving picture data, searching moving picture data based on such information, and reproduction apparatus using said method
WO2003019418A1 (en)2001-08-312003-03-06Kent Ridge Digital LabsAn iterative collaborative annotation system
US20030068046A1 (en)2001-10-102003-04-10Markus LindqvistDatacast distribution system
US20030093790A1 (en)2000-03-282003-05-15Logan James D.Audio and video program recording, editing and playback systems using metadata
US20030095720A1 (en)2001-11-162003-05-22Patrick ChiuVideo production and compaction with collage picture frame user interface
US6570587B1 (en)1996-07-262003-05-27Veon Ltd.System and method and linking information to a video
US20030107592A1 (en)2001-12-112003-06-12Koninklijke Philips Electronics N.V.System and method for retrieving information related to persons in video programs
US20030112276A1 (en)2001-12-192003-06-19Clement LauUser augmentation of content
US20030177503A1 (en)2000-07-242003-09-18Sanghoon SullMethod and apparatus for fast metadata generation, delivery and access for live broadcast program
JP2003283981A (en)2002-03-202003-10-03Nippon Telegr & Teleph Corp <Ntt> Video comment input / display method and system, client device, video comment input / display program, and recording medium therefor
US20030196164A1 (en)1998-09-152003-10-16Anoop GuptaAnnotations for multiple versions of media content
US20030196964A1 (en)2002-01-312003-10-23Koslow Evan E.Microporous filter media, filteration systems containing same, and methods of making and using
US20030231198A1 (en)2002-06-182003-12-18Koninklijke Philips Electronics N.V.System and method for providing videomarks for a video program
US20040021685A1 (en)2002-07-302004-02-05Fuji Xerox Co., Ltd.Systems and methods for filtering and/or viewing collaborative indexes of recorded media
US20040125148A1 (en)2002-12-302004-07-01The Board Of Trustees Of The Leland Stanford Junior UniversityMethods and apparatus for interactive point-of-view authoring of digital video content
US20040128308A1 (en)2002-12-312004-07-01Pere ObradorScalably presenting a collection of media objects
US20040125133A1 (en)2002-12-302004-07-01The Board Of Trustees Of The Leland Stanford Junior UniversityMethods and apparatus for interactive network sharing of digital video content
JP2004193979A (en)2002-12-112004-07-08Canon Inc Video distribution system
US20040138946A1 (en)2001-05-042004-07-15Markus StolzeWeb page annotation systems
US20040152054A1 (en)2003-01-302004-08-05Gleissner Michael J.G.System for learning language through embedded content on a single medium
US6774908B2 (en)2000-10-032004-08-10Creative Frontier Inc.System and method for tracking an object in a video and linking information thereto
US20040168118A1 (en)2003-02-242004-08-26Wong Curtis G.Interactive media frame display
US20040172593A1 (en)2003-01-212004-09-02Curtis G. WongRapid media group annotation
US20040181545A1 (en)2003-03-102004-09-16Yining DengGenerating and rendering annotated video files
US20040205547A1 (en)2003-04-122004-10-14Feldt Kenneth CharlesAnnotation process for message enabled digital content
US20040205482A1 (en)2002-01-242004-10-14International Business Machines CorporationMethod and apparatus for active annotation of multimedia content
US20040210602A1 (en)2002-12-132004-10-21Hillis W. DanielMeta-Web
US20040237032A1 (en)2001-09-272004-11-25David MieleMethod and system for annotating audio/video data files
US20050044254A1 (en)2001-06-112005-02-24C-Burn Systems LtdAutomated system for remote product or service selection
US20050081159A1 (en)1998-09-152005-04-14Microsoft CorporationUser interface for creating viewing and temporally positioning annotations for media content
US20050132401A1 (en)2003-12-102005-06-16Gilles Boccon-GibodMethod and apparatus for exchanging preferences for replaying a program on a personal video recorder
US20050203892A1 (en)2004-03-022005-09-15Jonathan WesleyDynamically integrating disparate systems and providing secure data sharing
US20050203876A1 (en)2003-06-202005-09-15International Business Machines CorporationHeterogeneous multi-level extendable indexing for general purpose annotation systems
US20050207622A1 (en)2004-03-162005-09-22Haupt Gordon TInteractive system for recognition analysis of multiple streams of video
US20050216457A1 (en)2004-03-152005-09-29Yahoo! Inc.Systems and methods for collecting user annotations
US20050229227A1 (en)2004-04-132005-10-13Evenhere, Inc.Aggregation of retailers for televised media programming product placement
US6956573B1 (en)1996-11-152005-10-18Sarnoff CorporationMethod and apparatus for efficiently representing storing and accessing video information
US6965646B1 (en)2000-06-282005-11-15Cisco Technology, Inc.MPEG file format optimization for streaming
US20050275716A1 (en)2004-06-142005-12-15Fuji Xerox Co., Ltd.Display apparatus, system and display method
US20050289142A1 (en)2004-06-282005-12-29Adams Hugh W JrSystem and method for previewing relevance of streaming data
US20050289469A1 (en)2004-06-282005-12-29Chandler Roger DContext tagging apparatus, systems, and methods
US20050286865A1 (en)2004-06-282005-12-29International Business Machines CorporationFramework for extracting multiple-resolution semantics in composite media content analysis
US6993347B2 (en)2002-12-172006-01-31International Business Machines CorporationDynamic media interleaving
US20060041564A1 (en)2004-08-202006-02-23Innovative Decision Technologies, Inc.Graphical Annotations and Domain Objects to Create Feature Level Metadata of Images
US20060053365A1 (en)2004-09-082006-03-09Josef HollanderMethod for creating custom annotated books
US20060059120A1 (en)2004-08-272006-03-16Ziyou XiongIdentifying video highlights using audio-visual objects
US20060064733A1 (en)2004-09-202006-03-23Norton Jeffrey RPlaying an audiovisual work with dynamic choosing
US7032178B1 (en)2001-03-302006-04-18Gateway Inc.Tagging content for different activities
US20060087987A1 (en)2004-10-052006-04-27Daniel WittInteractive video collaboration framework
US20060101328A1 (en)2004-11-082006-05-11International Business Machines CorporationMulti-user, multi-timed collaborative annotation
US7055168B1 (en)2000-05-032006-05-30Sharp Laboratories Of America, Inc.Method for interpreting and executing user preferences of audiovisual information
JP2006155384A (en)2004-11-302006-06-15Nippon Telegr & Teleph Corp <Ntt> Video comment input / display method, apparatus, program, and storage medium storing program
JP2006157692A (en)2004-11-302006-06-15Nippon Telegr & Teleph Corp <Ntt> Video reproduction method, apparatus and program
JP2006157689A (en)2004-11-302006-06-15Nippon Telegr & Teleph Corp <Ntt> Inter-viewer communication method, apparatus and program
US20060136813A1 (en)2004-12-162006-06-22Palo Alto Research Center IncorporatedSystems and methods for annotating pages of a 3D electronic document
US7080139B1 (en)2001-04-242006-07-18Fatbubble, IncMethod and apparatus for selectively sharing and passively tracking communication device experiences
US20060161578A1 (en)2005-01-192006-07-20Siegel Hilliard BMethod and system for providing annotations of a digital work
US20060161838A1 (en)2005-01-142006-07-20Ronald NydamReview of signature based content
US7111009B1 (en)1997-03-142006-09-19Microsoft CorporationInteractive playlist generation using annotations
US20060218590A1 (en)2005-03-102006-09-28Sbc Knowledge Ventures, L.P.System and method for displaying an electronic program guide
US7137062B2 (en)2001-12-282006-11-14International Business Machines CorporationSystem and method for hierarchical segmentation with latent semantic indexing in scale space
US7149755B2 (en)2002-07-292006-12-12Hewlett-Packard Development Company, Lp.Presenting a collection of media objects
US20060286536A1 (en)2005-04-012006-12-21Sherman MohlerSystem and method for regulating use of content and content styles in a distributed learning system
US20060294134A1 (en)2005-06-282006-12-28Yahoo! Inc.Trust propagation through both explicit and implicit social networks
US20070002946A1 (en)2005-07-012007-01-04Sonic SolutionsMethod, apparatus and system for use in multimedia signal encoding
KR20070004153A (en)2005-07-042007-01-09주식회사 다음커뮤니케이션 User preferred content providing system and method, personal preferred content analysis system and method, group preferred content analysis system and method
US20070011651A1 (en)2005-07-072007-01-11Bea Systems, Inc.Customized annotation editing
US20070038610A1 (en)2001-06-222007-02-15Nosa OmoiguiSystem and method for knowledge retrieval, management, delivery and presentation
US20070067707A1 (en)2005-09-162007-03-22Microsoft CorporationSynchronous digital annotations of media data stream
US20070094590A1 (en)2005-10-202007-04-26International Business Machines CorporationSystem and method for providing dynamic process step annotations
US20070101387A1 (en)2005-10-312007-05-03Microsoft CorporationMedia Sharing And Authoring On The Web
US20070099684A1 (en)2005-11-032007-05-03Evans ButterworthSystem and method for implementing an interactive storyline
US20070121144A1 (en)2001-12-032007-05-31Canon Kabushiki KaishaInformation processing apparatus and information processing method
JP2007142750A (en)2005-11-172007-06-07National Agency For The Advancement Of Sports & Health Video browsing system, computer terminal and program
JP2007151057A (en)2005-10-252007-06-14Dainippon Printing Co Ltd Video content browsing system using evaluation impression information
US7243301B2 (en)2002-04-102007-07-10Microsoft CorporationCommon annotation framework
US20070162568A1 (en)2006-01-062007-07-12Manish GuptaDynamic media serving infrastructure
WO2007082169A2 (en)2006-01-052007-07-19Eyespot CorporationAutomatic aggregation of content for use in an online video editing system
US20070174774A1 (en)2005-04-202007-07-26Videoegg, Inc.Browser editing with timeline representations
US7254605B1 (en)2000-10-262007-08-07Austen Services LlcMethod of modulating the transmission frequency in a real time opinion research network
JP2007274090A (en)2006-03-302007-10-18Toshiba Corp Content playback apparatus, method and program
JP2007529822A (en)2004-03-152007-10-25ヤフー! インコーポレイテッド Search system and method integrating user annotations from a trust network
US20070250901A1 (en)2006-03-302007-10-25Mcintire John PMethod and apparatus for annotating media streams
US20070256016A1 (en)2006-04-262007-11-01Bedingfield James C SrMethods, systems, and computer program products for managing video information
US20070266304A1 (en)2006-05-152007-11-15Microsoft CorporationAnnotating media files
US20070271331A1 (en)2006-05-172007-11-22Steve MuthSystem of archiving and repurposing a complex group conversation referencing networked media
WO2007135688A2 (en)2006-05-222007-11-29P.S.G GroupA method for interactive commenting on media files
JP2007310833A (en)2006-05-222007-11-29Nippon Telegr & Teleph Corp <Ntt> Server apparatus and client apparatus and program thereof
JP2007317123A (en)2006-05-292007-12-06Daisuke YamamotoServer for managing dynamic images
US20080005064A1 (en)2005-06-282008-01-03Yahoo! Inc.Apparatus and method for content annotation and conditional annotation retrieval in a search context
US20080028294A1 (en)2006-07-282008-01-31Blue Lava TechnologiesMethod and system for managing and maintaining multimedia content
US20080028323A1 (en)2006-07-272008-01-31Joshua RosenMethod for Initiating and Launching Collaboration Sessions
US7343617B1 (en)2000-02-292008-03-11Goldpocket Interactive, Inc.Method and apparatus for interaction with hyperlinks in a television broadcast
US20080086742A1 (en)2006-10-092008-04-10Verizon Services Corp.Systems And Methods For Real-Time Interactive Television Polling
US20080091723A1 (en)2006-10-112008-04-17Mark ZuckerbergSystem and method for tagging digital media
US20080109841A1 (en)2006-10-232008-05-08Ashley HeatherProduct information display and product linking
US20080109851A1 (en)2006-10-232008-05-08Ashley HeatherMethod and system for providing interactive video
US7383497B2 (en)2003-01-212008-06-03Microsoft CorporationRandom access editing of media
US20080168055A1 (en)2007-01-042008-07-10Wide Angle LlcRelevancy rating of tags
US20080168070A1 (en)2007-01-082008-07-10Naphade Milind RMethod and apparatus for classifying multimedia artifacts using ontology selection and semantic classification
US20080168073A1 (en)2005-01-192008-07-10Siegel Hilliard BProviding Annotations of a Digital Work
US20080195657A1 (en)2007-02-082008-08-14Yahoo! Inc.Context-based community-driven suggestions for media annotation
US7418656B1 (en)2003-10-032008-08-26Adobe Systems IncorporatedDynamic annotations for electronics documents
US20080222511A1 (en)2005-09-122008-09-11International Business Machines CorporationMethod and Apparatus for Annotating a Document
US20080250331A1 (en)2007-04-042008-10-09Atul TulshibagwaleMethod and System of a Voting Based Wiki and Its Application to Internet Topic Directories
US20090007200A1 (en)2007-06-292009-01-01At&T Knowledge Ventures, LpSystem and method of providing video content commentary
US20090076843A1 (en)2007-06-072009-03-19Graff David SInteractive team portal system
US20090087161A1 (en)2007-09-282009-04-02Graceenote, Inc.Synthesizing a presentation of a multimedia event
US20090094520A1 (en)2007-10-072009-04-09Kulas Charles JUser Interface for Creating Tags Synchronized with a Video Playback
US20090110296A1 (en)*1999-01-292009-04-30Shunichi SekiguchiMethod of image feature coding and method of image search
US20090150947A1 (en)2007-10-052009-06-11Soderstrom Robert WOnline search, storage, manipulation, and delivery of video content
US20090172745A1 (en)2007-12-282009-07-02Motorola, Inc.Method and Apparatus Regarding Receipt of Audio-Visual Content Information and Use of Such Information to Automatically Infer a Relative Popularity of That Content
US7558017B2 (en)2006-04-242009-07-07Hitachi Global Storage Technologies Netherlands B.V.Magnetic disk drive and a loading/unloading method
US7559017B2 (en)2006-12-222009-07-07Google Inc.Annotation framework for video
US20090199251A1 (en)2008-02-062009-08-06Mihai BadoiuSystem and Method for Voting on Popular Video Intervals
US20090210779A1 (en)2008-02-192009-08-20Mihai BadoiuAnnotating Video Intervals
US20090276805A1 (en)2008-05-032009-11-05Andrews Ii James KMethod and system for generation and playback of supplemented videos
US7616946B2 (en)2005-07-212009-11-10Lg Electronics Inc.Mobile terminal having bookmark functionn of contents service and operation method thereof
US20090300475A1 (en) | 2008-06-03 | 2009-12-03 | Google Inc. | Web-based system for collaborative generation of interactive videos
US7636883B2 (en) | 2005-05-18 | 2009-12-22 | International Business Machines Corporation | User form based automated and guided data collection
US7644364B2 (en) | 2005-10-14 | 2010-01-05 | Microsoft Corporation | Photo and video collage effects
US20100169927A1 (en) | 2006-08-10 | 2010-07-01 | Masaru Yamaoka | Program recommendation system, program view terminal, program view program, program view method, program recommendation server, program recommendation program, and program recommendation method
US7761436B2 (en) | 2006-01-03 | 2010-07-20 | Yahoo! Inc. | Apparatus and method for controlling content access based on shared annotations for annotated users in a folksonomy scheme
US7778516B2 (en)* | 2000-04-05 | 2010-08-17 | Sony United Kingdom Limited | Identifying, recording and reproducing information
US20100278453A1 (en)* | 2006-09-15 | 2010-11-04 | King Martin T | Capture and display of annotations in paper and electronic documents
US20100287236A1 (en) | 2009-04-16 | 2010-11-11 | Brian Amento | Collective asynchronous media review
US7992215B2 (en)* | 2002-12-11 | 2011-08-02 | Trio Systems, Llc | Annotation system for creating and retrieving media and methods relating to same
US8181201B2 (en) | 2005-08-30 | 2012-05-15 | Nds Limited | Enhanced electronic program guides
US8202167B2 (en) | 2003-06-02 | 2012-06-19 | Disney Enterprises, Inc. | System and method of interactive video playback
US8209223B2 (en) | 2007-11-30 | 2012-06-26 | Google Inc. | Video object tag creation and processing
US20120236143A1 (en) | 2005-11-04 | 2012-09-20 | Weatherhead James J | Characterizing dynamic regions of digital media data
US8280827B2 (en) | 2005-08-23 | 2012-10-02 | Syneola Luxembourg Sa | Multilevel semiotic and fuzzy logic user and metadata interface means for interactive multimedia system having cognitive adaptive capability
US8392834B2 (en) | 2003-04-09 | 2013-03-05 | Hewlett-Packard Development Company, L.P. | Systems and methods of authoring a multimedia file
US8443279B1 (en) | 2004-10-13 | 2013-05-14 | Stryker Corporation | Voice-responsive annotation of video generated by an endoscopic camera
US20130238995A1 (en) | 2012-03-12 | 2013-09-12 | sCoolTV, Inc | Apparatus and method for adding content using a media player
US20130263002A1 (en) | 2012-03-30 | 2013-10-03 | Lg Electronics Inc. | Mobile terminal
US20130290996A1 (en) | 2008-06-18 | 2013-10-31 | Ipowow! Ltd. | Assessing digital content across a communications network
US20140101691A1 (en) | 2011-06-10 | 2014-04-10 | Tata Consultancy Services Limited | Method and system for automatic tagging in television using crowd sourcing technique
US8713618B1 (en) | 2008-11-06 | 2014-04-29 | Google Inc. | Segmenting video based on timestamps in comments
US20140123168A1 (en) | 2002-05-10 | 2014-05-01 | Convergent Media Solutions Llc | Method and apparatus for browsing using alternative linkbases
US8839327B2 (en) | 2008-06-25 | 2014-09-16 | At&T Intellectual Property Ii, Lp | Method and apparatus for presenting media programs

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5966121A (en) | 1995-10-12 | 1999-10-12 | Andersen Consulting Llp | Interactive hypervideo editing system and interface
JP4025185B2 (en)* | 2002-12-10 | 2007-12-19 | 株式会社東芝 | Media data viewing apparatus and metadata sharing system
EP1438933B1 (en) | 2003-01-17 | 2005-04-06 | WALDEMAR LINK GmbH & Co. KG | Hip prosthesis with a stem to be implanted into the medullary canal of the femur
US7383487B2 (en) | 2004-01-10 | 2008-06-03 | Broadcom Corporation | IPHD (iterative parallel hybrid decoding) of various MLC (multi-level code) signals
KR20050115713A (en) | 2004-06-04 | 2005-12-08 | 현익근 | How to make ice cream and ice cream made from natural fruits and vegetables
US7528066B2 (en)* | 2006-03-01 | 2009-05-05 | International Business Machines Corporation | Structure and method for metal integration

Patent Citations (194)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5388197A (en) | 1991-08-02 | 1995-02-07 | The Grass Valley Group, Inc. | Video editing system operator inter-face for visualization and interactive control of video material
US5530861A (en) | 1991-08-26 | 1996-06-25 | Hewlett-Packard Company | Process enaction and tool integration via a task oriented paradigm
US5414806A (en) | 1992-08-29 | 1995-05-09 | International Business Machines Corporation | Palette and parts view of a composite object in an object oriented computer system
US5339393A (en) | 1993-04-15 | 1994-08-16 | Sony Electronics, Inc. | Graphical user interface for displaying available source material for editing
US5996121A (en) | 1993-07-28 | 1999-12-07 | Harris; Eunice | Convertible coat
US5664216A (en) | 1994-03-22 | 1997-09-02 | Blumenau; Trevor | Iconic audiovisual data editing environment
US5465353A (en) | 1994-04-01 | 1995-11-07 | Ricoh Company, Ltd. | Image matching and retrieval by multi-access redundant hashing
US5600775A (en) | 1994-08-26 | 1997-02-04 | Emotion, Inc. | Method and apparatus for annotating full motion video and other indexed data structures
US5812642A (en) | 1995-07-12 | 1998-09-22 | Leroy; David J. | Audience response monitor and analysis system and method
US5708845A (en) | 1995-09-29 | 1998-01-13 | Wistendahl; Douglass A. | System for mapping hot spots in media content for interactive digital media program
US5732184A (en) | 1995-10-20 | 1998-03-24 | Digital Processing Systems, Inc. | Video and audio cursor video editing system
US6570587B1 (en) | 1996-07-26 | 2003-05-27 | Veon Ltd. | System and method and linking information to a video
US6956573B1 (en) | 1996-11-15 | 2005-10-18 | Sarnoff Corporation | Method and apparatus for efficiently representing storing and accessing video information
US6006241A (en) | 1997-03-14 | 1999-12-21 | Microsoft Corporation | Production of a video stream with synchronized annotations over a computer network
US7111009B1 (en) | 1997-03-14 | 2006-09-19 | Microsoft Corporation | Interactive playlist generation using annotations
US20020059342A1 (en) | 1997-10-23 | 2002-05-16 | Anoop Gupta | Annotating temporally-dimensioned multimedia content
US6311189B1 (en)* | 1998-03-11 | 2001-10-30 | Altavista Company | Technique for matching a query to a portion of media
US6332144B1 (en) | 1998-03-11 | 2001-12-18 | Altavista Company | Technique for annotating media
US6295092B1 (en) | 1998-07-30 | 2001-09-25 | Cbs Corporation | System for analyzing television programs
US6144375A (en) | 1998-08-14 | 2000-11-07 | Praja Inc. | Multi-perspective viewer for content-based interactivity
US6956593B1 (en) | 1998-09-15 | 2005-10-18 | Microsoft Corporation | User interface for creating, viewing and temporally positioning annotations for media content
US20030196164A1 (en) | 1998-09-15 | 2003-10-16 | Anoop Gupta | Annotations for multiple versions of media content
US7506262B2 (en)* | 1998-09-15 | 2009-03-17 | Microsoft Corporation | User interface for creating viewing and temporally positioning annotations for media content
US20050081159A1 (en) | 1998-09-15 | 2005-04-14 | Microsoft Corporation | User interface for creating viewing and temporally positioning annotations for media content
US20010023436A1 (en) | 1998-09-16 | 2001-09-20 | Anand Srinivasan | Method and apparatus for multiplexing seperately-authored metadata for insertion into a video data stream
US20020059218A1 (en) | 1999-01-26 | 2002-05-16 | Katherine Grace August | System and method for obtaining real time survey information for media programming using input device
US20090110296A1 (en)* | 1999-01-29 | 2009-04-30 | Shunichi Sekiguchi | Method of image feature coding and method of image search
US20080092168A1 (en) | 1999-03-29 | 2008-04-17 | Logan James D | Audio and video program recording, editing and playback systems using metadata
US20030039469A1 (en) | 1999-05-19 | 2003-02-27 | Kwang Su Kim | Method for creating caption-based search information of moving picture data, searching moving picture data based on such information, and reproduction apparatus using said method
US6415438B1 (en) | 1999-10-05 | 2002-07-02 | Webtv Networks, Inc. | Trigger having a time attribute
US20020054138A1 (en) | 1999-12-17 | 2002-05-09 | Erik Hennum | Web-based instruction
US20020049983A1 (en) | 2000-02-29 | 2002-04-25 | Bove V. Michael | Method and apparatus for switching between multiple programs by interacting with a hyperlinked television broadcast
US7343617B1 (en) | 2000-02-29 | 2008-03-11 | Goldpocket Interactive, Inc. | Method and apparatus for interaction with hyperlinks in a television broadcast
WO2001069438A2 (en) | 2000-03-14 | 2001-09-20 | Starlab Nv/Sa | Methods and apparatus for encoding multimedia annotations using time-synchronized description streams
US20020120925A1 (en) | 2000-03-28 | 2002-08-29 | Logan James D. | Audio and video program recording, editing and playback systems using metadata
US20030093790A1 (en) | 2000-03-28 | 2003-05-15 | Logan James D. | Audio and video program recording, editing and playback systems using metadata
US20020152082A1 (en)* | 2000-04-05 | 2002-10-17 | Harradine Vincent Carl | Audio/video reproducing apparatus and method
US7778516B2 (en)* | 2000-04-05 | 2010-08-17 | Sony United Kingdom Limited | Identifying, recording and reproducing information
US7055168B1 (en) | 2000-05-03 | 2006-05-30 | Sharp Laboratories Of America, Inc. | Method for interpreting and executing user preferences of audiovisual information
US6965646B1 (en) | 2000-06-28 | 2005-11-15 | Cisco Technology, Inc. | MPEG file format optimization for streaming
US20020069218A1 (en) | 2000-07-24 | 2002-06-06 | Sanghoon Sull | System and method for indexing, searching, identifying, and editing portions of electronic multimedia files
US20030177503A1 (en) | 2000-07-24 | 2003-09-18 | Sanghoon Sull | Method and apparatus for fast metadata generation, delivery and access for live broadcast program
KR20040041082A (en) | 2000-07-24 | 2004-05-13 | 비브콤 인코포레이티드 | System and method for indexing, searching, identifying, and editing portions of electronic multimedia files
US20020078092A1 (en) | 2000-07-28 | 2002-06-20 | Minhoe Kim | System and method for reformatting contents in wireless internet site
US20020065678A1 (en) | 2000-08-25 | 2002-05-30 | Steven Peliotis | iSelect video
US20020078446A1 (en) | 2000-08-30 | 2002-06-20 | Jon Dakss | Method and apparatus for hyperlinking in a television broadcast
US20030020743A1 (en) | 2000-09-08 | 2003-01-30 | Mauro Barbieri | Apparatus for reproducing an information signal stored on a storage medium
US20020059584A1 (en) | 2000-09-14 | 2002-05-16 | Ferman Ahmet Mufit | Audiovisual management system
US6774908B2 (en) | 2000-10-03 | 2004-08-10 | Creative Frontier Inc. | System and method for tracking an object in a video and linking information thereto
US7254605B1 (en) | 2000-10-26 | 2007-08-07 | Austen Services Llc | Method of modulating the transmission frequency in a real time opinion research network
US20020108112A1 (en) | 2001-02-02 | 2002-08-08 | Ensequence, Inc. | System and method for thematically analyzing and annotating an audio-visual sequence
US7032178B1 (en) | 2001-03-30 | 2006-04-18 | Gateway Inc. | Tagging content for different activities
US7080139B1 (en) | 2001-04-24 | 2006-07-18 | Fatbubble, Inc | Method and apparatus for selectively sharing and passively tracking communication device experiences
CN1332556A (en) | 2001-04-27 | 2002-01-23 | 清华大学 | Channel transmission method for ground digital multimeldia television broadcast system
US20040138946A1 (en) | 2001-05-04 | 2004-07-15 | Markus Stolze | Web page annotation systems
US20020188630A1 (en) | 2001-05-21 | 2002-12-12 | Autodesk, Inc. | Method and apparatus for annotating a sequence of frames
US20050044254A1 (en) | 2001-06-11 | 2005-02-24 | C-Burn Systems Ltd | Automated system for remote product or service selection
US20070038610A1 (en) | 2001-06-22 | 2007-02-15 | Nosa Omoigui | System and method for knowledge retrieval, management, delivery and presentation
US20030002851A1 (en) | 2001-06-28 | 2003-01-02 | Kenny Hsiao | Video editing method and device for editing a video project
US20030018668A1 (en) | 2001-07-20 | 2003-01-23 | International Business Machines Corporation | Enhanced transcoding of structured documents through use of annotation techniques
WO2003019418A1 (en) | 2001-08-31 | 2003-03-06 | Kent Ridge Digital Labs | An iterative collaborative annotation system
US20050160113A1 (en) | 2001-08-31 | 2005-07-21 | Kent Ridge Digital Labs | Time-based media navigation system
US20040237032A1 (en) | 2001-09-27 | 2004-11-25 | David Miele | Method and system for annotating audio/video data files
US20030068046A1 (en) | 2001-10-10 | 2003-04-10 | Markus Lindqvist | Datacast distribution system
US20030095720A1 (en) | 2001-11-16 | 2003-05-22 | Patrick Chiu | Video production and compaction with collage picture frame user interface
US20070121144A1 (en) | 2001-12-03 | 2007-05-31 | Canon Kabushiki Kaisha | Information processing apparatus and information processing method
US20030107592A1 (en) | 2001-12-11 | 2003-06-12 | Koninklijke Philips Electronics N.V. | System and method for retrieving information related to persons in video programs
US20030112276A1 (en) | 2001-12-19 | 2003-06-19 | Clement Lau | User augmentation of content
US7137062B2 (en) | 2001-12-28 | 2006-11-14 | International Business Machines Corporation | System and method for hierarchical segmentation with latent semantic indexing in scale space
US20040205482A1 (en) | 2002-01-24 | 2004-10-14 | International Business Machines Corporation | Method and apparatus for active annotation of multimedia content
US20030196964A1 (en) | 2002-01-31 | 2003-10-23 | Koslow Evan E. | Microporous filter media, filteration systems containing same, and methods of making and using
JP2003283981A (en) | 2002-03-20 | 2003-10-03 | Nippon Telegr & Teleph Corp <Ntt> | Video comment input / display method and system, client device, video comment input / display program, and recording medium therefor
US7243301B2 (en) | 2002-04-10 | 2007-07-10 | Microsoft Corporation | Common annotation framework
US20140123168A1 (en) | 2002-05-10 | 2014-05-01 | Convergent Media Solutions Llc | Method and apparatus for browsing using alternative linkbases
US20030231198A1 (en) | 2002-06-18 | 2003-12-18 | Koninklijke Philips Electronics N.V. | System and method for providing videomarks for a video program
JP2005530296A (en) | 2002-06-18 | 2005-10-06 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | System and method for providing a video mark for a video program
US7149755B2 (en) | 2002-07-29 | 2006-12-12 | Hewlett-Packard Development Company, Lp. | Presenting a collection of media objects
US20040021685A1 (en) | 2002-07-30 | 2004-02-05 | Fuji Xerox Co., Ltd. | Systems and methods for filtering and/or viewing collaborative indexes of recorded media
JP2004080769A (en) | 2002-07-30 | 2004-03-11 | Fuji Xerox Co Ltd | Method, system, user interface, and program for executing and filtering a joint index of multimedia or video stream annotations
JP2004193979A (en) | 2002-12-11 | 2004-07-08 | Canon Inc | Video distribution system
US7992215B2 (en)* | 2002-12-11 | 2011-08-02 | Trio Systems, Llc | Annotation system for creating and retrieving media and methods relating to same
US20040210602A1 (en) | 2002-12-13 | 2004-10-21 | Hillis W. Daniel | Meta-Web
US6993347B2 (en) | 2002-12-17 | 2006-01-31 | International Business Machines Corporation | Dynamic media interleaving
US20040125148A1 (en) | 2002-12-30 | 2004-07-01 | The Board Of Trustees Of The Leland Stanford Junior University | Methods and apparatus for interactive point-of-view authoring of digital video content
US20040125133A1 (en) | 2002-12-30 | 2004-07-01 | The Board Of Trustees Of The Leland Stanford Junior University | Methods and apparatus for interactive network sharing of digital video content
US7131059B2 (en) | 2002-12-31 | 2006-10-31 | Hewlett-Packard Development Company, L.P. | Scalably presenting a collection of media objects
US20040128308A1 (en) | 2002-12-31 | 2004-07-01 | Pere Obrador | Scalably presenting a collection of media objects
US7383497B2 (en) | 2003-01-21 | 2008-06-03 | Microsoft Corporation | Random access editing of media
US20040172593A1 (en) | 2003-01-21 | 2004-09-02 | Curtis G. Wong | Rapid media group annotation
US20040152054A1 (en) | 2003-01-30 | 2004-08-05 | Gleissner Michael J.G. | System for learning language through embedded content on a single medium
US20040168118A1 (en) | 2003-02-24 | 2004-08-26 | Wong Curtis G | Interactive media frame display
US20040181545A1 (en) | 2003-03-10 | 2004-09-16 | Yining Deng | Generating and rendering annotated video files
US8392834B2 (en) | 2003-04-09 | 2013-03-05 | Hewlett-Packard Development Company, L.P. | Systems and methods of authoring a multimedia file
US20040205547A1 (en) | 2003-04-12 | 2004-10-14 | Feldt Kenneth Charles | Annotation process for message enabled digital content
US8202167B2 (en) | 2003-06-02 | 2012-06-19 | Disney Enterprises, Inc. | System and method of interactive video playback
US20050203876A1 (en) | 2003-06-20 | 2005-09-15 | International Business Machines Corporation | Heterogeneous multi-level extendable indexing for general purpose annotation systems
US7418656B1 (en) | 2003-10-03 | 2008-08-26 | Adobe Systems Incorporated | Dynamic annotations for electronics documents
US20050132401A1 (en) | 2003-12-10 | 2005-06-16 | Gilles Boccon-Gibod | Method and apparatus for exchanging preferences for replaying a program on a personal video recorder
US20050203892A1 (en) | 2004-03-02 | 2005-09-15 | Jonathan Wesley | Dynamically integrating disparate systems and providing secure data sharing
JP2007529822A (en) | 2004-03-15 | 2007-10-25 | ヤフー! インコーポレイテッド | Search system and method integrating user annotations from a trust network
US20050216457A1 (en) | 2004-03-15 | 2005-09-29 | Yahoo! Inc. | Systems and methods for collecting user annotations
US7599950B2 (en) | 2004-03-15 | 2009-10-06 | Yahoo! Inc. | Systems and methods for collecting user annotations
US20050207622A1 (en) | 2004-03-16 | 2005-09-22 | Haupt Gordon T | Interactive system for recognition analysis of multiple streams of video
US20050229227A1 (en) | 2004-04-13 | 2005-10-13 | Evenhere, Inc. | Aggregation of retailers for televised media programming product placement
US20050275716A1 (en) | 2004-06-14 | 2005-12-15 | Fuji Xerox Co., Ltd. | Display apparatus, system and display method
US7724277B2 (en) | 2004-06-14 | 2010-05-25 | Fuji Xerox Co., Ltd. | Display apparatus, system and display method
US20050289142A1 (en) | 2004-06-28 | 2005-12-29 | Adams Hugh W Jr | System and method for previewing relevance of streaming data
US20050289469A1 (en) | 2004-06-28 | 2005-12-29 | Chandler Roger D | Context tagging apparatus, systems, and methods
US20050286865A1 (en) | 2004-06-28 | 2005-12-29 | International Business Machines Corporation | Framework for extracting multiple-resolution semantics in composite media content analysis
US20060041564A1 (en) | 2004-08-20 | 2006-02-23 | Innovative Decision Technologies, Inc. | Graphical Annotations and Domain Objects to Create Feature Level Metadata of Images
US20060059120A1 (en) | 2004-08-27 | 2006-03-16 | Ziyou Xiong | Identifying video highlights using audio-visual objects
US20060053365A1 (en) | 2004-09-08 | 2006-03-09 | Josef Hollander | Method for creating custom annotated books
US20090199082A1 (en)* | 2004-09-08 | 2009-08-06 | Sharedbook Ltd. | System and method for annotation of web pages
US20090204882A1 (en) | 2004-09-08 | 2009-08-13 | Sharedbook Ltd. | System and method for annotation of web pages
US20060064733A1 (en) | 2004-09-20 | 2006-03-23 | Norton Jeffrey R | Playing an audiovisual work with dynamic choosing
US20060087987A1 (en) | 2004-10-05 | 2006-04-27 | Daniel Witt | Interactive video collaboration framework
US8443279B1 (en) | 2004-10-13 | 2013-05-14 | Stryker Corporation | Voice-responsive annotation of video generated by an endoscopic camera
US20060101328A1 (en) | 2004-11-08 | 2006-05-11 | International Business Machines Corporation | Multi-user, multi-timed collaborative annotation
JP2006157689A (en) | 2004-11-30 | 2006-06-15 | Nippon Telegr & Teleph Corp <Ntt> | Inter-viewer communication method, apparatus and program
JP2006157692A (en) | 2004-11-30 | 2006-06-15 | Nippon Telegr & Teleph Corp <Ntt> | Video reproduction method, apparatus and program
JP2006155384A (en) | 2004-11-30 | 2006-06-15 | Nippon Telegr & Teleph Corp <Ntt> | Video comment input / display method, apparatus, program, and storage medium storing program
US20060136813A1 (en) | 2004-12-16 | 2006-06-22 | Palo Alto Research Center Incorporated | Systems and methods for annotating pages of a 3D electronic document
US20060161838A1 (en) | 2005-01-14 | 2006-07-20 | Ronald Nydam | Review of signature based content
US20060161578A1 (en) | 2005-01-19 | 2006-07-20 | Siegel Hilliard B | Method and system for providing annotations of a digital work
US20080168073A1 (en) | 2005-01-19 | 2008-07-10 | Siegel Hilliard B | Providing Annotations of a Digital Work
US20060218590A1 (en) | 2005-03-10 | 2006-09-28 | Sbc Knowledge Ventures, L.P. | System and method for displaying an electronic program guide
US20060286536A1 (en) | 2005-04-01 | 2006-12-21 | Sherman Mohler | System and method for regulating use of content and content styles in a distributed learning system
US20070174774A1 (en) | 2005-04-20 | 2007-07-26 | Videoegg, Inc. | Browser editing with timeline representations
US7636883B2 (en) | 2005-05-18 | 2009-12-22 | International Business Machines Corporation | User form based automated and guided data collection
US20080005064A1 (en) | 2005-06-28 | 2008-01-03 | Yahoo! Inc. | Apparatus and method for content annotation and conditional annotation retrieval in a search context
US20060294134A1 (en) | 2005-06-28 | 2006-12-28 | Yahoo! Inc. | Trust propagation through both explicit and implicit social networks
US20070002946A1 (en) | 2005-07-01 | 2007-01-04 | Sonic Solutions | Method, apparatus and system for use in multimedia signal encoding
KR20070004153A (en) | 2005-07-04 | 2007-01-09 | 주식회사 다음커뮤니케이션 | User preferred content providing system and method, personal preferred content analysis system and method, group preferred content analysis system and method
US20070011651A1 (en) | 2005-07-07 | 2007-01-11 | Bea Systems, Inc. | Customized annotation editing
US7616946B2 (en) | 2005-07-21 | 2009-11-10 | Lg Electronics Inc. | Mobile terminal having bookmark function of contents service and operation method thereof
US8280827B2 (en) | 2005-08-23 | 2012-10-02 | Syneola Luxembourg Sa | Multilevel semiotic and fuzzy logic user and metadata interface means for interactive multimedia system having cognitive adaptive capability
US8181201B2 (en) | 2005-08-30 | 2012-05-15 | Nds Limited | Enhanced electronic program guides
US20080222511A1 (en) | 2005-09-12 | 2008-09-11 | International Business Machines Corporation | Method and Apparatus for Annotating a Document
US20070067707A1 (en) | 2005-09-16 | 2007-03-22 | Microsoft Corporation | Synchronous digital annotations of media data stream
US7644364B2 (en) | 2005-10-14 | 2010-01-05 | Microsoft Corporation | Photo and video collage effects
US20070094590A1 (en) | 2005-10-20 | 2007-04-26 | International Business Machines Corporation | System and method for providing dynamic process step annotations
JP2007151057A (en) | 2005-10-25 | 2007-06-14 | Dainippon Printing Co Ltd | Video content browsing system using evaluation impression information
US20070101387A1 (en) | 2005-10-31 | 2007-05-03 | Microsoft Corporation | Media Sharing And Authoring On The Web
US20070099684A1 (en) | 2005-11-03 | 2007-05-03 | Evans Butterworth | System and method for implementing an interactive storyline
US20120236143A1 (en) | 2005-11-04 | 2012-09-20 | Weatherhead James J | Characterizing dynamic regions of digital media data
JP2007142750A (en) | 2005-11-17 | 2007-06-07 | National Agency For The Advancement Of Sports & Health | Video browsing system, computer terminal and program
US7761436B2 (en) | 2006-01-03 | 2010-07-20 | Yahoo! Inc. | Apparatus and method for controlling content access based on shared annotations for annotated users in a folksonomy scheme
WO2007082169A2 (en) | 2006-01-05 | 2007-07-19 | Eyespot Corporation | Automatic aggregation of content for use in an online video editing system
US20070162568A1 (en) | 2006-01-06 | 2007-07-12 | Manish Gupta | Dynamic media serving infrastructure
JP2007274090A (en) | 2006-03-30 | 2007-10-18 | Toshiba Corp | Content playback apparatus, method and program
US20070250901A1 (en) | 2006-03-30 | 2007-10-25 | Mcintire John P | Method and apparatus for annotating media streams
US8645991B2 (en) | 2006-03-30 | 2014-02-04 | Tout Industries, Inc. | Method and apparatus for annotating media streams
US7558017B2 (en) | 2006-04-24 | 2009-07-07 | Hitachi Global Storage Technologies Netherlands B.V. | Magnetic disk drive and a loading/unloading method
US20070256016A1 (en) | 2006-04-26 | 2007-11-01 | Bedingfield James C Sr | Methods, systems, and computer program products for managing video information
US20070266304A1 (en) | 2006-05-15 | 2007-11-15 | Microsoft Corporation | Annotating media files
US20070271331A1 (en) | 2006-05-17 | 2007-11-22 | Steve Muth | System of archiving and repurposing a complex group conversation referencing networked media
WO2007135688A2 (en) | 2006-05-22 | 2007-11-29 | P.S.G Group | A method for interactive commenting on media files
JP2007310833A (en) | 2006-05-22 | 2007-11-29 | Nippon Telegr & Teleph Corp <Ntt> | Server apparatus and client apparatus and program thereof
JP2007317123A (en) | 2006-05-29 | 2007-12-06 | Daisuke Yamamoto | Server for managing dynamic images
US20080028323A1 (en) | 2006-07-27 | 2008-01-31 | Joshua Rosen | Method for Initiating and Launching Collaboration Sessions
US20080028294A1 (en) | 2006-07-28 | 2008-01-31 | Blue Lava Technologies | Method and system for managing and maintaining multimedia content
US20100169927A1 (en) | 2006-08-10 | 2010-07-01 | Masaru Yamaoka | Program recommendation system, program view terminal, program view program, program view method, program recommendation server, program recommendation program, and program recommendation method
US20100278453A1 (en)* | 2006-09-15 | 2010-11-04 | King Martin T | Capture and display of annotations in paper and electronic documents
US20080086742A1 (en) | 2006-10-09 | 2008-04-10 | Verizon Services Corp. | Systems And Methods For Real-Time Interactive Television Polling
US7945653B2 (en) | 2006-10-11 | 2011-05-17 | Facebook, Inc. | Tagging digital media
US20080091723A1 (en) | 2006-10-11 | 2008-04-17 | Mark Zuckerberg | System and method for tagging digital media
US20080109851A1 (en) | 2006-10-23 | 2008-05-08 | Ashley Heather | Method and system for providing interactive video
US20080109841A1 (en) | 2006-10-23 | 2008-05-08 | Ashley Heather | Product information display and product linking
US7559017B2 (en) | 2006-12-22 | 2009-07-07 | Google Inc. | Annotation framework for video
US20090249185A1 (en) | 2006-12-22 | 2009-10-01 | Google Inc. | Annotation Framework For Video
US8151182B2 (en) | 2006-12-22 | 2012-04-03 | Google Inc. | Annotation framework for video
US20080168055A1 (en) | 2007-01-04 | 2008-07-10 | Wide Angle Llc | Relevancy rating of tags
US20080168070A1 (en) | 2007-01-08 | 2008-07-10 | Naphade Milind R | Method and apparatus for classifying multimedia artifacts using ontology selection and semantic classification
US20080195657A1 (en) | 2007-02-08 | 2008-08-14 | Yahoo! Inc. | Context-based community-driven suggestions for media annotation
US20080250331A1 (en) | 2007-04-04 | 2008-10-09 | Atul Tulshibagwale | Method and System of a Voting Based Wiki and Its Application to Internet Topic Directories
US20090076843A1 (en) | 2007-06-07 | 2009-03-19 | Graff David S | Interactive team portal system
US20090007200A1 (en) | 2007-06-29 | 2009-01-01 | At&T Knowledge Ventures, Lp | System and method of providing video content commentary
US20090087161A1 (en) | 2007-09-28 | 2009-04-02 | Gracenote, Inc. | Synthesizing a presentation of a multimedia event
US20090150947A1 (en) | 2007-10-05 | 2009-06-11 | Soderstrom Robert W | Online search, storage, manipulation, and delivery of video content
US20090094520A1 (en) | 2007-10-07 | 2009-04-09 | Kulas Charles J | User Interface for Creating Tags Synchronized with a Video Playback
US8209223B2 (en) | 2007-11-30 | 2012-06-26 | Google Inc. | Video object tag creation and processing
US20090172745A1 (en) | 2007-12-28 | 2009-07-02 | Motorola, Inc. | Method and Apparatus Regarding Receipt of Audio-Visual Content Information and Use of Such Information to Automatically Infer a Relative Popularity of That Content
US20090199251A1 (en) | 2008-02-06 | 2009-08-06 | Mihai Badoiu | System and Method for Voting on Popular Video Intervals
US20090210779A1 (en) | 2008-02-19 | 2009-08-20 | Mihai Badoiu | Annotating Video Intervals
US20090276805A1 (en) | 2008-05-03 | 2009-11-05 | Andrews Ii James K | Method and system for generation and playback of supplemented videos
US20090297118A1 (en) | 2008-06-03 | 2009-12-03 | Google Inc. | Web-based system for generation of interactive games based on digital videos
US20090300475A1 (en) | 2008-06-03 | 2009-12-03 | Google Inc. | Web-based system for collaborative generation of interactive videos
US20130290996A1 (en) | 2008-06-18 | 2013-10-31 | Ipowow! Ltd. | Assessing digital content across a communications network
US8839327B2 (en) | 2008-06-25 | 2014-09-16 | At&T Intellectual Property Ii, Lp | Method and apparatus for presenting media programs
US8713618B1 (en) | 2008-11-06 | 2014-04-29 | Google Inc. | Segmenting video based on timestamps in comments
US20100287236A1 (en) | 2009-04-16 | 2010-11-11 | Brian Amento | Collective asynchronous media review
US20140101691A1 (en) | 2011-06-10 | 2014-04-10 | Tata Consultancy Services Limited | Method and system for automatic tagging in television using crowd sourcing technique
US20130238995A1 (en) | 2012-03-12 | 2013-09-12 | sCoolTV, Inc | Apparatus and method for adding content using a media player
US20130263002A1 (en) | 2012-03-30 | 2013-10-03 | Lg Electronics Inc. | Mobile terminal

Non-Patent Citations (87)

* Cited by examiner, † Cited by third party
Title
Arman, F., et al., "Image Processing on Encoded Video Sequences", In ACM Multimedia Systems Journal, vol. 1, No. 5, Mar. 1994, pp. 211-219.
Assfalg, J., et al., "Semantic Annotation of Sports Videos", In IEEE Multimedia, vol. 9, No. 2, Aug. 2002, pp. 52-60.
Caspi, Y. and Bargeron, D., "Sharing Video Annotations", In Proceedings of the International Conference on Image Processing, Singapore, Oct. 24-27, 2004, pp. 2227-2230.
European Extended Search Report dated Dec. 19, 2012 in EP Patent Application No. 09711777.4.
European Extended Search Report dated May 18, 2012 in EP Patent Application No. 07865849.9.
European Extended Search Report dated Sep. 21, 2012 in EP Application No. 09709327.2.
Examination Report dated Apr. 28, 2015 in EP Patent Application No. 7865849.9.
Examination Report dated Aug. 18, 2015 in EP Patent Application No. 09711777.4.
Examination Report dated Jan. 10, 2014 in EP Patent Application No. 09709327.2.
Examination Report dated Jan. 27, 2012 in AU Patent Application No. 2010249316.
Extended European Search Report dated Feb. 10, 2015 in EP Patent Application No. 9758919.6.
Ford, R.M., et al., "Metrics for Shot Boundary Detection in Digital Video Sequences", In Multimedia Systems, vol. 8, No. 1, Jan. 2000, pp. 1432-1882.
Gonzalez, N., "Video Ads: Every Startup Has a Different Solution", TechCrunch, Jul. 6, 2007, pp. 1-7, available at: https://techcrunch.com/2007/07/06/video-ads-somebody-needs-to-solve-this-problem.
Good, R., "Online Video Publishing Gets Into the Conversation: Click.TV", In What Communication Experts Need to Know, Apr. 18, 2006, pp. 1-10, available at: http://www.masternewmedia.org/news/2006/04/18/online_video_publishing_gets_into.html.
Google Video Blog, "New Commenting and Stats Features", Nov. 14, 2006, p. 1, available at: http://googlevideo.blogspot.com/2006/11/new-commenting-and-stats-features.html.
Google Video Blog, "New Feature: Link Within a Video", Jul. 19, 2006, pp. 1, available at: http://google.blogspot.com/2006/11/new-feature-link-within-video_19.html.
Gordon, A.S., "Using Annotated Video as an Information Retrieval Interface", In Proceedings of the Conference on Intelligent User Interfaces, New Orleans, LA, US, Jan. 9-12, 2000, pp. 133-140.
International Search Report and Written Opinion dated Aug. 20, 2009 in International Patent Application No. PCT/US2009/033475.
International Search Report and Written Opinion dated Jul. 21, 2008 in International Patent Application No. PCT/US2007/088067.
International Search Report and Written Opinion dated Jun. 17, 2009 in International Patent Application No. PCT/US2009/042919.
International Search Report and Written Opinion dated Oct. 6, 2009 in International Patent Application No. PCT/US2009/034422.
Masuda, T., et al., "Video Scene Retrieval Using Online Video Annotation", In New Frontiers in Artificial Intelligence, Lecture Notes in Computer Science, Jun. 23, 2003, pp. 54-62.
May, M., "Computer Vision: What is the Difference Between Local Descriptors and Global Descriptors", Computer Vision, Mar. 31, 2013, pp. 1.
Media X, "Online Media Bookmark Manager", last accessed Jul. 18, 2008, pp. 1-2, available at: http://mediax.stanford.edu/documents/bookmark.pdf.
Mikolajczyk, K. and Schmid, C., "A Performance Evaluation of Local Descriptors", In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, No. 10, Oct. 2005, pp. 1615-1630.
MirriAd, Month Unknown 2008, last accessed Apr. 20, 2009, pp. 1, available at: http://www.mirriad.com/.
Miyamori, H., et al., "Generation of Views of TV Content Using TV Viewers' Perspectives Expressed in Live Chats on the Web", In Proceedings of the 13th ACM International Conference on Multimedia, Singapore, Nov. 6-11, 2005, pp. 853-861.
Moenne-Loccoz, N., et al., "Managing Video Collections at Large", In CUDB '04: Proceedings of the 1st International Workshop on Computer Vision Meets Database, Paris, FR, Jun. 13, 2004, pp. 59-66.
Nagao, K., et al., "Semantic Annotation and Transcoding: Making Web Content More Accessible", In IEEE MultiMedia, Apr. 2001, pp. 1-13.
Naphade, M.R., et al., "A High Performance Shot Boundary Detection Algorithm using Multiple Cues", In Proceedings of the International Conference on Image Processing, Chicago, IL, US, Oct. 4-7, 1998, pp. 884-887.
Notice of Allowance dated Apr. 15, 2022 in U.S. Appl. No. 17/107,018.
Notice of Allowance dated Dec. 2, 2011 in U.S. Appl. No. 12/477,762.
Notice of Allowance dated Jul. 22, 2008 in U.S. Appl. No. 11/615,771.
Notice of Allowance dated Jul. 29, 2020 in U.S. Appl. No. 16/384,289.
Notice of Allowance dated Jun. 28, 2017 in U.S. Appl. No. 14/145,641.
Notice of Allowance dated Mar. 11, 2009 in U.S. Appl. No. 11/615,771.
Notice of Allowance dated Mar. 3, 2014 in U.S. Appl. No. 13/414,675.
Notice of Allowance dated Nov. 21, 2018 in U.S. Appl. No. 15/795,635.
Office Action dated Apr. 21, 2016 in CA Patent Application No. 2866548.
Office Action dated Apr. 23, 2013 in JP Patent Application No. 2010-546967.
Office Action dated Apr. 29, 2014 in AU Patent Application No. 2012244141.
Office Action dated Apr. 30, 2015 in CN Patent Application No. 200780050525.4.
Office Action dated Aug. 13, 2012 in CN Patent Application No. 200980108230.7.
Office Action dated Aug. 5, 2016 in U.S. Appl. No. 14/145,641.
Office Action dated Dec. 21, 2010 in CN Patent Application No. 200780050525.4.
Office Action dated Dec. 26, 2013 in CN Patent Application No. 200780050525.4.
Office Action dated Dec. 29, 2008 in U.S. Appl. No. 11/615,771.
Office Action dated Feb. 1, 2016 in JP Patent Application No. 2014-094684.
Office Action dated Feb. 24, 2016 in U.S. Appl. No. 14/145,641.
Office Action dated Feb. 26, 2015 in KR Patent Application No. 10-2010-7020965.
Office Action dated Feb. 5, 2010 in KR Patent Application No. 10-2009-7015068.
Office Action dated Feb. 7, 2017 in U.S. Appl. No. 14/145,641.
Office Action dated Jan. 16, 2020 in U.S. Appl. No. 16/384,289.
Office Action dated Jul. 22, 2008 in U.S. Appl. No. 11/615,771.
Office Action dated Jun. 14, 2013 in CN Patent Application No. 200780050525.4.
Office Action dated Jun. 16, 2011 in U.S. Appl. No. 12/477,762.
Office Action dated Jun. 20, 2011 in AU Patent Application No. 2010249316.
Office Action dated Mar. 17, 2015 in JP Patent Application No. 2014-094684.
Office Action dated Mar. 18, 2014 in IN Patent Application No. 1191/MUMNP/2009.
Office Action dated Mar. 21, 2013 in CA Patent Application No. 2672757.
Office Action dated Mar. 27, 2020 in U.S. Appl. No. 16/384,289.
Office Action dated Nov. 18, 2021 in U.S. Appl. No. 17/107,018.
Office Action dated Nov. 26, 2012 in CA Patent Application No. 2726777.
Office Action dated Oct. 19, 2010 in JP Patent Application No. P2009-543172.
Office Action dated Oct. 26, 2015 in CN Patent Application No. 200780050525.4.
Office Action dated Oct. 5, 2009 in KR Patent Application No. 10-2009-7015068.
Office Action dated Sep. 10, 2019 in U.S. Appl. No. 16/384,289.
Office Action dated Sep. 16, 2014 in CN Application No. 200780050525.4.
Office Action dated Sep. 18, 2012 in CN Patent Application No. 200910206036.4.
Ooyala, Inc., "Ooyala—Interactive Video Advertising", Month Unknown 2009, last accessed Apr. 20, 2009, pp. 1, available at: http://ooyala.com/products/ivideo.
Participatory Culture Foundation, "Ticket #3504 (new enhancement)", Software Development, Aug. 14, 2006, last accessed Jan. 16, 2007, pp. 1, available at: http://develop.participatoryculture.org/trac/democracy/ticket/3504.
PLYmedia Inc., "BubblePLY", Month Unknown 2008, last accessed Apr. 20, 2009, pp. 1, available at: http://www.plymedia.com/products/bubbleply/.aspx.
Reverend, "More on Mojiti", Bavatuesdays, Mar. 23, 2007, last accessed Apr. 10, 2019, pp. 1-2, available at: http://bavatuesdays.com/more-on-mojiti/.
Schroeter, R., et al., "Vannotea-A Collaborative Video Indexing, Annotation and Discussion System for Broadband Networks", In Proceedings of the Knowledge Capture, Sanibel, FL, US, Oct. 23-26, 2003, pp. 1-8.
Screenshot of "In Video Demo—Check out the Yelp/AdSense demo", Ooyala, Inc., Date Unknown, last accessed Apr. 23, 2009, pp. 1, available at: http://ooyala.com/producsts/ivideo.
Screenshot of "Remixer", YouTube.com, May 2007 to Feb. 2008, pp. 1.
Screenshot of "Veeple Labs—Interactive Video", Veeple, Month Unkown, last accessed Jun. 9, 2008, pp. 1, available at: http://veeple.com/.
Summons to Attend Oral Proceedings dated Apr. 13, 2016 in EP Patent Application No. 07865849.9.
Summons to Attend Oral Proceedings dated Apr. 24, 2017 in EP Patent Application No. 09711777.4.
Summons to Attend Oral Proceedings dated Sep. 25, 2014 in EP Patent Application No. 09709327.2.
Tjondronegoro, D., et al., "Content-Based Video Indexing for Sports Applications Using Integrated Multi-Modal Approach", In Multimedia '05: Proceedings of the 13th Annual ACM International Conference on Multimedia, Singapore, Nov. 6-11, 2005, pp. 1035-1036.
Tseng, B.L. and Lin, C.Y., "Personalized Video Summary Using Visual Semantic Annotations and Automatic Speech Transcriptions", In Proceedings of the IEEE Workshop on Multimedia Signal Processing, St. Thomas, VI, US, Dec. 9-11, 2002, pp. 1-4.
Veeple.com, "Video Marketing, Video Editing & Hosting, Interactive Video", Month Unknown 2009, last accessed Apr. 20, 2009, pp. 1, available at: http://www.veeple.com/interactivity.php.
Zabih, R., et al., "A Feature-Based Algorithm for Detecting and Classifying Scene Breaks", In Proceedings of the 3rd ACM International Conference on Multimedia, San Francisco, CA, US, Nov. 5-9, 1995, pp. 189-200.
Zentation.com, "The Art of Innovation", last accessed Jun. 26, 2009, pp. 1, available at: http//www.zentation.com/viewer/setup.php?passcode=De2cwpjHsd.
Zentation.com, "Where Video and PowerPoint Meet on the Web", last accessed Oct. 24, 2017, pp. 1, available at: http://www.zentation.com/.
Zentation.com, last accessed Jun. 26, 2009, pp. 1, available at: http://www.zentation.com/viewer/index.php?passcode=epbcSNExIQr.

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US12073852B2 (en)* | 2021-11-09 | 2024-08-27 | Samsung Electronics Co., Ltd. | Electronic device and method for automatically generating edited video

Also Published As

Publication number | Publication date
US8151182B2 (en) | 2012-04-03
EP2126730A2 (en) | 2009-12-02
US9805012B2 (en) | 2017-10-31
CN101589383A (en) | 2009-11-25
CA2866548C (en) | 2017-05-09
US20090249185A1 (en) | 2009-10-01
US20140115440A1 (en) | 2014-04-24
JP2010515120A (en) | 2010-05-06
AU2007336947B2 (en) | 2010-09-30
KR100963179B1 (en) | 2010-06-15
US11423213B2 (en) | 2022-08-23
US20210081603A1 (en) | 2021-03-18
JP4774461B2 (en) | 2011-09-14
US7559017B2 (en) | 2009-07-07
EP3324315A1 (en) | 2018-05-23
US10853562B2 (en) | 2020-12-01
EP2126730A4 (en) | 2012-06-20
BRPI0720366B1 (en) | 2019-03-06
CN101589383B (en) | 2016-04-27
WO2008079850A2 (en) | 2008-07-03
AU2010249316A1 (en) | 2011-01-06
CA2672757A1 (en) | 2008-07-03
US20080154908A1 (en) | 2008-06-26
US20220398375A1 (en) | 2022-12-15
US20120166930A1 (en) | 2012-06-28
KR20090084976A (en) | 2009-08-05
CA2672757C (en) | 2014-12-16
BRPI0720366A2 (en) | 2013-12-24
WO2008079850A3 (en) | 2008-10-02
US8775922B2 (en) | 2014-07-08
CA2866548A1 (en) | 2008-07-03
AU2007336947A1 (en) | 2008-07-03
EP2126730B1 (en) | 2018-03-14
US20190243887A1 (en) | 2019-08-08
US20180052812A1 (en) | 2018-02-22
AU2010249316B2 (en) | 2012-07-26
US10261986B2 (en) | 2019-04-16

Similar Documents

Publication | Publication Date | Title
US11727201B2 (en) | Annotation framework for video
JP6342951B2 (en) | Annotate video interval
AU2012244141B2 (en) | Annotation framework for video

Legal Events

Date | Code | Title | Description
FEPP | Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP | Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP | Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STCF | Information on status: patent grant

Free format text: PATENTED CASE

