BACKGROUND

The present disclosure relates to content delivery and consumption systems and, more particularly, to repeating portions of content associated with a particular subject in the content.
SUMMARY

In conventional media consumption systems, when a user wishes to repeat a portion of content (e.g., because the user did not understand the dialogue, or the user missed an action sequence), the user must rewind the content to a playback position prior to the portion they wish to repeat. However, rewind mechanisms are imprecise and do not allow the user to easily control the playback position to which the content returns. This results in the user either not rewinding far enough and missing some of the content the user wishes to repeat, or rewinding farther than the playback position at which the desired portion begins and having to unnecessarily re-watch additional portions of the content.
Furthermore, professionally generated content often contains closed captioning data, which allows a system to repeat a specific portion of audio associated with the subject or character who spoke the dialogue. In the case of user-generated content (for example, the millions of hours of video uploaded to websites for playback each day), there is no embedded closed captioning data that a system could use for this purpose.
Systems and methods are described herein for repeating portions of content associated with a particular subject (e.g., character or object) in the content. While the content is playing on a device, content data is analyzed, and a number of signatures are identified. In some embodiments, audio data is analyzed to identify audio signatures (voice or song recognition is an example where audio signatures can be used as identifiers), and each audio signature is associated, based on audio and/or video characteristics, with a particular subject within the content. In some embodiments, video data is analyzed to identify action signatures based on the motion of subjects displayed in the content. An identifier of each action signature is stored, along with a timestamp corresponding to a playback position at which the action signature begins. Subjects may also be identified during playback, and subject signatures identified or assigned to each subject. An identifier of each subject signature is stored, along with a timestamp corresponding to a playback position at which the subject is displayed in the content. A subject signature may be assigned to an audio signature or action signature having the same timestamp.
Upon receiving a command, playback of the content is paused and icons representing each of a number of signatures are displayed. The number of icons may be determined by the number of signatures at or near the current playback position, or icons representing all signatures identified up to the current playback position may be displayed. Upon receiving user selection of an icon corresponding to a particular signature, a portion of the content corresponding to the signature is played back.
In some embodiments, upon receiving user selection of an icon corresponding to a particular subject, an identifier of the subject is retrieved. The timestamp of a signature associated with the identifier is then retrieved, and a portion of the content is played back beginning at the timestamp.
Each icon may include an image of the subject of its respective signature. Video data corresponding to the signature is processed, and a subject of the signature is identified. A portion of a frame of video data in which the subject is displayed is captured as an image for display in the icon.
To identify an audio signature, audio data is analyzed beginning at a first playback position. Audio characteristics unique to a first subject are identified. As analysis continues, audio characteristics of the current audio data are compared with those of previous data. If a significant change in audio characteristics is detected, the portion of audio data from the first playback position to the current playback position is identified as an audio signature. Video data may also be analyzed to determine whether a particular subject responsible for the audio is displayed in the content.
More than one audio signature may have an ending playback position within a threshold amount of time of the current playback position. To determine which portion of audio data to repeat, it is determined whether any of the audio signatures overlap one another. If not, the portion of audio data corresponding to the most recent audio signature is played back. If an audio signature does overlap with another, audio data corresponding to each audio signature is isolated. Icons corresponding to the subject of each signature are then displayed, and the portion corresponding to a selected icon is played back.
To identify an action signature, video data is analyzed beginning at a first playback position. Motion displayed in the content is tracked, and a subject of motion is identified. For example, a face may be detected in a frame of the video content. As analysis continues, the level of motion (e.g., the speed at which the subject moves) is detected. When the level of motion is detected as above a threshold level, the portion of content from the first playback position to the current playback position is identified as an action signature.
If a user selects a particular icon more than once, the system identifies a number of signatures corresponding to the subject represented by the selected icon. The number of selections is counted, and the system retrieves the signature that is the number of selections prior to the current playback position and repeats the portion of content identified by the retrieved signature. For example, if the user double-taps an icon, the system will play back the second-most-recent content associated with that subject (e.g., what the subject said prior to the last comment).
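By way of illustration only, the following is a minimal Python sketch of this selection-counting behavior; the function name and the list-of-dictionaries layout are assumptions introduced here and are not part of the disclosure.

```python
# Hypothetical sketch: selecting the Nth-most-recent signature for a
# subject based on how many times the subject's icon was tapped.

def signature_for_taps(signatures, subject_id, tap_count, current_position):
    """Return the signature that is `tap_count` selections prior to the
    current playback position for the given subject, or None."""
    # Keep only this subject's signatures that start before the current
    # playback position, ordered most recent first.
    candidates = sorted(
        (s for s in signatures
         if s["subject_id"] == subject_id and s["start"] < current_position),
        key=lambda s: s["start"],
        reverse=True,
    )
    # One tap selects the most recent signature, a double tap the
    # second-most-recent, and so on.
    index = tap_count - 1
    return candidates[index] if index < len(candidates) else None

signatures = [
    {"subject_id": "character1", "start": 12.0},
    {"subject_id": "character1", "start": 47.5},
    {"subject_id": "character2", "start": 60.2},
]
# Double tap: returns the signature starting at 12.0.
print(signature_for_taps(signatures, "character1", 2, 90.0))
```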
If an action signature has a length below a minimum threshold, the portion of content corresponding to the signature is repeated in slow motion. If the action signature has a length between the minimum threshold and a maximum threshold, the portion of content is played in a continuous loop until another input command is received, for a predetermined number of loops, or for a predetermined period of time.
BRIEF DESCRIPTION OF THE DRAWINGS

The above and other objects and advantages of the disclosure will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
FIG. 1 shows a generalized embodiment of a user interface displayed in response to a command to repeat audio in accordance with some embodiments of the disclosure;
FIG. 2 shows a generalized embodiment of audio data to be processed in accordance with some embodiments of the disclosure;
FIG. 3 shows a generalized embodiment of video data to be processed in accordance with some embodiments of the disclosure;
FIG. 4 shows an example of a table generated by processing audio and video data together and assignment of subject signatures to audio signatures in accordance with some embodiments of the disclosure;
FIG. 5 shows another generalized embodiment of video data to be processed in accordance with some embodiments of the disclosure;
FIG. 6 shows an example of a table generated by processing video data and assignment of subject signatures to action signatures in accordance with some embodiments of the disclosure;
FIG. 7 is a block diagram representing control circuitry and data flow within a media device in response to a command to repeat audio in accordance with some embodiments of the disclosure;
FIG. 8 is a flowchart representing a process for repeating a portion of content in accordance with some embodiments of the disclosure;
FIG. 9 is a flowchart representing a process for assigning a subject signature to an audio signature or action signature in accordance with some embodiments of the disclosure;
FIG. 10 is a flowchart representing a process for playing back a portion of content in accordance with some embodiments of the disclosure;
FIG. 11 is a flowchart representing a process for capturing an image of a subject from video data in accordance with some embodiments of the disclosure;
FIG. 12 is a flowchart representing a process for identifying audio signatures in accordance with some embodiments of the disclosure;
FIG. 13 is a flowchart representing a process for assigning audio signatures to a subject in accordance with some embodiments of the disclosure;
FIG. 14 is a flowchart representing a process for playing back one of a plurality of portions of audio in accordance with some embodiments of the disclosure;
FIG. 15 is a flowchart representing a process for identifying action signatures in accordance with some embodiments of the disclosure;
FIG. 16 is a flowchart representing a process for identifying a subject displayed in content in accordance with some embodiments of the disclosure;
FIG. 17 is a flowchart representing a process for detecting a threshold level of motion in accordance with some embodiments of the disclosure; and
FIG. 18 is a flowchart representing a process for repeating a portion of content in slow motion or in a loop in accordance with some embodiments of the disclosure.
DETAILED DESCRIPTION

FIG. 1 depicts a user interface displayed over content 102 in response to a command to pause or repeat a portion of media content. While content 102 is being consumed on media device 100, the media device 100 processes audio and/or video data of the content 102 to identify a number of signatures (e.g., audio signatures or action signatures, as discussed below). A user may tap on a touchscreen interface of the media device 100 to pause the content 102. A user may alternatively or additionally request that a portion of content be repeated, using, for example, a voice command or user input device. Upon receiving the command, the media device 100 pauses playback of the content 102 and displays a series of icons 112a-112d representing subjects 104, 106, 108, and 110 of signatures at or near the paused playback position. The user may select one of the icons 112a-112d and, in response, the media device 100 repeats the portion of content corresponding to a recent signature associated with the subject represented by the selected icon. For example, icon 112a represents subject 104. In response to selection of icon 112a, media device 100 repeats a portion of dialogue identified by media device 100 as having been spoken by the character identified as subject 104.
FIG. 2 depicts an embodiment of audio processing to identify audio signatures in content 102. Media device 100 processes audio data 200 during playback of content 102. Media device 100 identifies audio characteristics of audio data 202 and determines that audio data 202 is spoken or otherwise generated by a single subject in content 102. As playback of content 102 continues, media device 100 processes audio data 204 and determines, based on a comparison of audio characteristics of audio data 204 with those of audio data 202, that audio data 204 is spoken or otherwise generated by a different subject than that of audio data 202. The media device 100 may generate a database or other data structure in which to store each audio signature along with an identifier of the associated subject. Media device 100 continues processing audio data 206 and 208 in a similar manner. In some cases, multiple subjects may generate audio at the same time. For example, audio data 210 may include audio generated by two separate subjects simultaneously. Media device 100 processes the audio data and isolates audio data from each subject using audio characteristics specific to each subject, such as base frequency, modulation, amplitude, or other audio characteristics.
FIG. 3 depicts an embodiment of video processing to identify subject signatures in content 102. Media device 100 processes video data 300, in conjunction with the audio data, to identify subjects in video of content 102. Media device 100 processes video data 300 to identify discrete objects and characters/actors in content 102. Media device 100 determines at least one object or character/actor present in at least one frame of video. Media device 100 may use facial recognition, object recognition, edge detection, or any other suitable video processing methods to identify objects and characters/actors. Media device 100 determines that Character 1 is displayed in video data portions 302, 306, and 312 and Character 2 is displayed in video data portions 304, 308, and 310. Media device 100 may store parameters corresponding to each identified character as a subject signature.
FIG. 4 shows an example of a table generated by processing audio and video data together and assignment of subject signatures to audio signatures in accordance with some embodiments of the disclosure. Media device 100 determines at what timestamps a subject signature and an audio signature overlap and assigns the respective subject to the overlapping audio signature. Between T0 and T1, Character 1 is displayed in content 102. From T1 through T4, Character 1 continues to be displayed while audio signature S1 is present in the content 102. Media device 100 assigns S1 to Character 1. From T4 to T5, Character 1 continues to be displayed, but no audio signature is present. From T5 to T6, Character 2 is displayed in content 102 while audio signature S2 is present. Media device 100 assigns S2 to Character 2. Beginning at T7, both Character 1 and Character 2 are displayed in content 102. Audio signature S3 begins at T8. Media device 100 determines that the audio characteristics of audio signature S3 do not match the audio characteristics of any previously identified audio signature (i.e., S1 or S2) and temporarily assigns audio signature S3 to "UNKNOWN-1." Similarly, at T13, audio signature S4 is present in content 102 and media device 100 determines that the audio characteristics of audio signature S4 do not match the audio characteristics of any previously identified audio signature (i.e., S1, S2, or S3). Additionally, no character is displayed at T13. Therefore, media device 100 temporarily assigns audio signature S4 to "UNKNOWN-2." At T18, an audio signature begins while Character 1 is displayed in the content 102. The audio characteristics of the audio signature match those of audio signature S1, which was previously identified and assigned to Character 1. Media device 100 therefore identifies the audio signature as S1 and assigns it to Character 1. At T18, Character 2 is also displayed in the content 102 and, at T21, while audio signature S1 is still present in the content 102, another audio signature begins, having audio characteristics matching those of audio signature S2. Because S2 was previously identified as assigned to Character 2, and because S1 is still present and is already assigned to Character 1, media device 100 assigns S2 to Character 2. Using a similar analysis at T23, media device 100 identifies the audio signature as S1 based on its audio characteristics and assigns it to Character 1.
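For purposes of illustration, the following Python sketch captures the core of this overlap-based assignment; the interval representation is an assumption, and the sketch omits the audio-characteristic matching used above to resolve cases where two characters are displayed at once.

```python
# Hypothetical sketch of assigning subjects to audio signatures by
# timestamp overlap, with UNKNOWN-n placeholders for unmatched audio.

def assign_subjects(audio_signatures, subject_intervals):
    """audio_signatures: {sig_id: (start, end)}
    subject_intervals: {subject: (start, end)} while displayed on screen."""
    unknown_count = 0
    assignments = {}
    for sig_id, (a_start, a_end) in audio_signatures.items():
        match = None
        for subject, (s_start, s_end) in subject_intervals.items():
            # Overlap test: the two intervals share at least one instant.
            if a_start < s_end and s_start < a_end:
                match = subject
                break
        if match is None:
            # No subject displayed during the signature: label it
            # temporarily, as with UNKNOWN-1 and UNKNOWN-2 above.
            unknown_count += 1
            match = f"UNKNOWN-{unknown_count}"
        assignments[sig_id] = match
    return assignments

audio = {"S1": (1, 4), "S2": (5, 6), "S4": (13, 14)}
subjects = {"Character1": (0, 5), "Character2": (5, 7)}
print(assign_subjects(audio, subjects))
# {'S1': 'Character1', 'S2': 'Character2', 'S4': 'UNKNOWN-1'}
```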
FIG. 5 depicts an embodiment of video processing to identify action signatures in content 102. Media device 100 processes video data 300 to determine if the motion of any subject displayed in the video exceeds a threshold level of motion 502. Media device 100 first identifies motion 504 of subjects in video of content 102. Media device 100 identifies discrete objects and characters/actors in content 102. Media device 100 determines at least one object or character/actor present in at least one frame of video. Media device 100 may use facial recognition, object recognition, edge detection, or any other suitable video processing methods to identify objects and characters/actors. Media device 100 compares the position of each subject in a subsequent frame of video to determine if any subject moved more than a threshold distance between the two frames of video. For example, media device 100 determines that the motion of Character 1 exceeds threshold 502 from T3 through T8 and identifies motion 506 as action signature A1. Similarly, media device 100 determines that the motion of Character 2 exceeds the threshold 502 from T19 through T23 and identifies motion 508 as action signature A2.
FIG. 6 shows an example of a table generated by processing video data and assignment of subject signatures to action signatures in accordance with some embodiments of the disclosure. Similar to the analyses described above in connection with FIG. 4, media device 100 assigns action signatures to subject signatures based on which subject signatures coincide with which action signatures. Thus, action signature A1 is assigned to Character 1 at T3 due to Character 1 being present at the start of A1, and action signature A2 is assigned to Character 2 at T19 due to Character 2 being present at the start of A2.
FIG. 7 is an illustrative block diagram representing circuitry and data flow within media device 100 in accordance with some embodiments of the disclosure. Media device 100 may be any device for accessing the content described above, such as a television, a Smart TV, a set-top box, an integrated receiver decoder (IRD) for handling satellite television, a digital storage device, a digital media receiver (DMR), a digital media adapter (DMA), a streaming media device, a DVD player, a DVD recorder, a connected DVD, a local media server, a BLU-RAY player, a BLU-RAY recorder, a personal computer (PC), a laptop computer, a tablet computer, a WebTV box, a personal computer television (PC/TV), a PC media server, a PC media center, a handheld computer, a stationary telephone, a personal digital assistant (PDA), a mobile telephone, a portable video player, a portable music player, a portable gaming machine, a smartphone, or any other television equipment, computing equipment, or wireless device, and/or combination of the same. Media device 100 comprises input circuitry 704. Input circuitry 704 may include a microphone and voice processing circuitry for receiving voice commands, infrared receiving circuitry for receiving commands from a remote control device, a touchscreen interface for receiving user interaction with graphical user interface elements, or any combination thereof, or any other suitable input circuitry for receiving any other suitable user input. Media device 100 also comprises control circuitry 700 and storage 702. Control circuitry 700 may be based on any suitable processing circuitry and comprises control circuits and memory circuits, which may be disposed on a single integrated circuit or may be discrete components. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores). In some embodiments, processing circuitry may be distributed across multiple separate processors or processing units, for example, multiple of the same type of processing units (e.g., two Intel Core i7 processors) or multiple different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor). Some control circuits may be implemented in hardware, firmware, or software. Input circuitry 704 may be integrated with control circuitry 700. Storage 702 may be any device for storing electronic data, such as random-access memory, read-only memory, hard drives, optical drives, digital video disc (DVD) recorders, compact disc (CD) recorders, BLU-RAY disc (BD) recorders, BLU-RAY 3D disc recorders, digital video recorders (DVRs, sometimes called personal video recorders, or PVRs), solid-state devices, quantum storage devices, gaming consoles, gaming media, or any other suitable fixed or removable storage devices, and/or any combination of the same.
Control circuitry 700 comprises media playback circuitry 706. Media playback circuitry 706 receives content 102 from a content provider. The content provider may be an OTT/Internet service (e.g., Netflix), a traditional television network (e.g., NBC), a traditional media company (e.g., NBCUniversal), or any other suitable content provider. Content 102 may be received via a physical RF channel over a cable television connection or terrestrial broadcast, or may be received over an Internet connection from an over-the-top (OTT) service using a wired connection (e.g., Ethernet) or wireless connection (e.g., 802.11a/b/g/n (WiFi), WiMax, GSM, UMTS, CDMA, TDMA, 3G, 4G, 4G LTE, or any other suitable type of wireless data transmission). Media playback circuitry 706 processes content 102 and transmits 708 audio and video data of content 102 to audio processing circuitry 710 and video processing circuitry 712, respectively. Media playback circuitry 706 also transmits 714a the audio data of content 102 to audio output circuitry 716 and simultaneously transmits 714b the video data of content 102 to video output circuitry 718.
Audio processing circuitry 710 analyzes audio characteristics of audio data of content 102 to identify unique audio signatures using any suitable audio analysis technique. For example, audio processing circuitry 710 may use frequency analysis to determine a base frequency and unique harmonic pattern of a particular voice, phoneme analysis to determine an accent of a particular voice, etc. Audio processing circuitry 710 may also identify non-vocal audio such as music, sound effects, and the like using similar frequency analysis techniques or any other suitable method of audio analysis. Once a particular set of audio characteristics has been identified, audio processing circuitry 710 stores the audio characteristics in, for example, storage 702, along with a timestamp corresponding to a playback position of content 102 at which the audio characteristics were first identified. Audio processing circuitry 710 continues to analyze audio data of content 102 and compares the determined audio characteristics of the audio data to the stored audio characteristics. Upon detecting a significant difference in audio characteristics, audio processing circuitry 710 determines that the source of the audio has changed. For example, the base frequency of a voice may change by more than 20 Hz. Audio processing circuitry 710 generates an audio signature from the stored audio characteristics and timestamp and stores 720 the audio signature in a database in storage 702. Audio processing circuitry 710 then stores the new audio characteristics and a new timestamp in storage 702 and continues analyzing the audio data as described above.
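One simple way to realize the base-frequency comparison described above is autocorrelation pitch estimation; the sketch below is illustrative only, and the 20 Hz threshold mirrors the example in the text rather than prescribing an implementation.

```python
# Hypothetical sketch: estimate the fundamental frequency of an audio
# window and flag a source change when it shifts by more than 20 Hz.
import numpy as np

def base_frequency(samples: np.ndarray, sample_rate: int) -> float:
    """Estimate fundamental frequency via autocorrelation; the window
    should span several pitch periods (e.g., 50-100 ms of audio)."""
    samples = samples - samples.mean()
    corr = np.correlate(samples, samples, mode="full")[len(samples) - 1:]
    low_lag = int(sample_rate / 400)    # ~400 Hz upper bound for voice
    high_lag = int(sample_rate / 60)    # ~60 Hz lower bound for voice
    lag = low_lag + int(np.argmax(corr[low_lag:high_lag]))
    return sample_rate / lag

def source_changed(prev_window, cur_window, sample_rate,
                   threshold_hz=20.0):
    """Apply the 20 Hz change heuristic from the text."""
    return abs(base_frequency(prev_window, sample_rate)
               - base_frequency(cur_window, sample_rate)) > threshold_hz
```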
In some embodiments, media device 100 processes video data of content 102 in conjunction with the audio data to identify a subject corresponding to an audio signature. In some embodiments, media device 100 processes video data to identify action signatures based on motion of subjects in content 102. Video processing circuitry 712 analyzes video data of content 102 using edge detection, facial recognition, or any other suitable video or image processing technique to identify subjects in a video frame. Video processing circuitry 712 may capture and process a single frame of video data or may process more than one frame of video data. For example, video processing circuitry 712 may process a single frame to identify a person depicted in the frame, or a set of consecutive frames to determine whether a person depicted in the set of frames is the subject of an audio signature by analyzing the movement of the mouth of the person. If a depicted subject is identified, the audio signature is stored 720 in storage 702 in association with an identifier 722 of the source. In some embodiments, video processing circuitry 712 also captures as an image a portion of at least one frame in which the subject is depicted and stores the image in association with the audio signature, or in association with an identifier of the subject. Alternatively, video processing circuitry 712 stores an identifier of a particular frame in which the subject is depicted and a set of coordinates identifying a portion of the frame that depicts the subject.
During playback of content 102, input circuitry 704 receives command 724 from a user input device to repeat a portion of content. Upon receiving command 724, input circuitry 704 transmits an instruction 726 to media playback circuitry 706 to pause playback of the content 102 and an instruction 728 to storage 702 to retrieve audio signatures and/or action signatures within a threshold amount of time prior to the time at which the command 724 was received. For example, input circuitry 704 may instruct storage 702 to retrieve signatures with timestamps within the last thirty seconds prior to the timestamp at which the content 102 is paused. The retrieved signatures are transmitted 730 from storage 702 to control circuitry 700. Control circuitry 700, using video output circuitry 718, generates for display a number of icons, each icon representing a subject of one of the retrieved audio signatures. The icons are then displayed 732 as an overlay over the paused content 102.
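As a minimal sketch of this retrieval step, assuming signatures are held in memory as dictionaries with a `start` timestamp (an illustrative layout, not a prescribed one):

```python
# Hypothetical sketch: fetch signatures whose timestamps fall within a
# threshold window before the paused playback position.

THRESHOLD_SECONDS = 30.0  # the thirty-second example from the text

def recent_signatures(signatures, paused_at, threshold=THRESHOLD_SECONDS):
    """Return signatures with start timestamps in the last `threshold`
    seconds before the paused playback position."""
    return [s for s in signatures
            if paused_at - threshold <= s["start"] <= paused_at]

sigs = [{"id": "S1", "start": 95.0}, {"id": "S2", "start": 126.5}]
print(recent_signatures(sigs, paused_at=130.0))  # only S2 qualifies
```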
Upon receiving selection 734 of an icon, input circuitry 704 transmits an instruction 736 to media playback circuitry 706 to replay the portion of content 102 corresponding to the signature represented by the selected icon. Media playback circuitry 706 retrieves the media data, transmits 738a the audio of the retrieved media data to audio output circuitry 716 for output 740, and transmits 738b the video of the retrieved media data to video output circuitry 718 for output 732.
FIG. 8 is a flowchart representing an illustrative process 800 for resolving a query to repeat a portion of content in accordance with some embodiments of the disclosure. Process 800 may be implemented on control circuitry 700. In addition, one or more actions of process 800 may be incorporated into or combined with one or more actions of any other process or embodiment described herein.
At 802, control circuitry 700, using audio processing circuitry 710 and/or video processing circuitry 712, identifies, during playback of content 102, a plurality of signatures. This may be accomplished using methods described below in connection with FIG. 12. Audio processing circuitry 710 and/or video processing circuitry 712 may identify one signature in content 102 at a time or may identify multiple signatures simultaneously.
At 804, control circuitry 700 initializes a counter variable N with a value of 0. For each identified signature, control circuitry 700 determines, at 806, whether a previously identified subject is the subject of the current signature. For example, control circuitry 700 compares the audio characteristics of previous audio signatures to those of the current audio signature. As another example, control circuitry 700 compares objects and characters displayed in a frame of video to parameters of previously identified subjects. If the audio characteristics of the current audio signature do not match those of any of the previous audio signatures, or if no object or character currently displayed matches a previously identified subject, then, at 808, control circuitry 700 assigns a new identifier as the subject of the signature. If the audio characteristics of the current audio signature do match those of a previous audio signature, or if an object or character currently displayed matches a previously identified subject, then control circuitry 700 determines that the subject of the current signature is the same as the subject of the previous signature having matching audio characteristics or image parameters and, at 810, assigns the subject identifier of the previous signature to the current signature. Control circuitry 700 then stores the identifier of the current signature and a start time corresponding to the current signature in storage 702. At 812, control circuitry 700 determines whether all identified signatures have yet been processed by comparing the value of N to the number of signatures identified. If there are more signatures to process, then, at 814, control circuitry 700 increments the value of N by one and processing returns to step 806.
At 816, control circuitry 700, using input circuitry 704, receives an input command. The input command may be a command to pause playback of the content or a command to repeat a portion of the content. For example, input circuitry 704 may include a microphone for receiving a voice command, an infrared receiver for receiving a command from a remote control, a WiFi or Bluetooth module for receiving commands from a device such as a tablet or smartphone, or any other suitable circuitry for receiving input commands.
At 818, control circuitry 700, using video output circuitry 718, generates for display a plurality of icons (e.g., 112a-112d), each icon representing a subject associated with the retrieved signatures. At 820, control circuitry 700, using input circuitry 704, receives a selection of an icon. In response to receiving the selection, control circuitry 700 retrieves the timestamp of the signature associated with the selected icon. At 822, control circuitry 700, using media playback circuitry 706, retrieves the portion of content 102 corresponding to the timestamp of the signature and plays back the portion of the content using audio output circuitry 716 and video output circuitry 718. If multiple audio signatures coincide, control circuitry 700 may, using audio processing circuitry 710, isolate audio data from the subject of an audio signature as described below in connection with FIG. 14.
The actions or descriptions of FIG. 8 may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation to FIG. 8 may be done in suitable alternative orders or in parallel to further the purposes of this disclosure.
FIG. 9 is a flowchart representing an illustrative process 900 for assigning a subject signature to an audio signature or action signature in accordance with some embodiments of the disclosure. Process 900 may be implemented on control circuitry 700. In addition, one or more actions of process 900 may be incorporated into or combined with one or more actions of any other process or embodiment disclosed herein.
At 902, control circuitry 700 identifies, during playback of content 102, at least one subject signature. For example, control circuitry 700, using video processing circuitry 712, analyzes at least one video frame of content 102. Using edge detection, facial detection, or any other suitable image processing or video processing techniques, video processing circuitry 712 identifies a subject signature of at least one subject displayed in the video frame. Control circuitry 700 stores, in storage 702, a set of parameters corresponding to the visual characteristics of each displayed subject.
At 904, control circuitry 700 initializes a counter variable N with a value of 0. For each identified subject signature, control circuitry 700 determines, at 906, whether a previously identified subject is the subject of the current signature. For example, control circuitry 700 compares the parameters of previous subject signatures to those of the current subject signature. If the parameters of the current subject signature do not match those of any of the previous subject signatures, or if no subject currently displayed matches a previously identified subject, then, at 908, control circuitry 700 assigns a new identifier to the subject signature. If the parameters of the current subject signature do match those of a previous subject signature, or if a subject currently displayed matches a previously identified subject, then control circuitry 700 determines that the subject of the current signature is the same as the subject of the previous signature having matching parameters and, at 910, assigns the subject identifier of the previous signature to the current signature. Control circuitry 700 then stores the subject identifier and a start time corresponding to the subject signature in storage 702.
At 912, control circuitry 700 determines whether any audio signature or action signature has the same timestamp as the current subject signature. If no audio signature or action signature has the same timestamp as the current subject signature, then processing proceeds to step 916. If an audio signature or action signature has the same timestamp as the current subject signature, then, at 914, control circuitry 700 assigns the current subject signature to the audio signature or action signature having the same timestamp. At 916, control circuitry 700 determines whether all identified signatures have yet been processed by comparing the value of N to the number of signatures identified. If there are more signatures to process, control circuitry 700 increments the value of N by one and processing returns to step 906.
The actions or descriptions of FIG. 9 may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation to FIG. 9 may be done in suitable alternative orders or in parallel to further the purposes of this disclosure.
FIG. 10 is a flowchart representing an illustrative process 1000 for playing back a portion of audio in accordance with some embodiments of the disclosure. Process 1000 may be implemented on control circuitry 700. In addition, one or more actions of process 1000 may be incorporated into or combined with one or more actions of any other process or embodiment described herein.
At 1002, control circuitry 700 retrieves an identifier of the subject represented by an icon. For example, when generating an icon for display, control circuitry 700 may include metadata for the icon including the subject identifier. In another example, control circuitry 700 generates a link or other computer code that includes a reference or pointer to the subject identifier.
At 1004, control circuitry 700 accesses a database or other data structure in storage 702 in which signatures are stored in association with identifiers of subjects. At 1006, control circuitry 700 retrieves, from the database or data structure, a timestamp of a signature associated with the retrieved subject identifier. At 1008, control circuitry 700, using media playback circuitry 706, plays back the portion of the audio of content 102 beginning at the retrieved timestamp.
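The lookup at steps 1004-1006 could be realized, for example, with a small relational table; the schema below is an assumption invented purely for illustration.

```python
# Hypothetical sketch of the signature store as a SQLite table keyed by
# subject identifier, returning the most recent timestamp for playback.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE signatures (sig_id TEXT, subject_id TEXT, start_ts REAL)"
)
conn.execute("INSERT INTO signatures VALUES ('S1', 'character1', 12.0)")
conn.execute("INSERT INTO signatures VALUES ('S5', 'character1', 47.5)")

def timestamp_for_subject(subject_id: str):
    """Fetch the most recent stored timestamp for the given subject."""
    row = conn.execute(
        "SELECT start_ts FROM signatures WHERE subject_id = ? "
        "ORDER BY start_ts DESC LIMIT 1",
        (subject_id,),
    ).fetchone()
    return row[0] if row else None

print(timestamp_for_subject("character1"))  # 47.5
```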
The actions or descriptions of FIG. 10 may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation to FIG. 10 may be done in suitable alternative orders or in parallel to further the purposes of this disclosure.
FIG. 11 is a flowchart representing an illustrative process 1100 for capturing an image of an audio source from video data in accordance with some embodiments of the disclosure. Process 1100 may be implemented on control circuitry 700. In addition, one or more actions of process 1100 may be incorporated into or combined with one or more actions of any other process or embodiment described herein.
At 1102, control circuitry 700, using video processing circuitry 712, processes at least one frame of video data of content 102 corresponding to a signature and, at 1104, identifies a subject displayed in the at least one frame. For example, video processing circuitry 712 may use edge detection, facial recognition, object recognition, or any other suitable video processing or image processing technique to identify objects or characters displayed in the frame. If more than one frame is processed, video processing circuitry 712 may compare the frames to determine if, for example, the mouth of a character is moving during playback of the signature.
At 1106, video processing circuitry 712 captures a portion of the video frame in which the identified subject is displayed. Video processing circuitry 712 may capture image data from the frame and store the image in storage 702 in association with the signature. Alternatively, video processing circuitry 712 may capture coordinates bounding an area of the frame in which the identified subject is displayed and store in storage 702 the coordinates, as well as an identifier of the frame, in association with the signature.
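Assuming the frame is available as a height-by-width-by-channel array and a bounding box came from the detection step, the capture itself reduces to a crop; the following sketch is illustrative only.

```python
# Hypothetical sketch: capture the portion of a frame that depicts the
# subject, for use as the icon image.
import numpy as np

def crop_subject(frame: np.ndarray, top: int, left: int,
                 bottom: int, right: int) -> np.ndarray:
    """Return a copy of the rectangular region bounding the subject."""
    return frame[top:bottom, left:right].copy()

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in frame
icon_image = crop_subject(frame, top=100, left=400, bottom=300, right=560)
print(icon_image.shape)  # (200, 160, 3)
```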
It is contemplated that the actions or descriptions of FIG. 11 may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation to FIG. 11 may be done in suitable alternative orders or in parallel to further the purposes of this disclosure.
FIG. 12 is a flowchart representing an illustrative process 1200 for identifying audio signatures in accordance with some embodiments of the disclosure. Process 1200 may be implemented on control circuitry 700. In addition, one or more actions of process 1200 may be incorporated into or combined with one or more actions of any other process or embodiment described herein.
At 1202, control circuitry 700, using audio processing circuitry 710, begins analyzing audio data of content 102 at a first timestamp. At 1204, control circuitry 700 initializes a variable Tfirst and sets as its value the first timestamp. At 1206, audio processing circuitry 710 identifies audio characteristics of the audio data that are unique to a first source. For example, audio processing circuitry 710 may use frequency analysis, rhythm analysis, harmonics, tempo, or any other audio characteristics to uniquely identify audio as being from a particular source. At 1208, audio processing circuitry 710 continues analyzing the audio data.
At 1210, control circuitry 700 initializes a variable Tcurrent and sets its value as the timestamp corresponding to the audio data currently being analyzed. At 1212, audio processing circuitry 710 determines whether the audio characteristics at Tcurrent are different from the audio characteristics at Tfirst. For example, audio processing circuitry 710 may compare a set of audio characteristics at Tcurrent with those identified at Tfirst to identify whether the value of any characteristic has changed by at least a threshold amount, such as five percent. If no change is detected, processing returns to 1208, at which audio processing circuitry 710 continues analyzing the audio data.
If the audio characteristics at Tcurrent are determined to be different from the audio characteristics at Tfirst, then, at 1214, audio processing circuitry 710 identifies as an audio signature the portion of audio data from Tfirst to Tcurrent. Audio processing circuitry 710 stores the audio signature in storage 702 along with at least Tfirst. At 1216, control circuitry 700 sets the value of Tfirst to the value of Tcurrent, and processing returns to 1208.
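The Tfirst/Tcurrent loop of FIG. 12 can be summarized in a few lines; the sketch below assumes the audio arrives as a list of (timestamp, samples) windows and that a characteristics function and a threshold comparison like those described above are supplied by the caller.

```python
# Hypothetical sketch of the segmentation loop: close an audio
# signature whenever the audio characteristics change significantly.

def segment_audio(windows, characteristics, differs):
    """windows: list of (timestamp, samples) pairs in playback order.
    characteristics(samples) -> feature set; differs(a, b) -> bool
    applies the threshold test (e.g., a five-percent change)."""
    signatures = []
    t_first, first_samples = windows[0]
    reference = characteristics(first_samples)
    for t_current, samples in windows[1:]:
        current = characteristics(samples)
        if differs(reference, current):
            # Change detected: the span T_first..T_current is one
            # audio signature; start tracking the new source.
            signatures.append((t_first, t_current))
            t_first, reference = t_current, current
    return signatures
```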
It is contemplated that the actions or descriptions of FIG. 12 may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation to FIG. 12 may be done in suitable alternative orders or in parallel to further the purposes of this disclosure.
FIG. 13 is a flowchart representing an illustrative process 1300 for assigning audio signatures to a subject in accordance with some embodiments of the disclosure. Process 1300 may be implemented on control circuitry 700. In addition, one or more actions of process 1300 may be incorporated into or combined with one or more actions of any other process or embodiment described herein.
At 1302, control circuitry 700, using video processing circuitry 712, processes at least one video frame from a portion of content 102 between Tfirst and Tcurrent. At 1304, video processing circuitry 712 determines whether a subject is displayed in the at least one frame. Video processing circuitry 712 may use edge detection, facial recognition, object recognition, or any other suitable video processing or image processing technique.
If a subject is displayed in the at least one frame, then, at 1306, control circuitry 700 determines whether the displayed subject is the source of an audio signature. For example, control circuitry 700 may compare, using audio processing circuitry 710 and video processing circuitry 712, the audio signature with the at least one frame of video data. Audio processing circuitry 710 may identify a type of audio signature based on audio characteristics. For example, audio processing circuitry 710 may identify a low-frequency speech pattern as a male voice. Control circuitry 700 may then use video processing circuitry 712 to identify a male figure in the at least one video frame. Video processing circuitry 712 may identify a character whose mouth is moving during the audio signature.
If the displayed subject is the source of the audio signature, then, at 1308, control circuitry 700 assigns the audio signature to the displayed subject. For example, control circuitry 700 may update the database or data structure in storage 702 to include an identifier of the subject in association with the audio signature.
If no subject is displayed in the at least one frame, or if a displayed subject is not the source of the audio signature, then, at 1310, control circuitry 700 assigns the audio signature to another subject. Control circuitry 700 may, using audio processing circuitry 710, compare the audio characteristics of the audio signature with those of other audio signatures having known subjects. If a match is detected, control circuitry 700 may assign as the subject of the audio signature the subject of the audio signature having matching audio characteristics. If no matches are detected, control circuitry 700 may assign a new or temporary subject identifier to the audio signature.
It is contemplated that the actions or descriptions of FIG. 13 may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation to FIG. 13 may be done in suitable alternative orders or in parallel to further the purposes of this disclosure.
FIG. 14 is a flowchart representing an illustrative process 1400 for playing back one of a plurality of portions of audio in accordance with some embodiments of the disclosure. Process 1400 may be implemented on control circuitry 700. In addition, one or more actions of process 1400 may be incorporated into or combined with one or more actions of any other process or embodiment described herein.
At 1402, control circuitry 700 determines whether more than one audio signature has an end time within a threshold time of the current playback timestamp. For example, in a portion of content 102 in which several characters have a conversation, several audio signatures may end within thirty seconds of the current playback position and will be returned in response to a query for audio signatures present within the threshold period.
At 1404, control circuitry 700 determines whether any of the audio signatures ending within the threshold period temporally overlap. For example, two characters may speak simultaneously, resulting in at least one audio signature ending at the same time as, or between the start time and end time of, another audio signature. If no audio signatures temporally overlap, then, at 1406, control circuitry 700 plays back a portion of audio data of content 102 corresponding to the most recent audio signature. However, if any audio signatures temporally overlap, then, at 1408, control circuitry 700, using audio processing circuitry 710, isolates the audio data corresponding to each audio signature. Audio processing circuitry 710 may use audio characteristics of each audio signature to isolate frequencies and harmonics unique to each signature. Audio processing circuitry 710 may suppress frequencies associated with background noise. Audio processing circuitry 710 may extract or copy audio data representing each individual audio signature and generate individual audio samples corresponding to each audio signature.
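As a crude illustration of isolating one subject's audio by its frequency characteristics, the sketch below band-passes around an assumed per-subject frequency range; a deployed system would more likely use proper source separation, so this is a sketch of the idea only.

```python
# Hypothetical sketch: keep only the frequency band attributed to one
# subject; audio outside the band (other subjects, background noise)
# is suppressed.
import numpy as np
from scipy.signal import butter, sosfilt

def isolate_band(samples, sample_rate, low_hz, high_hz):
    """Band-pass filter around one subject's frequency range."""
    sos = butter(4, [low_hz, high_hz], btype="bandpass",
                 fs=sample_rate, output="sos")
    return sosfilt(sos, samples)

fs = 16_000
t = np.arange(fs) / fs
# Two simultaneous "subjects" at 120 Hz and 300 Hz, mixed together.
mixed = np.sin(2 * np.pi * 120 * t) + np.sin(2 * np.pi * 300 * t)
subject_a = isolate_band(mixed, fs, 80, 180)   # keeps the 120 Hz subject
```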
At 1410, control circuitry 700, using video output circuitry 718, generates for display a plurality of icons, each icon representing a subject corresponding to one of the audio signatures. At 1412, control circuitry 700, using input circuitry 704, receives a selection of an icon and, at 1414, plays back, using media playback circuitry 706, a portion of at least the audio of content 102 corresponding to the audio signature associated with the selected icon. This may be an extracted audio sample as described above.
It is contemplated that the actions or descriptions of FIG. 14 may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation to FIG. 14 may be done in suitable alternative orders or in parallel to further the purposes of this disclosure.
FIG. 15 is a flowchart representing an illustrative process 1500 for identifying action signatures in accordance with some embodiments of the disclosure. Process 1500 may be implemented on control circuitry 700. In addition, one or more actions of process 1500 may be incorporated into or combined with one or more actions of any other process or embodiment described herein.
At 1502, control circuitry 700, using video processing circuitry 712, begins analyzing video data of content 102 at a first timestamp. At 1504, control circuitry 700 initializes a variable Tfirst and sets as its value the first timestamp. At 1506, video processing circuitry 712 determines whether a subject displayed in the video data of content 102 exhibits a threshold level of motion. For example, video processing circuitry 712 may compare the position of each subject in a frame at the first timestamp with that of a previous frame to determine a distance traveled between the two frames. If no subject exhibits a threshold level of motion, processing returns to step 1502.
If a subject does exhibit a threshold level of motion, then, at 1508, video processing circuitry 712 continues analyzing the video data. At 1510, control circuitry 700 initializes a variable Tcurrent and sets its value as the timestamp corresponding to the video data currently being analyzed. At 1512, video processing circuitry 712 determines whether the motion of the subject at Tcurrent is still at or above the threshold level of motion. If so, processing returns to 1508, at which video processing circuitry 712 continues analyzing the video data.
If the motion of the subject at Tcurrent is determined to be below the threshold level of motion, then, at 1514, video processing circuitry 712 identifies as an action signature the portion of video data from Tfirst to Tcurrent. Video processing circuitry 712 stores the action signature in storage 702 along with at least Tfirst. At 1516, control circuitry 700 sets the value of Tfirst to the value of Tcurrent, and processing returns to 1506.
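The rising/falling-threshold loop of FIG. 15 reduces to a small state machine; the sketch below assumes per-timestamp motion levels have already been computed, which is an illustrative simplification.

```python
# Hypothetical sketch: open an action signature when motion rises above
# the threshold and close it when motion falls back below.

def segment_actions(motion_by_time, threshold):
    """motion_by_time: list of (timestamp, motion_level) pairs.
    Returns (start, end) spans where motion stayed above threshold."""
    signatures = []
    t_first = None
    for timestamp, level in motion_by_time:
        if level >= threshold and t_first is None:
            t_first = timestamp           # motion rises above threshold
        elif level < threshold and t_first is not None:
            signatures.append((t_first, timestamp))
            t_first = None                # signature closed
    return signatures

samples = [(0, 1), (1, 6), (2, 7), (3, 2), (4, 8), (5, 1)]
print(segment_actions(samples, 5))  # [(1, 3), (4, 5)]
```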
It is contemplated that the actions or descriptions of FIG. 15 may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation to FIG. 15 may be done in suitable alternative orders or in parallel to further the purposes of this disclosure.
FIG. 16 is a flowchart representing an illustrative process 1600 for identifying a subject displayed in content in accordance with some embodiments of the disclosure. Process 1600 may be implemented on control circuitry 700. In addition, one or more actions of process 1600 may be incorporated into or combined with one or more actions of any other process or embodiment described herein.
At 1602, control circuitry 700, using video processing circuitry 712, detects a face in a frame of the video of content 102. Video processing circuitry 712 may use any suitable video processing or image processing technique to identify faces displayed in video data of content 102. Video processing circuitry 712 may identify a set of image parameters that uniquely identify the detected face, such as inter-pupil distance (i.e., the distance between the left and right pupils of the face's eyes), nose size or position, ear size or position, hair color, eye color, overall face shape, etc. Video processing circuitry 712 may also employ a Haar algorithm or a local binary patterns algorithm to identify faces. At 1604, video processing circuitry 712 assigns an identifier to the detected face. At 1606, video processing circuitry 712 stores, in storage 702, the set of parameters corresponding to the face in association with the assigned identifier.
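A stored face-parameter set might look like the following; the record structure and field choices are assumptions for illustration, mirroring the parameters named above.

```python
# Hypothetical sketch of the stored face parameters and the identifier
# assignment of steps 1604-1606.
from dataclasses import dataclass

@dataclass
class FaceParameters:
    face_id: str
    inter_pupil_distance: float  # pixels between left and right pupils
    face_shape: str              # e.g., "oval" or "round"
    hair_color: str
    eye_color: str

known_faces: dict = {}

def store_face(params: FaceParameters) -> None:
    """Store the parameter set under its assigned identifier."""
    known_faces[params.face_id] = params

store_face(FaceParameters("face-1", 64.0, "oval", "brown", "green"))
```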
It is contemplated that the actions or descriptions of FIG. 16 may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation to FIG. 16 may be done in suitable alternative orders or in parallel to further the purposes of this disclosure.
FIG. 17 is a flowchart representing an illustrative process 1700 for detecting a threshold level of motion in accordance with some embodiments of the disclosure. Process 1700 may be implemented on control circuitry 700. In addition, one or more actions of process 1700 may be incorporated into or combined with one or more actions of any other process or embodiment described herein.
At 1702, control circuitry 700, using video processing circuitry 712, analyzes a first video frame of content 102. Video processing circuitry 712 identifies at least one subject displayed in the first video frame using methods described above in connection with FIG. 9. At 1704, video processing circuitry 712 identifies a position of a subject in the first video frame. For example, video processing circuitry 712 identifies the x and y coordinates of a corner of the subject. If the subject is of an irregular shape, such as a character's face, video processing circuitry 712 may first define a bounding box having a top-left corner at the x-coordinate of the left-most pixel of the subject and the y-coordinate of the top-most pixel of the subject, and a bottom-right corner at the x-coordinate of the right-most pixel of the subject and the y-coordinate of the bottom-most pixel of the subject. Video processing circuitry 712 may then identify a position of the bounding box.
At 1706, video processing circuitry 712 analyzes the next frame of video of content 102 and, at 1708, identifies the position of the subject in the next frame of video using the methods described above. At 1710, video processing circuitry 712 determines whether the subject has moved a threshold distance between the two frames analyzed. For example, video processing circuitry 712 may calculate the difference between the position of the object or of the bounding box in each of the frames and determine whether the object moved more than a threshold number of pixels. Video processing circuitry 712 may also account for motion toward or away from the viewer by comparing the apparent size of the object between the two frames and determining whether the size has increased or decreased by a threshold amount. Video processing circuitry 712 may use both of these calculations to determine three-dimensional motion of the subject. Video processing circuitry 712 may calculate a vector in a three-dimensional space along which the subject has moved and determine the distance traveled along the vector. If the subject has moved a threshold distance, then, at 1712, video processing circuitry 712 identifies that a threshold level of motion has been detected.
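Combining the in-plane displacement with the apparent-size change gives the three-dimensional test described above; in the sketch below, the square root of the bounding-box area serves as a depth proxy, which is an assumption of this illustration.

```python
# Hypothetical sketch of the three-dimensional motion threshold test.
import math

def moved_past_threshold(box_a, box_b, threshold_px, depth_scale=1.0):
    """Each box is (left, top, right, bottom) in pixels for the same
    subject in two consecutive frames."""
    ax, ay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    bx, by = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    # Apparent-size change stands in for motion toward or away from the
    # viewer; depth_scale converts it into pixel-like units.
    size_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    size_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    dz = depth_scale * (math.sqrt(size_b) - math.sqrt(size_a))
    # Distance traveled along the three-dimensional motion vector.
    distance = math.sqrt((bx - ax) ** 2 + (by - ay) ** 2 + dz ** 2)
    return distance > threshold_px

print(moved_past_threshold((0, 0, 10, 10), (30, 40, 40, 50), 25.0))  # True
```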
It is contemplated that the actions or descriptions of FIG. 17 may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation to FIG. 17 may be done in suitable alternative orders or in parallel to further the purposes of this disclosure.
FIG. 18 is a flowchart representing an illustrative process 1800 for repeating a portion of content in slow motion or in a loop in accordance with some embodiments of the disclosure. Process 1800 may be implemented on control circuitry 700. In addition, one or more actions of process 1800 may be incorporated into or combined with one or more actions of any other process or embodiment described herein.
At 1802, control circuitry 700 determines whether the portion of the content 102 corresponding to a selected action signature is shorter than a maximum threshold, such as thirty seconds. If the portion of the content 102 is shorter than the maximum threshold, then, at 1804, control circuitry 700 determines whether the portion of the content 102 is also shorter than a minimum threshold, such as five seconds. If the portion of the content 102 corresponding to the selected action signature is shorter than the minimum threshold, then, at 1806, control circuitry 700, using media playback circuitry 706, repeats the portion of the content 102 in slow motion. If the length of the portion of the content 102 corresponding to the selected action signature is between the minimum threshold and the maximum threshold, then, at 1808, control circuitry 700, using media playback circuitry 706, repeats the portion of the content 102 in a loop. Media playback circuitry 706 may continue looping the portion of content 102 until another input command is received. Alternatively or additionally, media playback circuitry 706 may continue looping the portion of content 102 for a predetermined number of loops (e.g., five loops) or a predetermined amount of time (e.g., thirty seconds).
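The branching of FIG. 18 can be summarized as follows; the five- and thirty-second values are the example thresholds from the text, and the player object is a placeholder interface, not an actual API.

```python
# Hypothetical sketch of the slow-motion / loop decision of FIG. 18.

MIN_SECONDS = 5.0   # example minimum threshold from the text
MAX_SECONDS = 30.0  # example maximum threshold from the text

def repeat_action_portion(start, end, player):
    """Repeat the portion [start, end] per the FIG. 18 branching.
    `player` is a placeholder with play() and loop() methods."""
    length = end - start
    if length < MIN_SECONDS:
        player.play(start, end, rate=0.5)     # repeat in slow motion
    elif length < MAX_SECONDS:
        # Loop until another input command, a fixed number of loops,
        # or a fixed amount of time (e.g., five loops or 30 seconds).
        player.loop(start, end, max_loops=5)
    else:
        player.play(start, end, rate=1.0)     # normal-speed repeat
```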
It is contemplated that the actions or descriptions of FIG. 18 may be used with any other embodiment of this disclosure. In addition, the actions and descriptions described in relation to FIG. 18 may be done in suitable alternative orders or in parallel to further the purposes of this disclosure.
The processes described above are intended to be illustrative and not limiting. One skilled in the art would appreciate that the steps of the processes discussed herein may be omitted, modified, combined, and/or rearranged, and any additional steps may be performed without departing from the scope of the invention. More generally, the above disclosure is meant to be exemplary and not limiting. Only the claims that follow are meant to set bounds as to what the present invention includes. Furthermore, it should be noted that the features and limitations described in any one embodiment may be applied to any other embodiment herein, and flowcharts or examples relating to one embodiment may be combined with any other embodiment in a suitable manner, done in different orders, or done in parallel. In addition, the systems and methods described herein may be performed in real time. It should also be noted that the systems and/or methods described above may be applied to, or used in accordance with, other systems and/or methods.