This patent claims priority to United States Provisional Patent Application No. 61/174,787, entitled "Methods and Apparatus To Provide Secondary Content in Association With Primary Broadcast Media Content," filed on May 1, 2009, the entire disclosure of which is incorporated herein by reference.
Detailed Description
Exemplary methods, apparatus, and articles of manufacture to provide secondary content associated with primary broadcast media content are disclosed. A disclosed example method includes: receiving an audio signal output by a first media presentation device, the audio signal associated with first media content; decoding the audio signal to extract a code from the audio signal, the code identifying at least one of the first media content or a broadcaster of the first media content; setting a clock using a timestamp associated with the code; obtaining second media content based on the code and the timestamp, the second media content including a plurality of pieces of secondary content for respective ones of a plurality of timestamps; and presenting, at a second media presentation device, a first one of the pieces of secondary content when its respective timestamp substantially corresponds to a time value obtained from the clock.
Another exemplary method comprises: receiving an audio signal output by a first media presentation device, the audio signal associated with first media content; decoding the audio signal to extract a code from the audio signal, the code representing at least one of the first media content or a broadcaster of the first media content; and transmitting a wireless signal to a second media presentation device, the signal including the extracted code, the signal for triggering the second media presentation device to obtain second media content based on the code and present the second media content at the second media presentation device.
Yet another exemplary method comprises: receiving audio output by a first media presentation device; obtaining at least one of a Nielsen code or an Arbitron code from the audio, the obtained code representing at least one of first media content or a broadcaster of the first media content; obtaining second media content based on the obtained code; and presenting the second media content on a second media presentation device different from the first media presentation device.
An exemplary disclosed apparatus includes: an audio interface that receives an audio signal output by a first media presentation device, the audio signal associated with first media content; a decoder that decodes the audio signal to extract a code from the audio signal, the code representing at least one of the first media content or a broadcaster of the first media content, and that obtains a timestamp associated with the code; a secondary content module to obtain second media content based on the code and the timestamp, the second media content including a plurality of pieces of secondary content for respective ones of a plurality of timestamps; and a user interface module that presents a first one of the pieces of secondary content at a second media presentation device when the respective timestamp of that piece substantially corresponds to a time value obtained from a clock set using the timestamp.
Another exemplary apparatus includes: an audio interface that receives an audio signal output by a first media presentation device, the audio signal associated with first media content; a decoder that decodes the audio signal to extract a code associated with at least one of the first media content or a broadcaster of the first media content; and a wireless interface to transmit a wireless signal to a second media presentation device, the signal including the extracted code, the signal to trigger the second media presentation device to obtain second media content based on the code and present the second media content at the second media presentation device.
Yet another exemplary apparatus includes: an audio input interface that receives audio output by a first media presentation device; a decoder that obtains at least one of a Nielsen code or an Arbitron code from the audio, the obtained code corresponding to at least one of first media content or a broadcaster of the first media content; a secondary content module that obtains second media content based on the obtained code; and a user interface module that presents the second media content on a second media presentation device different from the first media presentation device.
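For illustration only, the timestamp-synchronized presentation described in the preceding examples might be sketched as follows. This is a minimal Python sketch, not the claimed implementation; the item structure, the 0.5-second tolerance, and the `present` callback are hypothetical stand-ins for the secondary content module and user interface module.

```python
import time

def run_synchronized_presentation(timestamp, secondary_items, present):
    """Present each piece of secondary content when the local clock, set
    from the timestamp decoded out of the audio, reaches that piece's own
    timestamp (the "substantially corresponds" test from the examples)."""
    clock_offset = time.time() - timestamp          # set the clock
    pending = sorted(secondary_items, key=lambda item: item["timestamp"])
    while pending:
        media_time = time.time() - clock_offset     # current media position
        item = pending[0]
        if abs(media_time - item["timestamp"]) < 0.5:
            present(item["content"])
            pending.pop(0)
        elif media_time > item["timestamp"] + 0.5:
            pending.pop(0)                          # too late; skip stale item
        else:
            time.sleep(0.1)

# Hypothetical usage: a code decoded at media time 125.0 seconds, with two
# pieces of secondary content scheduled later in the same program.
items = [{"timestamp": 130.0, "content": "trivia card"},
         {"timestamp": 190.0, "content": "product link"}]
# run_synchronized_presentation(125.0, items, present=print)
```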
The following description makes reference to audio encoding and decoding, also referred to as audio watermarking and watermark detection, respectively. It should be noted that, in this context, audio may be any type of signal having frequencies that fall within the typical human audible spectrum. For example, the audio may be voice, music, an audio portion of an audio and/or video program or work (e.g., a television (TV) program, a movie, an Internet video, a radio program, a commercial, etc.), noise, or any other sound.
Generally, encoding audio refers to inserting one or more codes into the audio. In some examples, the code is psychoacoustically masked such that a human listener of the audio does not hear the code, although there may be some specific situations in which certain human listeners are able to hear the codes. Such codes may also be referred to as watermarks. The codes embedded in the audio may be of any suitable length, and any suitable technique may be used to map information (e.g., channel identifiers, station identifiers, program identifiers, timestamps, broadcast identifiers, etc.) to the codes. Further, the code may be converted into symbols that are represented by signals having selected frequencies embedded in the audio. The code may be converted into symbols using any suitable encoding and/or error correction technique. A Nielsen code is any code embedded in any media content by and/or in association with The Nielsen Company (US), LLC.
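As a toy illustration of mapping a code to symbols carried at selected frequencies, consider the following sketch. It is not the NAES, Arbitron, or any other deployed watermarking algorithm and performs no psychoacoustic masking; the sample rate, symbol duration, carrier frequencies, and amplitude are arbitrary assumptions.

```python
import numpy as np

FS = 48_000          # sample rate (Hz)
SYMBOL_SECONDS = 0.4 # duration of one embedded symbol
FREQS = [1200.0, 1400.0, 1600.0, 1800.0]  # one carrier per 2-bit symbol

def embed_code(audio, code, amplitude=0.005):
    """Embed an integer code into `audio` as a run of low-level tones,
    two bits per symbol. Real systems shape the tones psychoacoustically
    so listeners do not hear them; this sketch just keeps them quiet."""
    out = audio.copy()
    n = int(FS * SYMBOL_SECONDS)
    t = np.arange(n) / FS
    pos = 0
    while code and pos + n <= len(out):
        symbol = code & 0b11            # take two bits at a time
        code >>= 2
        out[pos:pos + n] += amplitude * np.sin(2 * np.pi * FREQS[symbol] * t)
        pos += n
    return out

def decode_symbol(segment):
    """Recover one symbol by finding the strongest carrier frequency."""
    spectrum = np.abs(np.fft.rfft(segment))
    bins = [int(f * len(segment) / FS) for f in FREQS]
    return int(np.argmax([spectrum[b] for b in bins]))

host = np.random.randn(FS * 2) * 0.1     # two seconds of stand-in "audio"
marked = embed_code(host, code=0b0110)
print(decode_symbol(marked[:int(FS * SYMBOL_SECONDS)]))  # -> 2 (0b10)
```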
Although the following examples are described with reference to broadcast audio/video media content including codes embedded and/or encoded into an audio portion, such examples are merely illustrative. For example, the code may additionally or alternatively be embedded and/or encoded into other types of media content, such as, but not limited to, video content, graphical content, images, games, surveys, and/or web pages. For example, the code may be hidden and/or placed in a non-visible portion of video, such as by inserting the code into a vertical blanking interval and/or a horizontal retrace interval. Moreover, the methods and apparatus disclosed herein may be used to detect codes embedded in any number and/or type of additional and/or alternative primary media content (e.g., radio broadcasts, audio announcements, etc.) and to trigger the display of secondary content associated with such primary media content. Furthermore, the primary media content need not be broadcast to trigger the presentation of secondary media content. For example, primary media content may be sold via any number and/or type of tangible media, such as Digital Versatile Disks (DVDs) and/or Compact Disks (CDs), that contain embedded codes that may trigger the presentation of secondary media content stored on the tangible media, on local media storage, and/or on media devices accessible via, for example, the Internet and/or a Local Area Network (LAN). Additionally, non-media-content data associated with the primary media content, such as a variable and/or identifier carried in a header of the stream transporting the primary media content, may be used to trigger the display of secondary media content associated with the primary media content. It should be understood that such header information is carried along with the data representing the primary media content and thus is present in the non-payload portion of the stream transporting the primary media content.
In examples described herein, prior to and/or during transmission and/or broadcast, the primary media content is encoded to include one or more codes representing a source of the primary media content, a broadcast time of the primary media content, a distribution channel of the primary media content, an identifier of the primary media content, a link (e.g., a URL, an ASCII reference to a URL, etc.), a particular portion of the primary media content, and/or any other information deemed relevant to an operator of the system. When the primary media content is presented on a primary media content presentation device (e.g., played via a television, radio, computing device, cell phone, handheld device, and/or any other suitable device), people in the presentation area are exposed not only to the primary media content but also, although they may not be aware of it, to the code(s) embedded in the primary media content. As described herein, in addition to a primary media device that presents broadcast media content (referred to herein as "primary media content" or "primary broadcast media content"), people may be provided with and/or may utilize secondary content presentation devices (e.g., handheld, mobile, and/or other portable devices, such as handheld computers, Personal Digital Assistants (PDAs), cellular phones, smart phones, laptop computers, notebook computers, iPods™, iPads™, and/or any other type of handheld, mobile, and/or portable user device capable of presenting media content to people). Some exemplary secondary content presentation devices include a microphone and a decoder and use free-field detection to detect codes embedded in the primary media content. Additionally or alternatively, the secondary content presentation device may obtain and/or receive a primary content identifier (e.g., a code, a signature, non-payload information, etc.) through other methods and/or interfaces, such as a network interface, a Bluetooth interface, etc. Based on the detected code, the secondary content presentation device obtains and presents secondary content related to the primary media content identified by the code. The secondary content may or may not be related to the primary media content and may itself include media content, user interfaces, advertisements, and/or applications. In some examples, the secondary content presentation device may be implemented by and/or within the primary presentation device.
Additionally, although the examples described herein utilize embedded audience measurement codes to identify primary media content, any number and/or type of additional and/or alternative methods may be used to identify primary media content. For example, one or more signatures and/or fingerprints may be computed from and/or based on the primary media content and compared to a database of signatures to identify the primary media content. An exemplary signature is computed via data compression applied to the audio portion of the primary media content. Exemplary methods, apparatus, and articles of manufacture for computing signatures and/or for identifying media using signatures are described in U.S. Patent Application No. 12/110,951, entitled "Methods and Apparatus for Generating Signatures," filed on April 28, 2008, and in U.S. Patent Application No. 12/034,489, entitled "Methods and Apparatus for Characterizing Media," filed on February 20, 2008. The entire contents of each of U.S. Patent Application No. 12/110,951 and U.S. Patent Application No. 12/034,489 are hereby incorporated by reference.
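By way of illustration, signature-based identification might look like the following sketch: a compact bit pattern is derived from band-energy changes in the audio and matched against a reference database by Hamming distance. This is a hypothetical stand-in, not the method of the applications cited above; the frame size, band count, and database layout are assumptions.

```python
import numpy as np

def audio_signature(samples, fs=48_000, frame_seconds=0.5, n_bands=16):
    """Compute a compact signature: for each frame, one bit per frequency
    band indicating whether that band's energy rose or fell relative to
    the previous frame."""
    frame = int(fs * frame_seconds)
    sig, prev = [], None
    for start in range(0, len(samples) - frame + 1, frame):
        spectrum = np.abs(np.fft.rfft(samples[start:start + frame]))
        energy = np.array([band.sum() for band in np.array_split(spectrum, n_bands)])
        if prev is not None:
            bits = (energy > prev).astype(int)
            sig.append(int("".join(map(str, bits)), 2))
        prev = energy
    return sig

def identify(query_sig, reference_db):
    """Identify content as the database entry with the smallest Hamming
    distance to the query signature."""
    def distance(a, b):
        return sum(bin(x ^ y).count("1") for x, y in zip(a, b))
    return min(reference_db, key=lambda name: distance(query_sig, reference_db[name]))

# Hypothetical usage:
# db = {"program-A": audio_signature(known_a), "program-B": audio_signature(known_b)}
# print(identify(audio_signature(captured_audio), db))
```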
Fig. 1 illustrates an exemplary primary media content and secondary content delivery system 100. To allow people to play, view and/or record primary media content, the exemplary system 100 of FIG. 1 includes any number and/or type of media servers (one of which is indicated at reference numeral 105) and any number and/or type of primary media content presentation devices (one of which is indicated at reference numeral 110). The example media server 105 of fig. 1 is a client device, consumer device, and/or user device and may be provided, implemented, and/or operated in, for example, a residence, apartment, commercial venue, school, government agency, medical facility, church, and the like. Exemplary media servers 105 include, but are not limited to, set-top boxes (STBs), Digital Video Recorders (DVRs), Video Cassette Recorders (VCRs), DVD players, CD players, Personal Computers (PCs), game consoles, radios, advertising devices, notification systems, and/or any type of media player. Exemplary primary media content presentation devices 110 include, but are not limited to, speakers, a sound system, a TV, and/or a monitor. In some examples, the example media server 105 of fig. 1 outputs audio and/or video signals via the primary media content presentation device 110. For example, the DVD player 105 may display a movie via a screen and speakers (not shown) of the TV 110 and/or speakers of the sound system 110. Exemplary primary media content includes, but is not limited to, television programs, movies, videos, commercials, advertisements, audio, video, games, web pages, advertisements, and/or surveys.
In the exemplary delivery system 100 of fig. 1, the exemplary media server 105 receives primary media content via any number and/or type of sources, such as: a satellite receiver and/or antenna 115; a Radio Frequency (RF) input signal 120 received via any number and/or type of cable television signals and/or terrestrial broadcasts; terrestrial and/or satellite radio broadcasts; any number and/or type of data communication networks, such as the Internet 125; and/or any number and/or type of local and/or remote data and/or media storage devices 130, such as Hard Disk Drives (HDDs), VCR tapes, DVDs, CDs, flash memory devices, etc. In the exemplary delivery system 100 of fig. 1, at least some primary media content (regardless of its source and/or type) includes embedded audience measurement codes and/or watermarks intentionally inserted by content providers, audience measurement entities, and/or broadcasters 135 to facilitate performing audience measurement and/or audience rating determinations for the primary media content. Exemplary methods and apparatus for inserting and/or embedding audience measurement codes, such as Nielsen codes, in primary content are described below with reference to figs. 18-21 and 38-47. Other exemplary methods and apparatus for inserting and/or embedding audience measurement codes in primary content are described in U.S. Patent Application No. 12/604,176, entitled "Methods and Apparatus to Extract Data Encoded in Media Content," filed on October 22, 2009, the entire contents of which are incorporated herein by reference. Preferred examples of such audience measurement codes include Nielsen Audio Encoding System (NAES) codes (also known as Nielsen codes) owned by The Nielsen Company (US), LLC, the assignee of the present patent. Exemplary NAES codes include those of the NAES II and NAES V audio coding systems. However, any past, present, and/or future NAES code may be used. Other exemplary audience measurement codes include, but are not limited to, codes associated with the Arbitron audio encoding system.
To provide and/or broadcast primary media content, the exemplary delivery system 100 of fig. 1 includes any number and/or type of content providers and/or broadcasters 135, such as RF television stations, Internet Protocol Television (IPTV) broadcasters, Digital Television (DTV) broadcasters, cable television broadcasters, satellite television broadcasters, movie studios, terrestrial radio broadcasters, satellite radio broadcasters, and so forth. In the illustrated example of fig. 1, a content provider and/or broadcaster 135 transmits and/or provides primary media content to the example media server 105 via any desired medium (e.g., satellite broadcast using satellite transmitter 140 and satellite and/or satellite relay 145, terrestrial broadcast, cable television broadcast, internet 125, and/or media storage 130).
To provide secondary content that may or may not be related to the primary media content being presented at and/or via the media server 105 and/or the primary content presentation device 110, the exemplary primary media content and secondary content delivery system 100 of fig. 1 includes any number and/or type of secondary content presentation devices, one of which is indicated at reference numeral 150. The exemplary auxiliary content presentation device 150 of FIG. 1 is a client device, a consumer device, and/or a user device. Exemplary auxiliary content presentation devices 150 include, but are not limited to, handheld computers, PDAs, cellular phones, smart phones, laptops, netbook computers, and/or any other type of handheld, mobile, and/or portable auxiliary content presentation device capable of presenting primary media content and/or auxiliary content to a person. In the illustrated example of fig. 1, the auxiliary content presentation device 150 may communicate with other devices of the LAN 155 (e.g., the media server 105) via any number and/or type of wireless routers and/or wireless access points, one of which is indicated by reference numeral 160. The exemplary auxiliary content presentation device 150 may communicate with the internet 125 via the exemplary LAN 155 and/or via a cellular base station 165. Also, although not depicted in FIG. 1, the auxiliary content presentation device 150 may be communicatively connected to the LAN 155 via a wired communication protocol and/or communication signals.
To provide secondary content identified via primary broadcast media content, the exemplary secondary content presentation device 150 of fig. 1 includes a secondary content module 170. The example secondary content module 170 of fig. 1 detects the presence of codes and/or watermarks in the free-field radiated audio signals 172 and 173, for example, emitted by one or more speakers of the media server 105 and/or the primary content presentation device 110. Upon detecting the code, the example auxiliary content module 170 obtains auxiliary content associated with the detected code and/or watermark from the auxiliary content server 175 and/or the media server 105 via the wireless router 160 and/or the base station 165, and presents the auxiliary content so obtained on the display 330 (fig. 3) of the auxiliary content presentation device 150. The manner in which the exemplary auxiliary content module 170 and/or, more generally, the exemplary auxiliary content presentation device 150 are implemented is described below in connection with fig. 3, 17, and 25. Exemplary methods and apparatus to detect and decode codes and/or watermarks embedded in audio signals 172 and 173 are described below in conjunction with fig. 18, 22, 23, 38, and 48-56. Other exemplary Methods and Apparatus for detecting and decoding codes and/or watermarks embedded in audio signals 172 and 173 are described in U.S. patent application No.12/604,176 entitled "Methods and Apparatus to Extract Data Encoded in Media Content", filed on 22.10.2009.
As described below in connection with fig. 2, in some examples the media server 105 includes a secondary content trigger 180 that detects and decodes a code and/or watermark embedded in the primary media content and/or in a non-payload portion of the primary media content (e.g., a header of a data stream conveying the primary media content) and triggers the secondary content presentation device 150 to retrieve and/or present the secondary content via, for example, a Bluetooth signal and/or a wireless LAN signal. Such triggers include and/or identify the codes detected and/or decoded by the secondary content trigger 180. In some examples, the detection and/or decoding of the code at the media server 105 may occur simultaneously with the presentation of the primary media content via the primary content presentation device 110. When triggered by the secondary content trigger 180, the secondary content presentation device 150 retrieves and presents the secondary content associated with the decoded code, as described above. Alternatively, the trigger may include pushing the secondary content from the media server 105 to the secondary content presentation device 150 such that the secondary content presentation device 150 need not request the secondary media content. The methods and apparatus described below in connection with figs. 18, 22, 23, 38, and 48-56 and/or U.S. Patent Application No. 12/604,176 may be used to implement the exemplary secondary content trigger 180. An exemplary manner of implementing the exemplary media server 105 of fig. 1 is described below in conjunction with fig. 2.
Additionally or alternatively, the example media server 105 of fig. 1 may implement the auxiliary content service module 185, the auxiliary content service module 185 allowing the example auxiliary content module 170 to obtain auxiliary content from the media server 105 and/or from the auxiliary content server 175. Thus, the auxiliary content module 170 and/or, more generally, the example auxiliary content presentation device 150 can present locally cached and/or available auxiliary content as well as auxiliary content available via the internet 125.
The exemplary auxiliary content server 175 of fig. 1 is responsive to queries for auxiliary content. For example, when the secondary content presentation device 150 of fig. 1 provides codes and/or watermarks decoded from the primary media content, the secondary content server 175 provides to the secondary content presentation device 150 one or more links (e.g., URLs) to secondary content, one or more items of secondary content (e.g., web pages, titles, images, video clips, etc.), and/or tuning information (e.g., for tuning to a mobile DTV signal, channel, and/or broadcast, and/or to an IPTV signal, channel, and/or multicast) that may be used to obtain the secondary media content. The auxiliary content presentation device 150 preferably activates the URL and/or tuning information automatically to obtain and begin displaying the auxiliary media content. As described below, a filter may be utilized to determine whether a given URL and/or tuning information is automatically activated. In some examples, the secondary content server 175 and/or the rating server 190 identifies the primary media content and/or the portion of the primary media content associated with the code and/or watermark. Such identification is useful for audience measurement purposes. An exemplary manner of implementing the exemplary auxiliary content server 175 of fig. 1 is described below in conjunction with figs. 11 and 32.
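For concreteness, the query/response exchange handled by the auxiliary content server 175 might resemble the following sketch, built with Python's standard library. The endpoint, the query parameter names (`code`, `ts`), and the lookup table are hypothetical assumptions, not a documented interface.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical mapping from a decoded (code, coarse timestamp) pair to
# links, content, and/or tuning information for the secondary content.
CONTENT_TABLE = {
    ("WXYZ-1234", "20:00"): {
        "links": ["http://example.com/trivia", "http://example.com/buy"],
        "tuning": {"type": "mobile-dtv", "channel": 31},
    },
}

class SecondaryContentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        key = (query.get("code", [""])[0], query.get("ts", [""])[0])
        body = json.dumps(CONTENT_TABLE.get(key, {"links": []})).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run the sketch:
# HTTPServer(("", 8080), SecondaryContentHandler).serve_forever()
```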
To determine audience rating information, the exemplary delivery system of FIG. 1 includes an exemplary rating server 190. The example rating server 190 of fig. 1 develops meaningful content listening and/or viewership statistics using the embedded codes, computed signatures, and/or non-payload data and/or information detected, decoded, extracted, and/or computed by the secondary content presentation device 150, the media server 105, audience measurement devices (not shown) associated with the primary content presentation device 110, and/or similar devices at other locations. For example, by processing the collected data (e.g., codes, URLs, personal identification information, etc.) using any number and/or type of statistical methods, the example rating server 190 may determine the overall effectiveness, reach, and/or audience demographics of the primary media content and/or the secondary content. These ratings may relate to the primary content, the secondary content, or both. In some examples, the media server 105 and/or the auxiliary content presentation device 150 stores a log of audience measurement data and sends the collected data to the rating server 190 for processing periodically (e.g., every day) and/or aperiodically. Additionally or alternatively, each code, signature, and/or item of non-payload information may be provided to the rating server 190 as it is detected, extracted, computed, and/or decoded. Accesses to the auxiliary content (e.g., by activating a URL) are also preferably recorded and provided to the rating server 190.
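A minimal sketch of the logging and periodic upload just described might look as follows; the log file name, record fields, and `send` callable are hypothetical.

```python
import json
import time
from pathlib import Path

LOG = Path("audience_log.jsonl")   # hypothetical on-device log file

def record_exposure(kind, value):
    """Append one detected code, computed signature, or activated URL."""
    entry = {"t": time.time(), "kind": kind, "value": value}
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def upload(send):
    """Send the collected log to the rating server, e.g., once per day;
    `send` is whatever transport the device uses (hypothetical)."""
    if LOG.exists():
        send(LOG.read_text())
        LOG.unlink()               # clear the log after a successful upload

# record_exposure("code", "WXYZ-1234")
# record_exposure("url", "http://example.com/trivia")
```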
Fig. 2 illustrates an exemplary manner of implementing the exemplary media server 105 of fig. 1. To receive primary broadcast media content and/or secondary content from the content provider 135, the exemplary media server 105 of fig. 2 includes any number and/or type of broadcast input interfaces, one of which is designated with reference numeral 205. The example broadcast input interface 205 of fig. 2 receives broadcast primary media content via any number and/or type of devices, modules, circuits, and/or interfaces (e.g., an RF tuner that can be configured to receive a selected terrestrial broadcast television signal).
To decode the primary media signal, the example media server 105 of fig. 2 includes a media decoder 210. If the media signal received via the broadcast input interface 205 is encoded and/or encrypted, the example media decoder 210 decodes and/or decrypts the primary media signal, for example, to a form suitable for output to the example primary content presentation device 110 via the presentation device interface 215. Exemplary presentation device interfaces 215 include, but are not limited to, an RF output module, a component video output module, and/or a high-definition multimedia interface (HDMI) module.
To store received media content, the example media server 105 of FIG. 2 includes an example media store 130. Additionally or alternatively, the media server 105 may be communicatively connected to removable media storage, such as a DVD reader or CD reader, and/or to an external storage device. The exemplary media storage 130 is an HDD.
To trigger the exemplary secondary content presentation device 150 to retrieve and present secondary content related to the primary media content, the exemplary media server 105 of FIG. 2 includes an exemplary secondary content trigger 180. When the example secondary content trigger 180 of fig. 2 detects a code embedded in primary media content that may currently be presented and/or an identifier contained in a non-payload portion of the primary media content (e.g., a PID, a SID, and/or a timestamp contained in one or more packet headers), the secondary content trigger 180 sends a notification to the secondary content presentation device 150 via, for example, any type of short-range wireless interface (such as the Bluetooth interface 220) and/or any type of wireless LAN interface 225. Some examples do not include the secondary content trigger 180, but instead rely on the secondary content presentation device 150 to detect an inaudible code and utilize the code to retrieve secondary content. The trigger sent by the secondary content trigger 180 to the secondary content presentation device 150 may include a code detected in the primary media content, a signature and/or fingerprint calculated based on the primary media content, and/or an identifier contained in a non-payload portion of the primary media content (e.g., a PID, a SID, and/or a timestamp contained in one or more packet headers).
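As an illustration, a trigger notification carrying a detected code, signature, and/or non-payload identifiers might be sent over the wireless LAN as in the following sketch. The JSON field names, port, and address are hypothetical, and a Bluetooth transport would differ only in the socket layer.

```python
import json
import socket

def send_trigger(host, port, code=None, signature=None,
                 pid=None, sid=None, timestamp=None):
    """Send one trigger notification to the secondary content presentation
    device over the wireless LAN. Only fields that were actually detected
    are included; the field names are hypothetical."""
    payload = {k: v for k, v in {
        "code": code, "signature": signature,
        "pid": pid, "sid": sid, "timestamp": timestamp,
    }.items() if v is not None}
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(json.dumps(payload).encode() + b"\n")

# e.g., after detecting a code in the currently presented primary content:
# send_trigger("192.168.1.42", 5150, code="WXYZ-1234", timestamp=7200)
```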
To provide the auxiliary content, the exemplary media server 105 of FIG. 2 includes an exemplary auxiliary content service module 185. When a request for auxiliary content is received from the example auxiliary content presentation device 150 via the wireless interface 225 and/or any type of wired communication interface 230, the auxiliary content service module 185 of FIG. 2 queries the example media store 130 for auxiliary content associated with the detected code and returns the auxiliary content to the auxiliary content presentation device 150. As mentioned above, the request for auxiliary content is preferably triggered by a detected inaudible code. The secondary content may or may not be related to the primary media content. The auxiliary content may be received by the media server 105 via the broadcast input interface 205 and stored and/or cached in the media store 130. In some examples, the secondary content may be received at the media server 105 in conjunction with the primary media content (e.g., on a secondary channel of a DTV broadcast). Additionally or alternatively, the secondary content may be received via one or more separate program streams using the same or different communication media as the primary content stream, and/or the secondary content may be pushed to the media server 105 by the secondary content server 175. Some examples do not include the auxiliary content service module 185, but instead rely on the auxiliary content presentation device 150 to detect the inaudible code and use the code to retrieve auxiliary content from the secondary content server 175.
Although an example manner of implementing the example media server 105 of fig. 1 has been illustrated in fig. 2, one or more of the interfaces, data structures, elements, processes, and/or devices illustrated in fig. 2 may be combined, divided, rearranged, omitted, eliminated, and/or implemented in any other way. Further, the example broadcast input interface 205, the example media decoder 210, the example presentation device interface 215, the example Bluetooth interface 220, the example wireless interface 225, the example communication interface 230, the example media storage 130, the example auxiliary content trigger 180, the example auxiliary content service module 185, and/or, more generally, the example media server 105 of fig. 2 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example broadcast input interface 205, the example media decoder 210, the example presentation device interface 215, the example Bluetooth interface 220, the example wireless interface 225, the example communication interface 230, the example media storage 130, the example auxiliary content trigger 180, the example auxiliary content service module 185, and/or, more generally, the example media server 105 of fig. 2 may be implemented by one or more circuits, programmable processors, Application Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), Field Programmable Logic Devices (FPLDs), and/or Field Programmable Gate Arrays (FPGAs), etc. When any apparatus claim of this patent that contains one or more of these elements is read to cover a purely software and/or firmware implementation, at least one of the example broadcast input interface 205, the example media decoder 210, the example presentation device interface 215, the example Bluetooth interface 220, the example wireless interface 225, the example communication interface 230, the example media storage 130, the example auxiliary content trigger 180, the example auxiliary content service module 185, and/or, more generally, the example media server 105 of fig. 2 is hereby expressly defined to include an article of manufacture, such as the tangible computer-readable media described below in connection with fig. 17, storing the firmware and/or software. Further still, the example media server 105 may include interfaces, data structures, elements, processes, and/or devices instead of, or in addition to, those illustrated in fig. 2, and/or may include more than one of any or all of the illustrated interfaces, data structures, elements, processes, and/or devices.
FIG. 3 illustrates an exemplary manner of implementing the exemplary auxiliary content presentation device 150 of FIG. 1. To receive the free-field radiated audio signals 172 and 173 of fig. 1, the exemplary auxiliary content presentation device 150 of fig. 3 includes any type of audio input interface 305, such as a microphone. To detect and/or decode codes and/or watermarks present in the audio signals 172 and 173, the example secondary content presentation device 150 of fig. 3 includes a decoder 310. Example apparatus and methods that may be used to implement the example decoder 310 of fig. 3 are described below in conjunction with figs. 18, 22, 23, 38, and 48-56. In some examples, the decoder 310 does not operate continuously, in order to conserve battery life. Instead, the decoder 310 may be activated aperiodically and/or periodically to determine whether the primary media content has changed. During time intervals when the decoder 310 is turned off and/or in a standby mode, and/or to compensate and/or account for delays in transmitting or receiving the auxiliary content, the auxiliary content module 170 may continue to present auxiliary content according to an auxiliary content schedule, such as described below in connection with figs. 26 and 27. In some examples, the secondary content is transmitted to the secondary content presentation device 150, using the secondary content schedule, ahead of the time at which the secondary content is to be presented, to account for content delivery delays and/or interruptions in network connectivity. That is, the secondary content may be delivered in non-real time (e.g., earlier) even though the secondary content is presented at a specified location in the primary media content in substantially real time.
It should be appreciated that turning off the decoder 310 may affect the ability of the secondary content presentation device 150 to quickly detect a change in the primary media content. To reduce such effects, the decoder 310 may continue operating to detect SIDs, thereby maintaining, for example, responsiveness to channel changes, while detecting, decoding, and/or verifying timestamps less frequently. The frequency with which timestamps are detected, decoded, and/or verified may be adjusted according to the length of time covered by a particular secondary content schedule and/or to achieve a desired level of time synchronization between the primary media content and the secondary media content. For example, when the primary media content is pre-recorded, the timestamps may be detected continuously in order to account for skipping of commercials and/or other portions of the primary media content.
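The duty-cycling just described might be sketched as follows, assuming hypothetical `decode_sid` and `decode_timestamp` helpers supplied by the decoder 310 and caller-provided callbacks; the loop timing constants are arbitrary.

```python
import time

def duty_cycled_decode(decode_sid, decode_timestamp, on_channel_change,
                       on_timestamp, timestamp_period=30.0):
    """Keep SID detection running so channel changes are caught quickly,
    while decoding and/or verifying timestamps only every
    `timestamp_period` seconds. Pass a period of 0 to decode timestamps
    continuously (e.g., for pre-recorded content that may be skipped
    through)."""
    last_sid, last_check = None, 0.0
    while True:
        sid = decode_sid()                       # cheap; runs every pass
        if sid is not None and sid != last_sid:
            on_channel_change(sid)
            last_sid, last_check = sid, 0.0      # re-sync after a change
        if time.time() - last_check >= timestamp_period:
            ts = decode_timestamp()              # costlier; runs less often
            if ts is not None:
                on_timestamp(ts)
            last_check = time.time()
        time.sleep(0.25)
```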
To retrieve and present the auxiliary content, the exemplary auxiliary content presentation device 150 of FIG. 3 includes the exemplary auxiliary content module 170. When the example decoder 310 detects an embedded code and/or watermark, the example auxiliary content module 170 of fig. 3 queries the auxiliary content server 175 and/or the example media server 105 via any type of wireless LAN interface 315 and/or any type of cellular interface 320. In response to the query, the example auxiliary content module 170 receives one or more items of auxiliary content and/or one or more links (e.g., URLs) to the auxiliary content. An exemplary manner of implementing the exemplary auxiliary content module 170 is described below in conjunction with fig. 25.
To present auxiliary content and the like, the exemplary auxiliary content presentation device 150 of FIG. 3 includes any type of user interface module 325 and any type of display 330. The example auxiliary content module 170 of fig. 3 generates and provides one or more user interfaces to the user interface module 325 that present, describe, and/or allow a user to view, select, and/or activate auxiliary content. Exemplary user interfaces that may be used to present, describe, and/or allow a user to select auxiliary content are described below in connection with fig. 4 and 5. The exemplary user interface generated by the exemplary auxiliary content module 170 may be responsive to user input and/or selections received via any number and/or type of input devices 335.
In some examples, the user interface module 325 of fig. 3 is implemented in conjunction with an Operating System (OS) executing on a processor (not shown) of the auxiliary content presentation device 150. In such an example, the auxiliary content module 170 may be implemented as a software application executing within the OS that accesses an Application Programming Interface (API) implemented by the OS to enable a user interface to be displayed on the display 330 and to receive user input via the input device 335. Example machine-useable instructions that may be executed to implement the example auxiliary content module 170 of fig. 3 are described below in conjunction with fig. 17.
To store the primary and/or secondary media content, etc. received and/or obtained by the secondary content module 170, the example secondary content presentation device 150 of fig. 3 includes any number and/or type of media stores (one of which is indicated by reference numeral 340). In some examples, the secondary content may be obtained, cached, and/or pushed to the secondary content presentation device 150 and stored in the media store 340 prior to and/or after presentation via the user interface module 325 and display 330.
In some examples, the example auxiliary content module 170 of fig. 3 may be triggered to obtain, receive, and/or present auxiliary content and/or links to auxiliary content via any type of short-range wireless interface, such as the Bluetooth interface 345, and/or via the example wireless LAN interface 315. The triggers received via the Bluetooth interface 345 and/or the wireless LAN interface 315 include embedded codes and/or non-payload codes (e.g., a PID, a SID, and/or a timestamp extracted from one or more packet headers) detected at the media server 105.
Although an exemplary manner of implementing the exemplary auxiliary content presentation device 150 of fig. 1 is shown in fig. 3, one or more of the interfaces, data structures, elements, processes, and/or devices shown in fig. 3 may be combined, divided, rearranged, omitted, eliminated, and/or implemented in any other way. Further, the example audio input interface 305, the example decoder 310, the example wireless interface 315, the example cellular interface 320, the example user interface module 325, the example display 330, the example input device 335, the example media storage 340, the example Bluetooth interface 345, the example auxiliary content module 170, and/or, more generally, the example auxiliary content presentation device 150 of fig. 3 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example audio input interface 305, the example decoder 310, the example wireless interface 315, the example cellular interface 320, the example user interface module 325, the example display 330, the example input device 335, the example media storage 340, the example Bluetooth interface 345, the example auxiliary content module 170, and/or, more generally, the example auxiliary content presentation device 150 may be implemented by one or more circuits, programmable processors, ASICs, PLDs, FPLDs, and/or FPGAs, etc. When any apparatus claim of this patent that contains one or more of these elements is read to cover a purely software and/or firmware implementation, at least one of the example audio input interface 305, the example decoder 310, the example wireless interface 315, the example cellular interface 320, the example user interface module 325, the example display 330, the example input device 335, the example media storage 340, the example Bluetooth interface 345, the example auxiliary content module 170, and/or, more generally, the example auxiliary content presentation device 150 is hereby expressly defined to include a tangible article of manufacture, such as the tangible computer-readable media described below in connection with fig. 17, storing the firmware and/or software. Additionally, the exemplary auxiliary content presentation device 150 may include interfaces, data structures, elements, processes, and/or devices instead of, or in addition to, those illustrated in fig. 3, and/or may include more than one of any or all of the illustrated interfaces, data structures, elements, processes, and/or devices.
Figs. 4 and 5 illustrate exemplary user interfaces that may be presented via the exemplary display 330 of fig. 3 to present auxiliary content to a user. Each of the exemplary user interfaces of figs. 4 and 5 includes an upper title portion 405, the upper title portion 405 including a broadcast source identifier 410 and a current time 415. An exemplary broadcast source identifier 410 is a logo associated with the broadcaster and/or content provider 135.
Each of the exemplary user interfaces of figs. 4 and 5 also includes an intermediate toolbar section 420, the intermediate toolbar section 420 including one or more activatable user interface buttons and/or elements (one of which is indicated by reference numeral 425). The exemplary button 425 allows the user to adjust settings associated with the exemplary secondary content module 170, access and/or utilize social networking features implemented by the secondary content module 170, save secondary content for subsequent retrieval, obtain information related to people appearing in the primary media content, and the like.
The exemplary user interface of fig. 4 includes a lower portion 430, the lower portion 430 displaying one or more user-selectable and/or activatable elements (e.g., icons, bitmap images, text, links, etc., one of which is indicated by reference numeral 435). As shown in fig. 5, when a user activates and/or selects a particular link 435, the lower portion of the exemplary user interface of fig. 4 is replaced with auxiliary content 505 associated with the element 435. For example, the auxiliary content 505 may be a web page associated with, or facilitating the purchase of, a particular product currently advertised in a commercial portion of the primary media content presented at the primary content presentation device 110. As shown in fig. 5, a button 510 is provided to stop the display of the auxiliary content 505 and return to the list of selectable elements 435 shown in fig. 4. When more auxiliary content elements are available than fit within the display area of the lower portion 430, the exemplary user interface of fig. 4 may include a navigation element to allow the user to navigate through the selectable elements 435.
Although user interfaces are shown in fig. 4 and 5, any number and/or type of other and/or alternative user interfaces may be used. For example, one or more described elements may be combined, divided, rearranged, omitted, eliminated, and/or implemented in any other way. Moreover, the exemplary user interface may include elements that are alternatives to or supplements the elements shown in fig. 4 and/or 5, and/or may include more than one of any or all of the elements shown.
Figs. 6, 7, 8, 9, and 10 illustrate exemplary auxiliary content delivery scenarios that may be performed by the exemplary delivery system 100 of fig. 1. Although the examples shown in figs. 6-10 are described sequentially, the activities of detecting a code, detecting a timestamp T(n), obtaining auxiliary content, obtaining a link to auxiliary content, obtaining an auxiliary content schedule, displaying an auxiliary content link, and displaying auxiliary content may occur substantially in parallel, as described below in connection with fig. 17. Moreover, the auxiliary content may be presented without providing and/or presenting an intervening link and/or offer of the content. As described below in conjunction with figs. 25-31, the auxiliary content may additionally or alternatively be provided and/or displayed based on an auxiliary content schedule. The auxiliary content schedule defines when secondary media content is to be presented within the primary media content. The auxiliary content schedule may be used to eliminate the need for repeated, ongoing, and/or continuous interactions that may consume network bandwidth, may cause a loss of synchronization between the auxiliary content server 175 and/or the auxiliary content presentation device 150, and/or may make the exemplary system 100 of fig. 1 more sensitive to bandwidth limitations and/or transmission latencies. Thus, although the exemplary scenarios of figs. 6-10 are described with reference to particular auxiliary media content items, the exemplary scenarios of figs. 6-10 may additionally or alternatively be used to provide an auxiliary content schedule.
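For concreteness, an auxiliary content schedule might be represented as in the following sketch; the field names and the prefetch horizon are hypothetical, and the `to_prefetch` helper reflects the delivery-ahead-of-time use of schedules described earlier in connection with fig. 3.

```python
from dataclasses import dataclass, field

@dataclass
class ScheduleEntry:
    offset_seconds: float     # position within the primary media content
    content_url: str          # where the secondary content can be fetched
    prefetch: bool = True     # deliver ahead of time to ride out outages

@dataclass
class SecondaryContentSchedule:
    content_id: str           # identifies the primary media content
    entries: list[ScheduleEntry] = field(default_factory=list)

    def due(self, media_time, window=1.0):
        """Entries whose offset substantially corresponds to media_time."""
        return [e for e in self.entries
                if abs(e.offset_seconds - media_time) <= window]

    def to_prefetch(self, media_time, horizon=120.0):
        """Entries coming up within `horizon` seconds: fetch them now so
        presentation can continue even if connectivity drops."""
        return [e for e in self.entries
                if e.prefetch and 0 < e.offset_seconds - media_time <= horizon]
```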
The exemplary secondary content delivery scenario of fig. 6 begins with the exemplary media server 105 receiving primary media content 605 via the exemplary broadcast input interface 205. The example media server 105 and/or the example primary content presentation device 110 emit and/or output free-field radiated audio signals 172, 173 associated with the primary media content 605 via, for example, one or more speakers.
When the example decoder 310 of the secondary content presentation device 150 detects the code 615 in the audio 172, 173 (block 610), the secondary content module 170 provides the code 615 to the example rating server 190 to facilitate audience measurement. The exemplary auxiliary content module 170 also queries the content server 175 based on the code 615 to receive one or more links 620 to auxiliary content. The user interface module 325 and the exemplary auxiliary content module 170 display the obtained links 620 using, for example, the exemplary user interface of fig. 4. When the user selects and/or activates one of the links 620 (block 625), the auxiliary content module 170 sends an identifier 630 associated with the link to the rating server 190, obtains the auxiliary content associated with the selected and/or activated link from the content server 175 (routes 635 and 640), and displays the obtained auxiliary content 640 using, for example, the user interface of fig. 5 (block 645). Interaction with the rating server 190 may be omitted in the illustrated example of fig. 6 if audience measurement data does not need to be collected.
Turning to fig. 7, the first portion of the exemplary scenario of fig. 7 is the same as the first portion of the exemplary scenario of fig. 6. Accordingly, the same reference numerals are used in the first part of fig. 6 and 7, and the interested reader is referred to the discussion set forth above in connection with fig. 6 for like numbered elements.
In the illustrated example of fig. 7, a link 620 to the auxiliary content is obtained from the auxiliary content server 175; however, the auxiliary content 710 is retrieved and/or obtained from the media server 105 rather than the content server 175. Thus, when a particular link 620 is selected and/or activated (block 625), the auxiliary content module 170 sends a request 705 to the media server 105 for auxiliary content 710 associated with the selected link 620 and receives the auxiliary content 710 from the media server 105. The auxiliary content module 170 then displays the auxiliary content 710 obtained from the media server 105 using, for example, the user interface of fig. 5 (block 715). Interaction with the rating server 190 may be omitted in the example shown in fig. 7 if audience measurement data does not need to be collected.
In the illustrated example of fig. 8, the auxiliary content presentation device 150 obtains the auxiliary content and the link to the auxiliary content from the media server 105 instead of from the auxiliary content server 175. Thus, in the illustrated example of fig. 8, the auxiliary content presentation device 150 need not interact with the auxiliary content server 175. The exemplary secondary content delivery scenario of fig. 8 begins with the exemplary media server 105 receiving primary media content 805 via the exemplary broadcast input interface 205. The example media server 105 and/or the example primary content presentation device 110 emit and/or output free-field radiated audio signals 172, 173 associated with the primary media content 805 via, for example, one or more speakers.
When the exemplary auxiliary content service module 185 receives the auxiliary content 820, the exemplary auxiliary content service module 185 stores and/or caches the auxiliary content 820 in the media store 130 (block 825).
When the example decoder 310 of the secondary content presentation device 150 detects the code 815 in the audio 172, 173 (block 810), the secondary content module 170 provides the code 815 to the example rating server 190 to facilitate audience measurement. The exemplary auxiliary content module 170 queries the auxiliary content service module 185 based on the code 815 and receives one or more links 835 to auxiliary content. The user interface module 325 and the exemplary auxiliary content module 170 display the obtained links 835 using, for example, the exemplary user interface of fig. 4. When one of the links 835 is selected and/or activated (block 840), the auxiliary content module 170 sends an identifier 845 associated with the selected link 835 to the rating server 190, obtains the content 855 associated with the selected and/or activated link 835 from the media server 105 (routes 850 and 855), and displays the obtained content 855 using, for example, the user interface of fig. 5 (block 860). Interaction with the rating server 190 may be omitted in the example shown in fig. 8 if audience measurement data does not need to be collected.
In the example illustrated in fig. 9, the media server 105 detects a code in the primary media content and triggers the presentation of the secondary content at the secondary content presentation device 150. The exemplary secondary content delivery scenario of fig. 9 begins with the exemplary media server 105 receiving primary media content 905 via the exemplary broadcast input interface 205. When the exemplary auxiliary content service module 185 receives the auxiliary content 910, the auxiliary content service module 185 stores and/or caches the auxiliary content 910 in the media store 130 (block 915).
When the secondary content trigger 180 detects the code 925 in the primary media content 905 (block 920), the secondary content service module 185 sends the code 925 to the rating server 190, and the secondary content trigger 180 sends a trigger 930 to the secondary content presentation device 150 via the Bluetooth interface 220 and/or the wireless interface 225. The secondary content service module 185 also sends a link 935 associated with the detected code 925 to the secondary content presentation device 150. In an alternative example, rather than sending the link 935 and/or the trigger 930, the secondary content service module 185 pushes the secondary content to the secondary content presentation device 150. Additionally or alternatively, the trigger 930 sent by the secondary content trigger 180 to the secondary content presentation device 150 may include a code detected in the primary media content 905, a signature and/or fingerprint calculated based on the primary media content 905, and/or an identifier contained in a non-payload portion of the primary media content 905 (e.g., a PID, a SID, and/or a timestamp contained in one or more packet headers).
The user interface module 325 and the exemplary auxiliary content module 170 display the provided links 935 (or auxiliary content) using, for example, the exemplary user interface of fig. 4. When one of the links 935 is selected and/or activated (block 940), the auxiliary content module 170 obtains auxiliary content 950 associated with the selected and/or activated link 935 from the content server 175 (routes 945 and 950) and displays the obtained content 950 using, for example, the user interface of fig. 5 (block 960). In response to the request 945, the secondary content service module 185 sends the content identifier 955 associated with the selected link 935 to the rating server 190. Interaction with the rating server 190 may be omitted in the example shown in fig. 9 if audience measurement data does not need to be collected.
In the illustrated example of fig. 10, the content server 175 caches and/or pre-stores secondary content on the secondary content presentation device 150 for the identified primary media content. The exemplary secondary content delivery scenario of fig. 10 begins with the exemplary media server 105 receiving primary media content 1005 via the exemplary broadcast input interface 205. The example media server 105 and/or the example primary content presentation device 110 emit and/or output free-field radiated audio signals 172, 173 associated with the primary media content 1005 via, for example, one or more speakers.
When the example decoder 310 of the auxiliary content presentation device 150 detects the code 1015 in the audio 172, 173 (block 1010), the auxiliary content module 170 provides the code 1015 to the example rating server 190 to facilitate audience measurement. The exemplary secondary content module 170 queries the secondary content server 175 based on the code 1015 and receives the secondary content 1025 for the primary media content 1005. The secondary content 1025 is stored and/or cached in the example media storage 340 (block 1030).
The user interface module 325 and the exemplary secondary content module 170 display the secondary content 1025 using, for example, the exemplary user interface of fig. 5 and/or display the links 435 associated with the secondary content 1025 using, for example, the exemplary user interface of fig. 4 (block 1040). In some examples, the secondary content 1025, when displayed, may contain one or more selectable and/or activatable links. When a particular link is selected (block 1045), the secondary content module 170 sends the identifier 1050 of the selected and/or activated link to the rating server 190 and checks whether secondary content associated with the identifier 1050 is cached (block 1055). If the secondary content associated with the selected link is cached in the media storage 340 (block 1055), the secondary content module 170 retrieves the secondary content 1025 associated with the selected link from the media storage 340 and displays the retrieved secondary content using, for example, the user interface of fig. 5 (block 1040). If the secondary content associated with the selected link is not available in the media store 340 (block 1055), the secondary content module 170 queries the content server 175 based on the identifier 1060 associated with the selected link and receives the secondary content 1065 associated with the selected link from the secondary content server 175.
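The cache-first retrieval of blocks 1055, 1040, 1060, and 1065 might be sketched as follows; `media_store` and `content_server` are hypothetical stand-ins for the media storage 340 and the secondary content server 175.

```python
def fetch_secondary_content(identifier, media_store, content_server):
    """Cache-first retrieval: use the local media store when the content
    was previously pushed and/or cached, otherwise fall back to the server.
    `media_store` is any dict-like cache; `content_server` is a callable
    that queries the server by identifier."""
    cached = media_store.get(identifier)
    if cached is not None:
        return cached                      # block 1055 -> block 1040 path
    content = content_server(identifier)   # routes 1060 and 1065
    media_store[identifier] = content      # cache for next time
    return content

# Hypothetical usage:
# content = fetch_secondary_content("link-1050", media_store_340, query_server_175)
```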
In an alternative example, rather than the secondary content presentation device 150 retrieving the links and/or the secondary content in response to the code 1015, the secondary content server 175 can push the secondary content to the secondary content presentation device 150 regardless of any code detection at the secondary content presentation device 150. In such an alternative, the secondary content module 170 may query the media store 340 for the secondary content 1025 associated with the detected code 1015 prior to querying the secondary content server 175 for the secondary content 1025. Interaction with the rating server 190 may be omitted in the example shown in fig. 10 if audience measurement data does not need to be collected.
Fig. 11 illustrates an exemplary manner of implementing the exemplary auxiliary content server 175 of fig. 1. To enable the person 1105 to define and/or associate secondary content with primary media content and/or portions of primary media content, the example secondary content server 175 of FIG. 11 includes a client interface 1110. The example client interface 1110 of fig. 11 is a network interface and/or a custom API that enables the person 1105 to interact with the action register 1115 to define an "action" (e.g., specific auxiliary content) and store the defined action in the action database 1120. An exemplary data structure that may be used to implement exemplary action database 1120 is described below in conjunction with FIG. 12.
The example client interface 1110 of fig. 11 also enables the person 1105 to interact with an action scheduler 1125 to associate defined actions stored in the action database 1120 with particular primary media content and/or portions of primary media content identified in the programming database 1130. The association of actions with primary media content and/or portions of primary media content is stored in the content database 1135. In some examples, the cost of associating an action with primary media content and/or portions of primary media content depends on the time, number of days, number of times, etc. of the action to be associated. An exemplary data structure that may be used to implement exemplary content database 1135 is described below in conjunction with FIG. 13.
In some examples, action scheduler 1125 of fig. 11 builds and/or edits a schedule of secondary content provided in response to identified primary media content. Such a secondary content schedule defines one or more secondary content items to be presented at and/or during a particular time of the identified primary media content. An exemplary data structure that may be used to represent the auxiliary content schedule is described below in conjunction with fig. 26 and 27.
To enable the example media server 105 and/or the auxiliary content presentation device 150 to query for and/or obtain auxiliary content, the example auxiliary content server 175 of fig. 11 includes an action server 1140. When a code and optional timestamp are received from the media server 105 or the auxiliary content presentation device 150, the example action server 1140 identifies the primary media content and/or the portion of the primary media content associated with the received code. Based on the identified primary media content and/or portion of the primary media content, the action server 1140 queries the content database 1135 to identify actions (e.g., auxiliary content) and/or auxiliary content schedules associated with the code and optional timestamp. The action server 1140 returns the identified action and/or the identified auxiliary content schedule to the requesting media server 105 or auxiliary content presentation device 150. In some examples, the action server 1140 provides information to an action auditor 1145 regarding which codes have triggered accesses to auxiliary content, and the action auditor 1145 may be provided at or associated with the rating server 190.
Based on the access information provided by the example action server 1140 and/or based on the auxiliary content access and/or selection information provided by the example auxiliary content presentation device 150 and/or the example media server 105, the example action auditor 1145 of fig. 11 tabulates data representing exposures that have occurred (e.g., invitations to view auxiliary content) and/or auxiliary content consumption (e.g., actual "click-throughs"). Such information may be used, for example, to determine the effectiveness of an advertising campaign. Moreover, such information may be used to adjust and/or determine the cost charged to the person 1105 for associating the secondary content with the primary media content and/or portions of the primary media content.
To encode actions, secondary content, invitations to secondary content, and/or links to secondary content into the primary media content, the example secondary content server 175 includes an action encoder 1150. Thus, the codes in the primary media content are not limited to identifying the primary media content solely for audience measurement purposes, and may additionally or alternatively be dual-purpose codes corresponding to the secondary content and/or links to the secondary content. Exemplary manners of implementing the exemplary action encoder 1150 of fig. 11 are described below in conjunction with figs. 18-21 and 38-47.
To establish and/or add auxiliary content items to an auxiliary content schedule based on loyalty and/or affinity groups, the example auxiliary content server 175 of fig. 11 includes a loyalty-based scheduler 1160. The example loyalty-based scheduler 1160 of fig. 11 tabulates the primary media content viewed by different people and selects auxiliary content based on their loyalty to, and/or frequency of consumption of, particular programs, particular advertisements, and/or programs associated with particular content providers 135. Additionally or alternatively, the example loyalty-based scheduler 1160 may add ancillary media content to the ancillary content schedule based on a person's similarity to another person in consumption of, and/or response to, the same programs, advertisements, and/or content providers 135. An exemplary manner of implementing the exemplary loyalty-based scheduler 1160 is described below in connection with FIG. 32.
The example loyalty-based scheduler 1160 may be used to reward a particular user with a particular benefit based on the user's demonstrated loyalty to particular primary media content and/or a particular content provider 135. Loyalty may be determined based on any number and/or type of criteria including, but not limited to, the number of hours spent consuming particular primary media content and/or a particular set of primary media content and/or television broadcasts, the variety of shows viewed on a particular content delivery network, the frequency with which the user activates and/or selects ancillary content offers, and the like. Loyalty may be characterized as a scalar value and/or a category based on any number and/or type of metrics. For example, loyalty may manifest as the number of times an episode of a particular television show was watched over the last ten days.
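As a concrete illustration of that last metric, the following minimal Python sketch counts recent viewings of a show. The viewing-record structure is a hypothetical assumption introduced here for illustration, not a data format described by the disclosure.

```python
# Minimal sketch of one loyalty metric: episodes of a show watched in the
# last ten days. The ViewingRecord structure is hypothetical.
from dataclasses import dataclass

@dataclass
class ViewingRecord:
    show_id: str
    age_days: int  # days since the viewing occurred

def loyalty_score(viewing_log, show_id, window_days=10):
    """Number of times `show_id` was watched within the last `window_days` days."""
    return sum(1 for r in viewing_log
               if r.show_id == show_id and r.age_days <= window_days)

# Example: a user who watched three episodes within the window has score 3.
log = [ViewingRecord("show-a", 2), ViewingRecord("show-a", 6),
       ViewingRecord("show-a", 9), ViewingRecord("show-b", 1)]
assert loyalty_score(log, "show-a") == 3
```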
Although an exemplary manner of implementing the exemplary auxiliary content server 175 of fig. 1 has been illustrated in fig. 11, one or more of the interfaces, data structures, elements, processes and/or devices illustrated in fig. 11 may be combined, divided, rearranged, omitted, eliminated and/or implemented in any other way. Further, the example client interface 1110, the example action register 1115, the example action database 1120, the example action scheduler 1125, the example program database 1130, the example content database 1135, the example action server 1140, the example action auditor 1145, the example action encoder 1150, the example loyalty-based scheduler 1160, and/or, more generally, the example auxiliary content server 175 of fig. 11 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example client interface 1110, the example action register 1115, the example action database 1120, the example action scheduler 1125, the example program database 1130, the example content database 1135, the example action server 1140, the example action auditor 1145, the example action encoder 1150, the example loyalty-based scheduler 1160, and/or, more generally, the example auxiliary content server 175 may be implemented by one or more circuits, programmable processors, ASICs, PLDs, and/or FPGAs, among others. If any apparatus claim of this patent is read to cover a purely software and/or firmware implementation of one or more of these elements, at least one of the example client interface 1110, the example action register 1115, the example action database 1120, the example action scheduler 1125, the example program database 1130, the example content database 1135, the example action server 1140, the example action auditor 1145, the example action encoder 1150, the example loyalty-based scheduler 1160, and/or, more generally, the example auxiliary content server 175 is hereby expressly defined to include a tangible article of manufacture, such as the tangible computer-readable media described below in connection with fig. 17, storing the firmware and/or software. Moreover, the exemplary auxiliary content server 175 may include interfaces, data structures, elements, processes and/or devices instead of, or in addition to, those illustrated in fig. 11 and/or may include more than one of any or all of the illustrated interfaces, data structures, elements, processes and/or devices.
FIG. 12 illustrates an exemplary data structure that may be used to implement the exemplary action database 1120 of FIG. 11. The example data structure of FIG. 12 includes a plurality of entries 1205 for respective actions. To identify the action, each of the example entries 1205 of FIG. 12 includes an action Identifier (ID) field 1210. Each of the example action ID fields 1210 of fig. 12 contains one or more numbers and/or letters that uniquely identify a particular action (e.g., a particular auxiliary content and/or a link to a particular auxiliary content).
To identify the client, the person and/or organization associated with the action, each of the example entries 1205 of FIG. 12 includes a client field 1215. Each of the example client fields 1215 of fig. 12 includes one or more numbers and/or letters that uniquely identify the user, client, and/or organization associated with, defining, leasing, purchasing, and/or owning the code slots and/or auxiliary content associated with the action 1205.
To identify the action type, each of the example entries 1205 of FIG. 12 includes a type field 1220. Each of the example type fields 1220 of fig. 12 includes one or more numbers, letters, and/or codes that identify the type of the action 1205. Exemplary action types include, but are not limited to, accessing a web page, placing a phone call, and/or launching a native Java applet.
To specify the action, each of the example entries 1205 of FIG. 12 includes a script field 1225. Each of the example script fields 1225 of fig. 12 includes text and/or commands that define the action 1205. Exemplary scripts include, but are not limited to, a URL, a telephone number, a target Java applet, and/or an OS command.
To define when an action is valid, each of the example entries 1205 of FIG. 12 includes a valid field 1230. Each of the example valid fields 1230 of fig. 12 includes one or more times and/or dates that define one or more periods during which the action 1205 is valid and/or activatable.
To define how actions are presented, each of the example entries 1205 of FIG. 12 includes an invitation field 1235. Each of the example invitation fields 1235 of fig. 12 defines how an invitation to the action 1205 is displayed, for example, in a lower portion of the example user interface of fig. 4. For example, the invitation field 1235 may define and/or reference a bitmap image to be displayed.
To define whether an action can be saved, each of the example entries 1205 of FIG. 12 includes a save field 1240. Each of the example save fields 1240 of fig. 12 contains a value that represents whether the action 1205 may be saved at the secondary content presentation device 150 and/or the media server 105 for subsequent retrieval and/or display at the secondary content presentation device 150. In some examples, the save field 1240 defines a time period during which the action 1205 may be saved at the secondary content presentation device 150 and/or the media server 105 and/or a time at which the action 1205 is cleared from the cache.
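Taken together, the fields of FIG. 12 suggest a record layout along the following lines. This Python sketch is a hypothetical rendering of the described fields, not a structure mandated by the disclosure.

```python
# One action entry per FIG. 12; comments give the corresponding field numbers.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ActionEntry:
    action_id: str          # field 1210: unique action identifier
    client: str             # field 1215: client/person/organization
    action_type: str        # field 1220: e.g., "web", "phone", "applet"
    script: str             # field 1225: URL, phone number, applet, or OS command
    valid: Tuple[str, str]  # field 1230: (start, end) of the validity period
    invitation: str         # field 1235: e.g., reference to a bitmap to display
    save: bool              # field 1240: whether the action may be cached locally

action_database = {}  # keyed by action_id, as the action register 1115 might do

def register_action(entry: ActionEntry) -> None:
    action_database[entry.action_id] = entry
```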
While an example data structure is shown in FIG. 12 that may be used to implement the example action database 1120 of FIG. 11, one or more entries and/or fields may be combined, divided, rearranged, omitted, eliminated, and/or implemented in any other way. Moreover, the example data structure of FIG. 12 may include fields that are alternatives to or in addition to those shown in FIG. 12, and/or may include more than one of any or all of the fields shown.
FIG. 13 illustrates an exemplary data structure that may be used to implement the exemplary content database 1135 of FIG. 11. The example data structure of fig. 13 includes a plurality of entries 1305 for respective combinations of primary media content and actions (e.g., secondary content). To identify the primary media content and/or any portion thereof, each of the example entries 1305 of fig. 13 includes a content ID field 1310. Each of the example content ID fields 1310 of fig. 13 contains one or more numbers and/or letters that uniquely identify particular primary media content and/or portions of primary media content. An exemplary content ID identifies a particular primary media content and/or a particular segment of the primary media content.
To identify the action, each of the example entries 1305 of FIG. 13 includes an action ID field 1315. Each of the example action ID fields 1315 of fig. 13 contains one or more numbers and/or letters that identify a particular action (e.g., a particular secondary content and/or a link to a particular secondary content) that has been associated with the primary media content and/or a portion of the primary media content identified in the content ID field 1310.
To identify when the action ID 1315 is effectively associated with the content ID 1310, each of the example entries 1305 of FIG. 13 includes a date field 1320 and a time field 1325. Each of the example date fields 1320 of fig. 13 lists and/or defines one or more particular dates on which the action ID 1315 is effectively associated with the content ID 1310. Specifically, when a request is received for auxiliary content associated with the content ID 1310, the auxiliary content associated with the action ID 1315 is returned only if the current date falls within the range and/or set of dates defined by the date field 1320. Likewise, each of the example time fields 1325 lists and/or defines one or more periods of time during which the action ID 1315 is effectively associated with the content ID 1310.
To record accesses of the auxiliary content, each of the example entries 1305 of fig. 13 includes a count field 1330. Each of the example count fields 1330 of fig. 13 includes one or more running values that represent the number of times an invitation to the auxiliary content has been presented (e.g., as shown in the example user interface of fig. 4) and the number of times the corresponding auxiliary content has been presented (e.g., as shown in the example user interface of fig. 5).
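A minimal sketch of the date/time gating and running counts described for FIG. 13 follows; the dictionary layout and key names are assumptions introduced for illustration only.

```python
# Return the action IDs whose validity window (fields 1320/1325) covers the
# current date and time, tallying an exposure in the count field (1330).
import datetime as dt

def lookup_actions(content_db, content_id, now=None):
    now = now or dt.datetime.now()
    matches = []
    for entry in content_db:                       # entries 1305
        if entry["content_id"] != content_id:      # field 1310
            continue
        if now.date() not in entry["dates"]:       # field 1320: set of dates
            continue
        start, end = entry["time_window"]          # field 1325: (time, time)
        if start <= now.time() <= end:
            entry["count"] += 1                    # field 1330: running tally
            matches.append(entry["action_id"])     # field 1315
    return matches
```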
Although an example data structure has been illustrated in FIG. 13 as may be used to implement the example content database 1135 of FIG. 11, one or more entries and/or fields may be combined, divided, rearranged, omitted, eliminated, and/or implemented in any other way. Moreover, the example data structure of FIG. 13 may include fields that are alternatives to or in addition to those shown in FIG. 13, and/or may include more than one of any or all of the fields shown.
Fig. 14, 15, and 16 illustrate exemplary auxiliary content delivery flows that may be implemented using the exemplary auxiliary content server 175 of fig. 1 and 11. In the illustrated example of fig. 14, the action and/or auxiliary content is encoded by the example action encoder 1150 and included in the primary media content broadcast by the content provider 135. Thus, unlike simple codes, identifiers, and/or indexes that may be used to subsequently retrieve links to and/or retrieve secondary content, the data and/or information embedded in the primary media content represents the actual secondary content. In such a case, by decoding such codes, the auxiliary content presentation device 150 may obtain the auxiliary content and/or actions directly without querying the example auxiliary content server 175 for links and/or auxiliary content. In addition to or in lieu of the code associated with the content provider 135 being embedded in the primary media content by the content encoder 1405, the example action encoder 1150 may embed other data and/or information. As shown in fig. 14, the example auxiliary content presentation device 150 notifies the example action auditor 1145 of the presentation of auxiliary content for counting, audience measurement, and/or auditing purposes.
In the illustrated example of fig. 15, the action encoder 1150 encodes an identifier (e.g., a link) into the primary media content, which may be used to obtain the secondary media content. Thus, the embedded information indirectly represents the auxiliary content. Thus, when the auxiliary content presentation device 150 detects an embedded link associated with auxiliary content, it can utilize the link to obtain the auxiliary content, rather than first querying the auxiliary content server 175 to obtain the link based on the detected, decoded and/or extracted code and/or identifier.
In the illustrated example of fig. 16, the auxiliary content presentation device 150 queries for and obtains auxiliary content in response to only those codes embedded by the content provider 135. Thus, when the auxiliary content presentation device 150 detects a code, it interacts with the action server 1140 to retrieve the auxiliary content and/or a link to the auxiliary content associated with the detected code.
FIG. 17 illustrates exemplary machine-accessible instructions that may be executed to implement the exemplary auxiliary content presentation device 150 of FIGS. 1 and 3. A processor, controller, and/or any other suitable processing device may be used and/or programmed to execute the example machine-accessible instructions of fig. 17. For example, the machine-accessible instructions of FIG. 17 may be embodied as coded instructions stored on a tangible computer-readable medium. As used herein, the term tangible computer-readable medium is expressly defined to include any type of computer-readable storage and to exclude propagating signals. Exemplary tangible computer-readable media include any type of volatile and/or nonvolatile physical memory and/or physical memory device, flash memory, a CD, a DVD, a floppy disk, read-only memory (ROM), random-access memory (RAM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable PROM (EEPROM), an optical storage disk, an optical storage device, a magnetic storage disk, a magnetic storage device, cache memory, and/or any other storage medium that stores information for any duration (e.g., for extended time periods, permanently, briefly, for temporary buffering, and/or for caching) and that is accessible by a processor, a computer, and/or another machine having a processor, such as the exemplary processor platform P100 discussed below in connection with fig. 24. As used herein, the term "non-transitory computer-readable medium" is expressly defined to include any type of computer-readable medium and to exclude propagating signals. Combinations of the above are also included within the scope of computer-readable media. Machine-readable instructions comprise, for example, instructions and data that cause a processor, a computer, and/or a machine having a processor to perform one or more particular processes. Alternatively, some or all of the example machine-accessible instructions of fig. 17 may be implemented using any combination of ASICs, PLDs, FPLDs, FPGAs, discrete logic, hardware, firmware, etc. Additionally, some or all of the example processes of fig. 17 may be implemented manually or as any combination of any of the foregoing techniques, for example, any combination of firmware, software, discrete logic and/or hardware. Further, many other methods of implementing the exemplary operations of FIG. 17 may be employed. For example, the order of execution of the blocks may be changed, and/or one or more of the blocks described may be changed, eliminated, subdivided, or combined. Additionally, any or all of the example machine-accessible instructions of fig. 17 may be carried out sequentially and/or in parallel by, for example, separate processing threads, processors, devices, discrete logic, circuits, and the like.
The example machine-accessible instructions of fig. 17 may be executed to implement three parallel processes, which may be implemented, for example, by separate, substantially asynchronous processes executing within an OS. In the first process, the example machine-accessible instructions of fig. 17 begin with the example decoder 310 detecting and decoding any code present in the audio signals 172 and 173 (block 1705). When a code is detected and/or a trigger is received from the example media server 105 (block 1710), the auxiliary content module 170 sends a request for auxiliary content to the example media server 105 and/or the example auxiliary content server 175 (block 1715). Control then returns to block 1705 to continue detecting and decoding codes.
In the second process, the example machine-accessible instructions of fig. 17 begin when the example auxiliary content module 170 receives a link to auxiliary content and/or an invitation to auxiliary content, for example, from the example media server 105 and/or the example auxiliary content server 175 (block 1720). The auxiliary content module 170 displays the received link and/or invitation via the example user interface module 325 and the display 330 using, for example, the example user interface of fig. 4 (block 1725). Control then returns to block 1720 to await additional auxiliary content information.
In the third process, the example machine-accessible instructions of FIG. 17 begin when a user selects and/or activates an auxiliary content link via the input device 335 (block 1730). The auxiliary content module 170 obtains the auxiliary content associated with the selected and/or activated link from the local cache 340, the example media server 105, and/or the example auxiliary content server 175 and displays it (block 1735). When the user ends the display of the auxiliary content (e.g., using the exemplary close button 510 of fig. 5) (block 1740), the auxiliary content module 170 displays the previously received links and/or invitations via the exemplary user interface module 325 and the display 330 using, for example, the exemplary user interface of fig. 4 (block 1745). Control then returns to block 1730 to await selection and/or activation of another link.
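A compressed sketch of these three substantially asynchronous processes, using Python threads, is given below. The `device` object is a hypothetical container for the decoder 310, the user interface module 325, the media storage 340, and the server connections; none of its attribute names come from the disclosure.

```python
# Sketch of the three parallel processes of FIG. 17; `device` is a
# hypothetical object exposing the components described in the text.
import threading
import queue

def detection_loop(device, link_queue):
    # First process (blocks 1705-1715): detect codes and request content.
    while True:
        code = device.decoder.decode(device.audio_input.read())
        if code is not None:
            link_queue.put(device.request_secondary_content(code))

def invitation_loop(device, link_queue):
    # Second process (blocks 1720-1725): display received links/invitations.
    while True:
        device.ui.display_invitation(link_queue.get())

def selection_loop(device):
    # Third process (blocks 1730-1745): fetch and display selected content.
    while True:
        link = device.ui.wait_for_selection()
        device.ui.display(device.fetch_cached_or_remote(link))

def start(device):
    q = queue.Queue()
    for target, args in ((detection_loop, (device, q)),
                         (invitation_loop, (device, q)),
                         (selection_loop, (device,))):
        threading.Thread(target=target, args=args, daemon=True).start()
```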
Audio codes and watermarks
An exemplary encoding and decoding system 1800 is shown in fig. 18. For example, the exemplary system 1800 may be a television audience measurement system, which will serve as context for further describing the encoding and decoding processes described herein. The exemplary system 1800 of fig. 18 includes an encoder 1802 that adds a code 1803 to an audio signal 1804 to produce an encoded audio signal. The code 1803 may represent any selected information. For example, in a media monitoring context, the code 1803 may represent and/or identify broadcast primary media content, such as a television broadcast, a radio broadcast, or the like. Additionally, the code 1803 may include timing information indicating the time at which the code 1803 was inserted into the audio or the media broadcast time. Alternatively, the code may include control information for controlling the behavior of one or more target devices, as described below. It should be appreciated that the example encoder 1802 of fig. 18 and 19 and/or the example encoder 3802 of fig. 38 and 39 may be used to implement the example content provider 135 of fig. 1 and/or the example action encoder 1150 of fig. 11. Other exemplary encoders that may be used to implement the exemplary content provider 135 are described in U.S. patent application Ser. No. 12/604,176, entitled "Methods and Apparatus to Extract Data Encoded in Media Content", filed on October 22, 2009.
The audio signal 1804 can be any form of audio including, for example, speech, music, noise, commercial audio, audio associated with a television program, live performances, and the like. In the example of fig. 18, the encoder 1802 passes the encoded audio signal to the transmitter 1806. The transmitter 1806 transmits the encoded audio signal along with any video signal 1808 associated with the encoded audio signal. Although the encoded audio signal may have an associated video signal 1808, the encoded audio signal need not have any associated video.
Although a single transmitter 1806 is shown on the transmit side of the exemplary system 1800 shown in fig. 18, the transmit side may be much more complex and may include multiple levels in a distribution chain through which the audio signal 1804 may be communicated. For example, the audio signal 1804 may be generated at a national network and passed to a local network for local distribution. Thus, although the encoder 1802 is shown in a transmit lineup before the transmitter 1806, one or more encoders may be disposed throughout the distribution chain of the audio signal 1804. Thus, the audio signal 1804 may be encoded at multiple levels, and the audio signal 1804 may include embedded code associated with those multiple levels. Further details regarding the encoding and exemplary encoders are provided below.
The transmitter 1806 may include one or more of a radio frequency (RF) transmitter that distributes the encoded audio signal through free space propagation (e.g., via a terrestrial or satellite communication link) or a transmitter used to distribute the encoded audio signal over cable, fiber optics, or the like. In some examples, the transmitter 1806 may be used to broadcast the encoded audio signal throughout a wide geographic area. In other cases, the transmitter 1806 may distribute the encoded audio signal within a limited geographic area. The transmission may include up-converting the encoded audio signal to a radio frequency to enable propagation of the signal. Alternatively, the transmission may include distributing the encoded audio signal in the form of digital bits or packets that may be transmitted over one or more networks, such as the Internet, a wide area network, or a local area network, as described above in connection with fig. 1. Thus, the encoded audio signal may be carried by a carrier signal, by packets of information, or by any suitable technique for distributing audio signals.
When the receiver 1810 receives the encoded audio signal (in a media monitoring context, the receiver 1810 may be located at a statistically selected metering site 1812), the audio signal portion of the received program signal is processed to recover the code, even though the presence of the code is imperceptible (or substantially imperceptible) to a listener when the encoded audio signal is presented by the speaker 1814 of the receiver 1810. To this end, the decoder 1816 is connected either to an audio output 1818 available at the receiver 1810 or to a microphone 1820 located in the vicinity of the speaker 1814 through which the audio is reproduced. The received audio signal may be in mono or stereo format. Further details regarding decoding and exemplary decoders are provided below. It is to be appreciated that the example decoder 1816 and the example microphone 1820 of fig. 18 and 22 and/or the example microphone 3820 of fig. 38 may be used to implement the example decoder 310 and the example audio input interface 305 of fig. 3, respectively, and/or the example auxiliary content trigger 180 of fig. 1. Other exemplary decoders that may be used to implement the exemplary decoder 310 are described in U.S. patent application Ser. No. 12/604,176, entitled "Methods and Apparatus to Extract Data Encoded in Media Content", filed on October 22, 2009.
Audio encoding
As described above, the encoder 1802 inserts one or more inaudible (or substantially inaudible) codes into the audio 1804 to create encoded audio. An exemplary manner of implementing the exemplary encoder 1802 of fig. 18 is shown in fig. 19. In some implementations, the example encoder 1802 of fig. 19 includes a sampler 1902 that receives the audio 1804. The sampler 1902 is connected to a masking evaluator 1904, which evaluates the ability of the sampled audio to conceal codes. The code 1803 is provided to a code frequency selector 1906, which determines the audio code frequencies that will represent the code 1803 to be inserted into the audio. The code frequency selector 1906 may include code-to-symbol conversion and/or any suitable detection or error-correction coding. An indication of the specified code frequencies representing the code 1803 is passed to the masking evaluator 1904 so that the masking evaluator 1904 knows the frequencies for which masking by the audio 1804 should be determined. In addition, the indication of the code frequencies is supplied to a code synthesizer 1908, which generates sine wave signals having the frequencies specified by the code frequency selector 1906. A combiner 1910 receives both the synthesized code frequencies from the code synthesizer 1908 and the audio provided to the sampler 1902 and combines them to produce the encoded audio.
In some examples where the audio 1804 is provided to the encoder 1802 in analog form, the sampler 1902 is implemented using an analog-to-digital (A/D) converter or any other suitable digitizer. The sampler 1902 may sample the audio 1804 at, for example, 48,000 hertz or any other sampling rate suitable for sampling the audio 1804 while satisfying the Nyquist criterion. For example, if the audio 1804 is band-limited to 15,000 hertz, the sampler 1902 may operate at 30,000 hertz. Each sample from the sampler 1902 may be represented by a string of digital bits, where the number of bits in the string indicates the precision with which the sampling is performed. For example, the sampler 1902 may produce 8-bit, 16-bit, 24-bit, or 32-bit samples.
In addition to sampling the audio 1804, the example sampler 1902 accumulates multiple samples (i.e., an audio block) to be processed together. For example, the example sampler 1902 accumulates 512-sample audio blocks and passes them as a unit to the masking evaluator 1904. Alternatively, in some examples, the masking evaluator 1904 may include an accumulator in which a plurality of samples (e.g., 512) are accumulated in a buffer before being processed.
The masking evaluator 1904 receives or accumulates the samples (e.g., 512 samples) and determines the ability of the accumulated samples to hide code frequencies from human hearing. That is, the masking evaluator 1904 determines whether code frequencies can be concealed within the audio represented by the accumulated samples by evaluating each critical band of the audio as a whole to determine its energy, determining the noise-like or tonal properties of each critical band, and determining the overall ability of the critical bands to mask the code frequencies. The critical bands, which were determined by experimental studies of human hearing, may vary in width from single-frequency bands at the lower end of the spectrum to bands containing ten or more adjacent frequencies at the upper end of the audible spectrum. If the masking evaluator 1904 determines that the code frequencies can be hidden in the audio 1804, the masking evaluator 1904 indicates the amplitude levels at which the code frequencies can be inserted into the audio 1804 while remaining hidden, and provides the amplitude information to the code synthesizer 1908.
In some examples, the masking evaluator 1904 performs the masking evaluation by determining the maximum change in energy E_b, or masking energy level, that can occur at any critical frequency band without the change being audible to a human. The masking evaluation carried out by the masking evaluator 1904 may be performed as outlined in the Moving Picture Experts Group - Advanced Audio Coding (MPEG-AAC) audio compression standard ISO/IEC 13818-7:1997. The acoustic energy in each critical band affects the masking energy of its neighbors, and algorithms for calculating the masking effect are described in the ISO/IEC 13818-7:1997 standards document. These analyses are used to determine, for each audio block, the masking contribution due to tonality (e.g., how much the evaluated audio resembles a tone) and the masking contribution due to noise-like features (i.e., how much the evaluated audio resembles noise). Further analysis may evaluate temporal masking, which extends the masking ability of the audio over a short time, typically 50 to 100 milliseconds. The analyses performed by the masking evaluator 1904 provide a masking model that determines, on a per-critical-band basis, the amplitudes at which code frequencies can be added to the audio 1804 without producing noticeable audio degradation (e.g., while remaining inaudible).
In some examples, the code frequency selector 1906 is implemented using a look-up table that associates input codes 1803 with states, where each state is represented by a plurality of code frequencies that are emphasized in the encoded audio signal. For example, the code frequency selector 1906 may include information that associates symbols or data states with groups of code frequencies that redundantly represent the data states. The number of states selected for use may be based on the type of input code. For example, an input code representing two bits may be converted into one of four (i.e., 2^2) symbols or states, each represented by a group of code frequencies. In other examples, an input code representing four bits of information is represented by one of 16 (i.e., 2^4) symbols or states. When converting the code 1803 into one or more symbols or states, additional encoding may be used to embed error correction. Additionally, in some examples, more than one code may be embedded in the audio 1804.
An exemplary graph illustrating a code frequency configuration is shown in fig. 20A at reference numeral 2000. The graph includes frequency indices with values ranging from 360 to 1,367. These frequency indices correspond to the frequencies of sinusoids embedded in the audio signal when viewed in the frequency domain through a discrete Fourier transform (DFT) of a block of 18,432 samples. The reason for referring to frequency indices instead of actual frequencies is that the frequency to which an index corresponds varies with the sampling rate used within the encoder 1802 and the number of samples processed by the decoder 1816: the spacing between adjacent indices equals the sampling rate divided by the DFT block length. For example, at a sampling rate of 48,000 hertz, the interval between the indices shown in the graph 2000 of FIG. 20A is 2.6 hertz, so the frequency index 360 corresponds to 936 hertz (2.6 hertz × 360). Of course, other sampling rates and frequency indices may be selected. For example, one or more frequency index ranges may be selected and/or used to avoid interference with frequencies used to carry other codes and/or watermarks. Moreover, the frequency ranges selected and/or used need not be contiguous. In some examples, frequencies in the ranges of 0.8 kilohertz (kHz) to 1.03 kHz and 2.9 kHz to 4.6 kHz are used. In other examples, frequencies in the ranges of 0.75 kHz to 1.03 kHz and 2.9 kHz to 4.4 kHz are used.
As shown in FIG. 20A, the graph 2000 includes a top row 2002 that lists 144 different states or symbols in columns, of which the graph 2000 shows the first three states and the last state. The states are selected to represent a code or a portion of a code. For clarity, the states between the third state and the last state are indicated by a dashed box. Each state occupies a respective column in the graph 2000. For example, state S1 occupies the column indicated by reference numeral 2004. Each column includes a plurality of frequency indices representing frequencies in each of the seven different code bands indicated in the left column 2006 of the graph 2000. For example, as shown in column 2004, state S1 is represented by frequency indices 360, 504, 648, 792, 936, 1080, and 1224. To send one of the 144 states, the code indices in the column of the selected state are emphasized within a group of 18,432 samples. Thus, to send state S1, indices 360, 504, 648, 792, 936, 1080, and 1224 are emphasized. In some example encoders 1802, the indices of only one state are emphasized at any given time.
As shown in fig. 20A, each code band includes sequentially numbered frequency indices, with respective frequency indices corresponding to respective states. That is, code band 0 includes frequency indices 360 through 503, each corresponding to one of the 144 different states/symbols shown in graph 2000. In addition, adjacent code bands in the system are separated by one frequency index. For example, code band 0 ranges from index 360 to index 503, and the adjacent code band 1 ranges from index 504 to index 647. Thus, code band 0 is spaced apart from the adjacent code band 1 by one frequency index. Advantageously, the code frequencies shown in fig. 20A are close to one another in frequency and thus are affected in relatively the same manner by multipath interference. In addition, the high level of redundancy in graph 2000 increases the ability to recover the code.
Thus, if the code frequency selector 1906 operates according to the graph 2000 of fig. 20A, then when encoding or mapping a code input to the code frequency selector 1906 to the state S1, the code frequency selector 1906 indicates to the masking evaluator 1904 and the code synthesizer 1908 that the frequency indices 360, 504, 648, 792, 936, 1080, and 1224 should be emphasized in the encoded signal. The code synthesizer 1908 therefore produces sinusoids having frequencies corresponding to the frequency indices 360, 504, 648, 792, 936, 1080, and 1224, and those sinusoids are generated with the amplitudes specified by the masking evaluator 1904 so that they can be inserted into the audio 1804 while remaining inaudible (or substantially inaudible). As a further example, when the input code indicates that state S144 should be encoded into the audio 1804, the code frequency selector 1906 identifies the frequency indices 503, 647, 791, 935, 1079, 1223, and 1367 to the masking evaluator 1904 and the code synthesizer 1908 so that the corresponding sinusoids may be generated with the appropriate amplitudes.
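Because the code bands of graph 2000 are contiguous and each is 144 indices wide, the seven indices for any state can be computed rather than tabulated. The following Python sketch assumes only the band layout described above.

```python
# Frequency indices for state Sn under the layout of graph 2000:
# seven contiguous code bands, 144 states per band, starting at index 360.
def code_indices(state, first_index=360, band_width=144, num_bands=7):
    """Return the frequency indices (one per code band) representing S<state>."""
    return [first_index + band * band_width + (state - 1)
            for band in range(num_bands)]

assert code_indices(1) == [360, 504, 648, 792, 936, 1080, 1224]     # state S1
assert code_indices(144) == [503, 647, 791, 935, 1079, 1223, 1367]  # state S144
```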
The code used to select states in graph 2000 to convey information may include synchronization blocks and data blocks. For example, a message encoded by a system having these 144 different states may include a synchronization block followed by a plurality of data blocks. Each of the synchronization block and the data blocks is encoded into 18,432 samples and is represented by emphasizing the indices of one of the states shown in a column of graph 2000.
For example, a sync block is represented by emphasizing one of 16 states selected to represent sync information. That is, the sync block indicates the beginning of one of 16 different message types. For example, when considering media monitoring, a network television station may use a first state to represent synchronization and a local affiliate television station may use a second state to represent synchronization. Thus, at the beginning of the transmission, one of the 16 different states is selected to represent synchronization and the state is sent by emphasizing the index associated with that state. The synchronization data is followed by information payload data.
Regarding how the 16 states representing synchronization information are distributed among all 144 states, in some examples the 16 states are selected such that the frequency range spanned by the first code frequencies representing the 16 states within a code band is much greater than the frequency separation between that range and the adjacent range of second code frequencies that also represent the 16 states in the next code band. For example, the 16 states representing the synchronization information may be spaced every 9 states, such that the states S1, S10, S19, S28, S37, S46, S54, S63, S72, S81, S90, S99, S108, S117, S126, and S135 represent the possible states that the synchronization information may occupy. Within a code band, this corresponds to a width of 135 frequency indices. The frequency separation between the largest possible synchronization state (S135) of code band 0 and the smallest possible synchronization state (S1) of code band 1 is 10 frequency indices. Thus, the range of each set of frequency indices representing synchronization information is much larger (e.g., 135 indices) than the separation between adjacent sets (e.g., 10 indices).
In this example, the remaining 128 of the 144 states that are not used to represent synchronization may be used to transmit information data. The data may be represented by any suitable number of states representing the desired number of bits. For example, 16 states may be used to represent 4 bits of information per state, or 128 states may be used to represent 7 bits of information per state. In some examples, the states selected to represent data are chosen such that the frequency range spanned by the first code frequencies representing the data states is greater than the frequency separation between that range and the adjacent range of second code frequencies that also represent the data states. Thus, the states available to represent data include at least one low-numbered state (e.g., S2) and at least one high-numbered state (e.g., S144). This ensures that the index ranges usable to represent data occupy a wide bandwidth within their respective code bands while the spacing between adjacent ranges remains narrow.
The encoder 1802 may repeat the encoding process and thereby encode multiple audio blocks with a particular code. That is, the selected code frequencies may be inserted into multiple consecutive 512-sample audio blocks. In some examples, the code frequencies representing a symbol may be repeated in 36 consecutive audio blocks of 512 samples, or 72 overlapping blocks of 256 samples. Thus, on the receive side, when 18,432 samples are processed by a Fourier transform such as a DFT, the emphasized code frequencies will be visible in the resulting spectrum.
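As a quick check of the arithmetic above, 36 blocks of 512 samples (or, counted in samples, 72 hops of 256) span exactly one DFT block at the receiver:

```latex
36 \times 512 = 72 \times 256 = 18\,432 \ \text{samples}
```

This alignment is why the emphasized code frequencies appear cleanly in the 18,432-sample spectrum.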
Fig. 20B shows an exemplary alternative graph 2030 that may be used by the code frequency selector 1906, where the graph 2030 lists four states in a first row 2032, each state including a respective frequency index listed in seven code bands 2034. These frequency indices correspond to the frequencies of sinusoids embedded in the audio signal when viewed in the frequency domain through a Fourier transform, such as a DFT, of a block of 512 samples. As an example, when the state S1 is transmitted, the code frequency selector 1906 indicates that the frequency indices 10, 14, 18, 22, 26, 30, and 34 are to be used. As described above, indications of these frequencies are sent to the masking evaluator 1904 and the code synthesizer 1908 so that sine waves of appropriate amplitudes corresponding to the indicated frequency indices can be generated and added to the audio 1804. In an exemplary encoder 1802 operating according to the graph 2030, the code frequencies corresponding to the desired symbol are encoded into 19 overlapping blocks of 256 samples to enable detection.
As with graph 2000 of fig. 20A, graph 2030 indicates that adjacent code bands are separated by the same frequency distance that separates the frequency indices representing adjacent symbols, namely one frequency index. For example, code band 0 includes the frequency index 13, which is one frequency index away from the frequency index 14 of code band 1 that represents state S1.
A chart 2060 of fig. 20C shows another example that may be used by the code frequency selector 1906, where the chart 2060 lists 24 states in a first row 2062, each state including a respective frequency index listed in seven code bands 2064. These frequency indices correspond to the frequencies of sinusoids embedded in the audio signal when viewed in the frequency domain through a Fourier transform, such as a DFT, of a block of 3,072 samples. As an example, when the state S1 is sent, the code frequency selector 1906 indicates that the frequency indices 60, 84, 108, 132, 156, 180, and 204 are to be used. As described above, indications of these frequencies are sent to the masking evaluator 1904 and the code synthesizer 1908 so that sine waves of appropriate amplitudes corresponding to the indicated frequency indices can be generated and added to the audio 1804.
In an exemplary encoder 1802 operating according to the chart 2060 of fig. 20C, the code frequencies corresponding to the desired symbol are encoded into 182 overlapping blocks of 256 samples. In such an implementation, the first 16 columns may be used as data symbols and the 17th column may be used as a synchronization symbol. The remaining seven columns may be used for special data such as video on demand. For example, columns 18 through 23 may be used as auxiliary data symbols, and these are decoded only if an auxiliary synchronization symbol is present in column 24.
As with graphs 2000 and 2030 described above, chart 2060 indicates that adjacent code bands are separated by the same frequency distance that separates the frequency indices representing adjacent symbols, namely one frequency index. For example, code band 0 includes the frequency index 83, which is one frequency index away from the frequency index 84 of code band 1 that represents state S1.
Returning now to fig. 19, as described above, the code synthesizer 1908 receives from the code frequency selector 1906 indications of the frequency indices that need to be emphasized to produce an encoded audio signal conveying the input code. In response, the code synthesizer 1908 generates a plurality of sine waves (or a composite signal comprising a plurality of sine waves) having the identified frequencies. The synthesis may result in sine wave signals or in digital data representing sine wave signals. In some examples, the code synthesizer 1908 generates the code frequencies with the amplitudes specified by the masking evaluator 1904. In other examples, the code synthesizer 1908 generates the code frequencies with fixed amplitudes, and these amplitudes may be adjusted by one or more gain blocks (not shown) located within the code synthesizer 1908 or disposed between the code synthesizer 1908 and the combiner 1910.
Although the example code synthesizer 1908 that generates a sine wave or data representing a sine wave is described above, other example implementations of a code synthesizer are possible. For example, another exemplary code synthesizer 1908 may output frequency domain coefficients for adjusting the amplitude of particular frequencies of audio provided to combiner 1910 without generating a sine wave. In this way, the frequency spectrum of the audio can be adjusted to include the necessary sinusoids.
A combiner 1910 receives both the output of code synthesizer 1908 and audio 1804 and combines them to form encoded audio. The output of the code synthesizer 1908 and the audio 1804 may be combined in analog or digital form by a combiner 1910. If the combiner 1910 performs digital combining, the output of the code synthesizer 1908 may be combined with the output of the sampler 1902 and not with the audio 1804 that is input to the sampler 1902. For example, an audio block in digital form may be combined in digital form with a sine wave. Alternatively, the combination may be performed in the frequency domain, where the frequency coefficients of the audio are adjusted according to the coefficients representing the sine wave. As another alternative, the sine wave and audio may be combined in analog form. The encoded audio may be output from the combiner 1910 in analog or digital form. If the output of the combiner 1910 is digital, it can then be converted to analog before being connected to the transmitter 1806.
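Pulling the preceding stages together, a minimal time-domain sketch of the synthesize-and-combine step might look as follows. The per-index amplitudes are assumed to come from the masking evaluation; this is an illustrative sketch under those assumptions, not the encoder's actual implementation.

```python
# Add one sinusoid per code frequency index to a block of audio samples.
import math

def synthesize_and_combine(audio_block, freq_indices, amplitudes,
                           sample_rate=48000, dft_length=18432):
    """Return a copy of `audio_block` with the code sinusoids mixed in."""
    out = list(audio_block)
    for idx, amp in zip(freq_indices, amplitudes):
        freq_hz = idx * sample_rate / dft_length   # index -> hertz (~2.6 Hz/index)
        for n in range(len(out)):
            out[n] += amp * math.sin(2.0 * math.pi * freq_hz * n / sample_rate)
    return out
```

A real encoder would also keep the sinusoids phase-continuous across successive blocks; that bookkeeping is omitted here.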
Although an example manner of implementing the example encoder 1802 of fig. 18 has been illustrated in fig. 19, one or more of the interfaces, data structures, elements, processes and/or devices illustrated in fig. 19 may be combined, divided, rearranged, omitted, eliminated and/or implemented in any other way. Further, the example sampler 1902, the example masking evaluator 1904, the example code frequency selector 1906, the example code synthesizer 1908, the example combiner 1910, and/or, more generally, the example encoder 1802 of fig. 19 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example sampler 1902, the example masking evaluator 1904, the example code frequency selector 1906, the example code synthesizer 1908, the example combiner 1910, and/or, more generally, the example encoder 1802 can be implemented by one or more circuits, programmable processors, ASICs, PLDs, FPLDs, FPGAs, and/or the like. If any apparatus claim of this patent is read to cover a purely software and/or firmware implementation of one or more of these elements, at least one of the example sampler 1902, the example masking evaluator 1904, the example code frequency selector 1906, the example code synthesizer 1908, the example combiner 1910, and/or, more generally, the example encoder 1802 is hereby expressly defined to include a tangible article of manufacture, such as the tangible computer-readable media storing firmware and/or software described above in connection with fig. 17. Further, the example encoder 1802 may include interfaces, data structures, elements, processes and/or devices instead of, or in addition to, those illustrated in fig. 19, and/or may include more than one of any or all of the illustrated interfaces, data structures, elements, processes and/or devices.
Fig. 21 illustrates example machine-accessible instructions 2100 that may be executed to implement the example encoder 1802 of fig. 18 and 19. A processor, controller and/or any other suitable processing device may be used and/or programmed to execute the example machine-accessible instructions of fig. 21. For example, the machine-accessible instructions of fig. 21 may be embodied as coded instructions stored on any combination of tangible articles of manufacture, such as the tangible computer-readable media discussed above in connection with fig. 17. Machine-readable instructions comprise, for example, instructions and data that cause a processor, a computer, and/or a machine having a processor (e.g., the example processor platform P100 discussed below in connection with fig. 24) to perform one or more particular processes. Alternatively, some or all of the example machine-accessible instructions of fig. 21 may be implemented using any combination of ASICs, PLDs, FPLDs, FPGAs, discrete logic, hardware, firmware, etc. Additionally, some or all of the example processes of fig. 21 may be implemented manually or as any combination of any of the foregoing techniques, for example, any combination of firmware, software, discrete logic and/or hardware. Further, many other methods of implementing the exemplary operations of fig. 21 may be employed. For example, the order of execution of the blocks may be changed, and/or one or more of the blocks described may be changed, eliminated, subdivided, or combined. Additionally, any or all of the example machine-accessible instructions of fig. 21 may be carried out sequentially and/or in parallel by, for example, separate processing threads, processors, devices, discrete logic, circuits, and the like.
The example process 2100 of fig. 21 begins when a code to be included in the audio is obtained (block 2102). The code may be obtained via a data file, a register, an input port, a network connection, or any other suitable technique. After obtaining the code (block 2102), the example process 2100 samples the audio in which the code is to be embedded (block 2104). The sampling may be performed at 48,000 hertz or any other suitable frequency. Next, the example process 2100 assembles the audio samples into a block of audio samples (block 2106). The sample block may comprise, for example, 512 audio samples. In some examples, a block of samples may include both old samples (e.g., samples already used in a prior encoding iteration) and new samples (e.g., samples not yet used in an encoding iteration). For example, a block of 512 audio samples may include 256 old samples and 256 new samples, and the 256 new samples from one iteration may be used as the 256 old samples of the next iteration of the example process 2100.
Next, the example process 2100 determines the code frequencies (block 2108) that will be used to embed the code (obtained at block 2102) into the audio block (obtained at block 2106). This is an encoding process that converts the code or code bits into symbols represented by frequency components. As described above, the example process 2100 may use one or more look-up tables to convert the code into symbols representing the code, where the symbols are redundantly represented by code frequencies across the audio spectrum. As described above, seven frequencies may be used to redundantly represent a selected symbol in an audio block. The selection of the symbols representing the code may also take into account, for example, error coding and the number of the block being processed.
After the audio block that is to carry the code is obtained (block 2106) and the code frequencies that are to represent the code are determined (block 2108), the process 2100 computes the ability of the audio block to mask the selected code frequencies (block 2110). As described above, the masking evaluation may include converting the audio block to the frequency domain and considering the tonal or noise-like characteristics of the audio block as well as the amplitudes at various frequencies in the block. Alternatively, the evaluation may be performed in the time domain. The masking evaluation may also take into account the audio in a previous audio block. As noted above, the masking evaluation may be conducted in accordance with, for example, the MPEG-AAC audio compression standard ISO/IEC 13818-7:1997. The result of the masking evaluation is a determination of the amplitude or energy at which the code frequencies can be added to the audio block while remaining inaudible or substantially inaudible to human hearing.
After determining the amplitude or energy at which the code frequencies should be generated (block 2110), the example process 2100 synthesizes one or more sinusoids having the code frequencies (block 2112). The synthesis may result in actual sine waves or in digital data equivalently representing sine waves. In some examples, the sinusoids are synthesized with the amplitudes specified by the masking evaluation. Alternatively, the code frequencies may be synthesized with fixed amplitudes that are then adjusted after synthesis.
Next, the example process 2100 combines the synthesized code frequencies with the audio block (block 2114). The combining may be performed by adding data representing the audio blocks and data representing the synthesized sinusoid, or may be performed in any other suitable manner.
In another example, the code frequency synthesis (block 2112) and combining (block 2114) may be performed in the frequency domain, where the frequency coefficients representing the audio frequency block may be adjusted according to the frequency domain coefficients of the synthesized sinusoid.
As described above, the code frequencies are redundantly encoded as successive audio blocks. In some examples, a particular code frequency group is encoded as 36 consecutive blocks. Thus, the example process 2100 monitors whether it has completed the requisite number of iterations (block 2116) (e.g., the process 2100 determines whether the example process 2100 has repeated 36 times to redundantly encode the code frequency). If the example process 2100 does not complete the necessary repetitions (block 2116), the example process 2100 samples the audio (block 2104), analyzes masking characteristics of the audio (block 2110), synthesizes code frequencies (block 2112), and combines the code frequencies with the newly obtained audio block (block 2114), thereby encoding another audio block having code frequencies.
However, when the iterations necessary to redundantly encode the code frequencies into audio blocks have been completed (block 2116), the example process 2100 obtains the next code to be included in the audio (block 2102) and the example process 2100 repeats. Thus, the example process 2100 encodes a first code into a predetermined number of audio blocks before selecting the next code to encode into the following predetermined number of audio blocks. However, a code may not always be available to embed in the audio. In such cases, portions of the example process 2100 may be bypassed. Alternatively, if no code to include is obtained (block 2102), no code frequencies are synthesized (block 2112) and, thus, no code frequencies alter the audio blocks. In this manner, the example process 2100 may continue to operate while leaving the audio blocks unmodified whenever there is no code to include in the audio.
Audio decoding
In general, the decoder 1816 detects the code signal that was inserted into the audio 1804 at the encoder 1802 to form the encoded audio. That is, the decoder 1816 looks for a pattern of emphasis in the code frequencies of the audio it processes. Once the decoder 1816 has determined which code frequencies have been emphasized, the decoder 1816 determines, based on the emphasized code frequencies, the symbols present in the encoded audio. The decoder 1816 may record the symbols, or may decode the symbols into the codes that were previously provided to the encoder 1802 for insertion into the audio.
Fig. 22 illustrates an example manner of decoding a Nielsen code and/or implementing the example decoder 1816 of fig. 18, the example decoder 310 of fig. 3, and/or the example auxiliary content trigger 180 of fig. 1 and 2. Although the decoder shown in fig. 22 may be used to implement any of the decoders 1816, 310, and 180, for ease of discussion the decoder of fig. 22 is referred to as the decoder 1816. As shown in fig. 22, the exemplary decoder 1816 includes a sampler 2202, which is provided with encoded audio in analog form and which may be implemented using an analog-to-digital (A/D) converter or any other suitable technique. As shown in fig. 18, the encoded audio may be provided to a receiver 1810 by a wired or wireless connection. The sampler 2202 samples the encoded audio at a sampling frequency of, for example, 48,000 Hz. Of course, a lower sampling frequency may be advantageously selected to reduce the computational load during decoding. For example, at a sampling frequency of 8 kHz, the Nyquist frequency is 4 kHz, so the entire embedded code signal is preserved because its spectral frequencies lie below the Nyquist frequency. The DFT block length of 18,432 samples at the 48 kHz sampling rate is reduced to 3,072 samples at the 8 kHz sampling rate. Even with this modified DFT block size, however, the code frequency indices remain the same and range from 360 to 1367.
The samples from the sampler 2202 are provided to a time-to-frequency domain converter 2204. The time-to-frequency domain converter 2204 may be implemented using a DFT or any other suitable technique for converting time-based information into frequency-based information. In some examples, the time-to-frequency domain converter 2204 may be implemented using a sliding DFT in which a spectrum is calculated each time a new sample is provided to the example time-to-frequency domain converter 2204. In some examples, the time-to-frequency domain converter 2204 uses 18,432 samples of the encoded audio and determines the frequency spectrum therefrom. The resolution of the spectrum produced by the time-to-frequency domain converter 2204 increases with the number of samples used to generate the spectrum. Thus, the number of samples processed by the time-to-frequency domain converter 2204 should match the resolution used to select the frequency indices in the graphs of fig. 20A, 20B, or 20C.
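As a rough illustration of such a sliding DFT restricted to the code-frequency bins, consider the following sketch. It uses the textbook sliding-DFT recurrence rather than any particular implementation from this disclosure, and N = 18432 matches the block length discussed above.

```python
import numpy as np
from collections import deque

class SlidingDFT:
    """Maintains the DFT bins of interest over the most recent N samples,
    updated one sample at a time (time-to-frequency domain converter 2204)."""
    def __init__(self, bins, n=18432):
        self.bins = np.asarray(bins)
        self.twiddle = np.exp(2j * np.pi * self.bins / n)
        self.spectrum = np.zeros(len(self.bins), dtype=complex)
        self.window = deque([0.0] * n, maxlen=n)

    def push(self, sample):
        oldest = self.window[0]  # the sample about to fall out of the window
        self.window.append(sample)
        # classic sliding-DFT update: add the new sample, subtract the
        # oldest, then rotate every retained bin by one sample period
        self.spectrum = (self.spectrum + sample - oldest) * self.twiddle
        return self.spectrum
```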
The spectrum produced by the time-to-frequency domain converter 2204 is passed to a code frequency monitor 2206, which monitors all frequencies or spectral lines corresponding to the frequency indices that can potentially carry a code inserted by the exemplary encoder 1802. For example, if the example encoder 1802 transmits data based on the graph of fig. 20A, the code frequency monitor 2206 monitors the frequencies corresponding to the indices 360 through 1367.
Monitoring the code frequencies includes evaluating the spectral energy at each code frequency. To this end, the code frequency monitor 2206 normalizes the energies at the frequency indices in each row of the graph of fig. 20A to the maximum energy in that row. For example, considering the frequency indices corresponding to code band 0 of the graph of fig. 20A, if the frequency corresponding to frequency index 360 has the maximum energy among the frequencies in the row representing code band 0 (e.g., frequency indices 361, 362, ..., 503), the energy at each of the other frequencies corresponding to indices in code band 0 is divided by the energy of the frequency corresponding to frequency index 360. Thus, the normalized energy for frequency index 360 will be 1, and the normalized energies of all remaining frequencies corresponding to frequency indices in code band 0 will be less than 1. This normalization process is repeated for each row of the graph 2000. That is, each code band in the graph of fig. 20A will include one frequency with energy normalized to 1, while all remaining energies in that code band are normalized to values less than 1.
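A compact sketch of this per-band normalization, assuming the spectral energies have been collected into a mapping from frequency index to energy, might read:

```python
def normalize_code_bands(energies, code_bands):
    """Normalize the energy at each frequency index in a code band to the
    maximum energy in that band (code frequency monitor 2206).
    `code_bands` is a list of index lists, one per band (row of fig. 20A)."""
    normalized = {}
    for band in code_bands:
        peak = max(energies[i] for i in band)
        for i in band:
            normalized[i] = energies[i] / peak if peak > 0 else 0.0
    return normalized
```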
Based on the normalized energies produced by the code frequency monitor 2206, the symbol determiner 2208 determines which symbol is present in the encoded audio. In some examples, the symbol determiner 2208 sums all of the normalized energies corresponding to each state. That is, the symbol determiner 2208 generates 144 sums, each corresponding to one column or state of the graph 2000. The column or state with the largest sum of normalized energies is determined to be the encoded symbol. The symbol determiner 2208 may use a look-up table similar to that of fig. 20A to map the emphasized frequencies to their corresponding symbols. For example, if state S1 is encoded into the audio, the normalized energies will generally have a value of 1 at each frequency index representing state S1, while all other frequencies in each code band that do not correspond to state S1 will generally have values less than 1. Although this is generally true, not every frequency index corresponding to state S1 will have a value of 1. Thus, the sum of the normalized energies is calculated for each state. In general, the normalized energies corresponding to the frequency indices representing state S1 will have a greater sum than the energies corresponding to the frequency indices representing the other states. Under ideal conditions, the code frequencies of state S1 would produce a normalized score of 7.0 (one for each of the seven code bands). If the sum of the normalized energies corresponding to the frequency indices representing state S1 exceeds a detection threshold of, for example, 4.0, state S1 is determined to be the most likely symbol embedded in the encoded audio. However, if the sum does not exceed the threshold, state S1 cannot be determined to have been encoded, and no state is determined to be the most likely state. Thus, the output of the symbol determiner 2208 is a series of most likely symbols encoded into the audio.
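The scoring performed by the symbol determiner 2208 reduces to summing the normalized energies per candidate state and applying the detection threshold. A minimal sketch, assuming a look-up table mapping each symbol to its seven frequency indices:

```python
def most_likely_symbol(normalized, symbol_to_indices, threshold=4.0):
    """Sum the normalized energies at the frequency indices representing
    each symbol (symbol determiner 2208) and return the best symbol if it
    clears the detection threshold; an ideally received symbol scores 7.0
    (one per code band)."""
    best_symbol, best_sum = None, 0.0
    for symbol, indices in symbol_to_indices.items():
        total = sum(normalized[i] for i in indices)
        if total > best_sum:
            best_symbol, best_sum = symbol, total
    return best_symbol if best_sum > threshold else None
```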
The most likely symbols are processed by the validity checker 2210 to determine whether the received symbols correspond to valid data. That is, the validity checker 2210 determines whether the bits corresponding to the most likely symbols are valid, given the coding scheme used by the code frequency selector 1906 of the encoder 1802 to convert codes into symbols. The output of the validity checker 2210 is a code corresponding to the code provided to the code frequency selector 1906 of fig. 19.
Although an example manner of implementing the example decoder 1816 of fig. 18 is shown in fig. 22, one or more of the interfaces, data structures, elements, processes and/or devices shown in fig. 22 may be combined, divided, rearranged, omitted, eliminated and/or implemented in any other way. Further, the example sampler 2202, the example time-to-frequency domain converter 2204, the example code frequency monitor 2206, the example symbol determiner 2208, the example validity checker 2210, and/or, more generally, the example decoder 1816 of fig. 22 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example sampler 2202, the example time-to-frequency domain converter 2204, the example code frequency monitor 2206, the example symbol determiner 2208, the example validity checker 2210, and/or, more generally, the example decoder 1816 may be implemented by one or more circuits, programmable processors, ASICs, PLDs, FPLDs, and/or FPGAs, among others. When any apparatus claim of this patent covering one or more of these elements is read to encompass a purely software and/or firmware implementation, at least one of the example sampler 2202, the example time-to-frequency domain converter 2204, the example code frequency monitor 2206, the example symbol determiner 2208, the example validity checker 2210, and/or, more generally, the example decoder 1816 is hereby expressly defined to include a tangible article of manufacture, such as the tangible computer-readable media storing firmware and/or software described above in connection with fig. 17. Moreover, the example decoder 1816 may include interfaces, data structures, elements, processes and/or devices instead of, or in addition to, those illustrated in fig. 22, and/or may include more than one of any or all of the illustrated interfaces, data structures, elements, processes and/or devices.
Fig. 23 illustrates example machine-readable instructions 2300 that may be executed to implement the example decoder 1816 of fig. 18 and 22. A processor, controller and/or any other suitable processing device may be used and/or programmed to execute the example machine-readable instructions of fig. 23. For example, the machine-readable instructions of fig. 23 may be implemented as coded instructions stored on any combination of tangible articles of manufacture, such as the tangible computer-readable media discussed above in connection with fig. 17. Machine-readable instructions comprise, for example, instructions and data that cause a processor, a computer, and/or a machine having a processor (e.g., the example processor platform P100 discussed below in connection with fig. 24) to perform one or more particular processes. Alternatively, some or all of the example machine-readable instructions of fig. 23 may be implemented using any combination of ASICs, PLDs, FPLDs, FPGAs, discrete logic, hardware, firmware, and the like. Additionally, some or all of the example processes of fig. 23 may be implemented manually or as any combination of the foregoing techniques, e.g., any combination of firmware, software, discrete logic and/or hardware. In addition, many other methods of implementing the exemplary operations of fig. 23 may be employed. For example, the order of execution of the blocks may be changed, and/or one or more of the blocks described may be changed, eliminated, sub-divided, or combined. Additionally, any or all of the example machine-readable instructions of fig. 23 may be executed sequentially and/or in parallel by, for example, separate processing threads, processors, devices, discrete logic, circuits, and the like.
The example process 2300 of fig. 23 begins with sampling audio (block 2302). The audio may be obtained via an audio sensor, a hardwired connection, an audio file, or any other suitable technique. As described above, the sampling may be performed at 48,000 Hz or any other suitable frequency.
As individual samples are obtained, a sliding time-to-frequency transform is performed on a sample set that includes a number of older samples and the newly added sample obtained at block 2302 (block 2304). In some examples, a sliding DFT is used to process streaming input samples, operating on 18,431 old samples plus one newly added sample. In some examples, using a DFT of 18,432 samples results in a spectrum with a resolution of 2.6 Hz.
After the spectrum is obtained via the time-to-frequency transform (block 2304), the energies of the code frequencies are determined (block 2306). In some examples, the energies may be obtained by taking the magnitude of the result of the time-to-frequency transform (block 2304) at each frequency that may have been emphasized by the audio encoding. To save processing time and minimize memory consumption, only the frequency information corresponding to the code frequencies may be retained and further processed, since the encoded information can only be found at those frequencies. Of course, the example process 2300 may use information other than energy. For example, the example process 2300 may retain and process magnitude and phase information.
Additionally, the number of frequencies processed in the process 2300 may be further reduced by taking into account previously received synchronization symbols. For example, if a particular synchronization symbol is always followed by one of six different symbols, then upon receipt of that synchronization symbol the processing may be reduced to the frequencies of those six symbols.
After determining the energies (block 2306), the example process 2300 normalizes the code frequency energies of each code band based on the maximum energy in that code band (block 2308). That is, the maximum energy of a code frequency in a code band is used as a divisor of itself and of all other energies in that code band. Normalization results in each code band having one frequency component with a normalized energy value of one, while all other normalized energy values in that code band are less than one. Thus, referring to fig. 20A, each row of the graph 2000 will have one entry with a value of one, and all other entries will have values less than one.
Next, the example process 2300 operates on the normalized energy values to determine the most likely symbol based thereon (block 2310). As described above, this determination includes, for example, summing the normalized energy values corresponding to each symbol, thereby producing as many sums as there are symbols (e.g., considering the graph of fig. 20A, there would be 144 sums, each corresponding to one of the 144 symbols). The largest sum is then compared to a threshold (e.g., 4.0), and if the sum exceeds the threshold, the symbol corresponding to the largest sum is determined to be the received symbol. If the largest sum does not exceed the threshold, no symbol is determined to be a received symbol.
After determining the received symbol (block 2310), the example process 2300 determines the code corresponding to the received symbol (block 2312). That is, the example process 2300 reverses the code-to-symbol encoding performed by the example encoding process 2100 (e.g., the encoding performed at block 2108).
After decoding is complete and the code is determined from the symbols (block 2312), the example process 2300 analyzes the validity of the code (block 2314). For example, the received code may be examined to determine whether the code sequence is valid based on the encoding process used for the transmitted code. A valid code is logged and may later be sent to a central processing facility along with a time and date stamp indicating when the code was received. Additionally or alternatively, as described above in connection with fig. 1-17, the valid code may be used to obtain and present secondary content for the primary media content associated with the decoded audio.
Fig. 24 is a schematic diagram of an example processor platform P100 that may be used and/or programmed to implement any of the example apparatus disclosed herein. For example, one or more general purpose processors, processor cores, microcontrollers, etc. may implement processor platform P100.
The processor platform P100 of the example of fig. 24 includes at least one programmable processor P105. The processor P105 executes coded instructions P110 and/or P112 present in main memory of the processor P105 (e.g., within RAM P115 and/or ROM P120). The processor P105 may be any type of processing unit, such as a processor core, a processor, and/or a microcontroller. The processor P105 may execute the example machine-readable instructions of fig. 17, 21, 23, 28, 29, 36, 37, 43, 45, 4-52, and 55, the example operations of fig. 6-10, 30, and 31, the example flows of fig. 14-16, etc. to deliver secondary content associated with primary media content, to encode audio, and/or to decode audio, as described herein.
The processor P105 communicates with main memory (including ROM P120 and/or RAM P115) via a bus P125. The RAM P115 may be implemented by Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), and/or any other type of RAM device, and the ROM may be implemented by flash memory and/or any other desired type of memory device. Access to P115 and P120 may be controlled by a memory controller (not shown). For example, the example memory P115 may be used to implement the example media stores 130, 140 and/or the example databases 1120, 1130, and 1135.
The processor platform P100 also includes an interface circuit P130. Any type of interface, such as an external memory interface, serial port, general purpose input/output, etc., may implement interface circuit P130. One or more input devices P135 and one or more output devices P140 are connected to the interface circuit P130. The input device P135 and the output device P140 may be used to implement any of the example broadcast input interface 205, the example bluetooth interface 220, the example wireless interface 225, the example communication interface 230, the example audio input interface 305, the example display 330, the example input device 335, the example wireless interface 315, the example cellular interface 320, and/or the example bluetooth interface 345.
Fig. 25 illustrates an exemplary manner of implementing the exemplary auxiliary content module 170 of fig. 1 and 3. To retrieve schedules of auxiliary content, the exemplary auxiliary content module 170 of fig. 25 includes a reader 2505. In response to the SID and timestamp t(n) received from the example decoder 310 (fig. 3) and/or from the example media server 105 via, for example, the bluetooth interface 345, the example reader 2505 of fig. 25 interacts with the example auxiliary content server 175 to obtain a schedule of auxiliary content based on the SID and timestamp t(n). The example reader 2505 stores the received auxiliary content schedule in the schedule database 2510 in the example media store 340. Exemplary data structures that may be used by the auxiliary content server 175 to provide an auxiliary content schedule to the reader 2505 and/or used by the reader 2505 to store a received auxiliary content schedule in the schedule database 2510 are described below in connection with fig. 26 and 27.
To identify portions and/or locations within the primary media content, the exemplary auxiliary content module 170 of fig. 25 includes a program clock 2515. When the example reader 2505 receives a timestamp t(n) during the primary media content, the example program clock 2515 of fig. 25 is set (reset) such that its time value approximately corresponds to the received timestamp t(n). Subsequently, the program clock 2515 can be used to identify subsequent portions and/or locations within the primary media content.
To select the auxiliary content to be displayed, the exemplary auxiliary content module 170 of fig. 25 includes a selector 2520. The example selector 2520 of fig. 25 compares the time value provided by the program clock 2515 to the timestamps associated with the various auxiliary content items in the auxiliary content schedule stored in the schedule database 2510 to identify one or more auxiliary content offers for display via, for example, the example user interface of fig. 4. As described below in connection with fig. 26 and 27, each secondary content offer listed in the schedule has an associated start timestamp and an associated end timestamp. When the current time value generated by the program clock 2515 falls within such a range, the exemplary selector 2520 selects the corresponding auxiliary content offer for display.
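A minimal sketch of the program clock and of the selector's window test, with the entry field names assumed from the data structure of fig. 26, might read:

```python
import time

class ProgramClock:
    """Tracks the current position within the primary media content
    (program clock 2515); reset whenever a timestamp t(n) is decoded."""
    def __init__(self):
        self.offset = 0.0
        self.set_at = time.monotonic()

    def reset(self, timestamp):
        self.offset = timestamp
        self.set_at = time.monotonic()

    def now(self):
        return self.offset + (time.monotonic() - self.set_at)

def due_offers(schedule_entries, clock):
    """Select the offers whose [start, end] window contains the current
    program time (selector 2520)."""
    t = clock.now()
    return [e for e in schedule_entries if e["start"] <= t <= e["end"]]
```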
To implement archiving of the auxiliary content and/or of offers of the auxiliary content, the example auxiliary content module 170 of fig. 25 includes an archiver 2525 and an archive 2530. As configured and/or operated by a person via the exemplary user interface module 325, the exemplary archiver 2525 stores particular auxiliary content and/or auxiliary content offers in the exemplary archive 2530 for later retrieval. For example, a person may configure the archiver 2525 such that certain categories of auxiliary content and/or auxiliary content offers are automatically archived, and/or may individually select certain auxiliary content and/or auxiliary content offers to archive. For example, a person may instruct the archiver 2525 to archive all recipe- and cooking-related auxiliary content offers. The person can also interact with the archiver 2525 to query, search, and/or identify auxiliary content in the archive 2530 for presentation. For example, the person may search by type, date, time, etc. of the auxiliary content.
Exemplary machine-readable instructions that may be executed to implement the exemplary auxiliary content module 170 of fig. 25 are described below in conjunction with fig. 28 and 29.
Although an exemplary manner of implementing the exemplary auxiliary content module 170 of fig. 1 and 3 is shown in fig. 25, one or more of the interfaces, data structures, elements, processes and/or devices shown in fig. 25 may be combined, divided, rearranged, omitted, eliminated and/or implemented in any other way. Further, the example reader 2505, the example program clock 2515, the example selector 2520, the example archiver 2525, the example schedule database 2510, the example archive 2530, and/or, more generally, the example auxiliary content module 170 of fig. 25 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example reader 2505, the example program clock 2515, the example selector 2520, the example archiver 2525, the example schedule database 2510, the example archive 2530, and/or, more generally, the example auxiliary content module 170 may be implemented by one or more circuits, programmable processors, ASICs, PLDs, FPLDs, and/or FPGAs, among others. When any apparatus claim of this patent covering one or more of these elements is read to encompass a purely software and/or firmware implementation, at least one of the example reader 2505, the example program clock 2515, the example selector 2520, the example archiver 2525, the example schedule database 2510, the example archive 2530, and/or, more generally, the example auxiliary content module 170 is hereby expressly defined to include a tangible article of manufacture, such as the tangible computer-readable media described above in connection with fig. 17 that store firmware and/or software. Further, the exemplary auxiliary content module 170 may include interfaces, data structures, elements, processes and/or devices instead of, or in addition to, those illustrated in fig. 25, and/or may include more than one of any or all of the illustrated interfaces, data structures, elements, processes and/or devices.
Fig. 26 and 27 illustrate exemplary data structures that may be used to implement the auxiliary content schedule. The example data structures of fig. 26 and 27 may be used by the example auxiliary content server 175 to transmit an auxiliary content schedule to the example media server 105 and/or the example auxiliary content presentation device 150, and/or to store an auxiliary content schedule at any device of the example content delivery system 100 of fig. 1.
The exemplary data structure of fig. 26 contains a field 2605 identifying particular primary media content and includes a plurality of entries 2610 for corresponding secondary content feeds. To identify the primary media content, the exemplary data structure of fig. 26 includes a SID field 2612. The exemplary SID field 2612 of fig. 26 contains a SID corresponding to particular primary media content. To represent the logo associated with the identified primary media content, the exemplary data structure of fig. 26 includes a SID logo field 2614. The exemplary SID logo field 2614 contains data representing a logo and/or one or more characters containing a logo file and/or a link to the logo. To further identify the primary media content, the exemplary data structure of fig. 26 includes a program name field 2616. The example program name field 2616 of fig. 26 contains one or more alphanumeric characters that represent the name of the identified primary media content and/or the name of the content provider 135 associated with the primary media content.
To identify the auxiliary content feeds, each of the example entries 2610 of fig. 26 includes a feed ID field 2624. Each of the example feed ID fields 2624 of fig. 26 contains one or more alphanumeric characters that uniquely identify a secondary content feed. To further identify the auxiliary content feeds, each of the example entries 2610 of fig. 26 includes a name field 2626. Each of the example name fields 2626 of fig. 26 contains one or more alphanumeric characters representing the name of the identified secondary content feed.
To identify the times within the primary media content identified in the SID field 2612 at which a secondary content feed is to be displayed, each of the example entries 2610 of fig. 26 includes a start field 2628 and an end field 2630. Each of the example start fields 2628 of fig. 26 contains information corresponding to a time at which presentation of the secondary content feed can begin. Each of the example end fields 2630 of fig. 26 contains information corresponding to a time at which the secondary content feed is no longer to be presented.
To specify the offer type, each of the example entries 2610 of fig. 26 includes an offer type field 2632. Each of the example offer type fields 2632 of fig. 26 contains one or more alphanumeric characters that represent the type of secondary content offer. Exemplary auxiliary content offer types include, but are not limited to, those related to primary media content, those related to product targeting, those related to commercial advertising, those related to user loyalty, those related to user affinity groups, and the like.
To specify a title to be displayed when presenting the offer (e.g., in the exemplary user interface of fig. 4), each of the exemplary entries 2610 of fig. 26 includes a title field 2634 and a title type field 2636. Each of the example title fields 2634 of fig. 26 contains data representing a title to be rendered and/or contains one or more characters identifying a title file and/or a link to a title. Each of the example title type fields 2636 of fig. 26 contains one or more alphanumeric characters that represent the type of the title. Exemplary title types include, but are not limited to, bitmap, Shockwave Flash, Flash video, portable network graphics, and the like.
To identify the action type, each of the example entries 2610 of fig. 26 includes a type field 2638. Each of the example type fields 2638 of fig. 26 includes one or more numbers, letters, and/or codes that identify the type of the action specified in the action field 2640. Exemplary action types include, but are not limited to, network access, network link, local module ID, phone dialing, and/or passing to a local Java applet.
To specify the action, each of the example entries 2610 of fig. 26 includes an action field 2640. Each of the example action fields 2640 of fig. 26 includes text and/or commands that define an action and/or auxiliary content corresponding to the identified offer. Exemplary actions include, but are not limited to, a URL, a telephone number, a target Java applet, and/or an OS command. When a corresponding secondary content offer is selected and/or activated, the action 2640 is activated and/or used to obtain the corresponding secondary content.
To identify content to cache, each of the example entries 2610 of fig. 26 includes a content field 2642. Each of the example content fields 2642 of fig. 26, for example, identifies a set of web pages and/or other auxiliary content to be cached on the auxiliary content presentation device 150.
To categorize the auxiliary content, each of the example entries 2610 of fig. 26 includes a category field 2644 and a subcategory field 2646. Each of the example category and subcategory fields 2644 and 2646 of fig. 26 contains one or more alphanumeric characters for classifying the auxiliary content. Exemplary categories include, but are not limited to, news, cooking, sports, information, education, infomercials, and the like.
To define whether the auxiliary content and/or auxiliary content offer can be saved, each of the example entries 2610 of fig. 26 includes an archivable field 2648. Each of the example archivable fields 2648 of fig. 26 contains a value that indicates whether the auxiliary content and/or auxiliary content offer may be saved at the auxiliary content presentation device 150 and/or the media server 105 for subsequent retrieval and/or display at the auxiliary content presentation device 150 and/or the media server 105. In some examples, the archivable field 2648 defines a period of time during which the auxiliary content and/or auxiliary content offer may be saved at the auxiliary content presentation device 150 and/or the media server 105 and/or a time at which the auxiliary content and/or auxiliary content offer is purged from the cache.
To identify an expiration date and/or time, each of the example entries 2610 of fig. 26 includes an expiration field 2650. Each of the example expiration fields 2650 of fig. 26 includes a date and/or time at which the identified auxiliary content and/or auxiliary content offer expires and is thus no longer valid for presentation and/or retrieval.
Fig. 27 illustrates an exemplary auxiliary content schedule extensible markup language (XML) document that represents and/or implements an auxiliary content schedule using a data structure similar to that described above in connection with fig. 26. Since the same elements are described in fig. 26 and 27, the description of those elements is not repeated here; the interested reader is referred to the description provided above in connection with fig. 26. An XML document (e.g., the exemplary XML document of fig. 27) representing an auxiliary content schedule may be constructed and/or generated according to and/or using the syntax defined by an auxiliary content schedule XML schema. In addition to the conventional syntactic constraints imposed by XML, fig. 58 illustrates an exemplary auxiliary content schedule XML schema that defines one or more constraints on an auxiliary content schedule XML document. Exemplary constraints are expressed as one or more combinations of syntax rules that control the order of elements, one or more Boolean conditions that the content must satisfy, one or more data types that control the content of elements and attributes, and/or one or more special rules such as uniqueness and referential integrity constraints.
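For illustration, a schedule document structured along the lines of fig. 26 could be parsed as sketched below. The element and attribute names here are assumptions for the sketch only; the actual names are governed by the XML schema of fig. 58.

```python
import xml.etree.ElementTree as ET

def parse_schedule(xml_text):
    """Parse an auxiliary content schedule XML document into a dictionary
    mirroring the fields of fig. 26 (assumed element/attribute names)."""
    root = ET.fromstring(xml_text)
    schedule = {"sid": root.get("sid"),
                "program_name": root.get("name"),
                "entries": []}
    for offer in root.findall("offer"):
        schedule["entries"].append({
            "offer_id": offer.get("id"),
            "name": offer.get("name"),
            "start": float(offer.get("start")),
            "end": float(offer.get("end")),
            "offer_type": offer.get("type"),
            "action": offer.findtext("action"),
            "archivable": offer.get("archivable") == "true",
        })
    return schedule
```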
While exemplary data structures that may be used to implement the auxiliary content schedule have been illustrated in fig. 26, 27, and 58, one or more entries and/or fields may be combined, divided, rearranged, omitted, eliminated, and/or implemented in any other way. Also, the example data structures of fig. 26, 27, and 58 may include fields that are alternatives to or in addition to the fields shown in fig. 26, 27, and 58, and/or may include more than one of any or all of the fields shown.
Fig. 28 and 29 illustrate example machine-readable instructions that may be executed to implement the example auxiliary content module 170 of fig. 1, 3, and 25. A processor, controller and/or any other suitable processing device may be used and/or programmed to execute the example machine-readable instructions of fig. 28 and 29. For example, the machine-readable instructions of fig. 28 and 29 may be implemented as coded instructions stored on any combination of tangible articles of manufacture, such as the tangible computer-readable media discussed above in connection with fig. 17. Machine-readable instructions comprise, for example, instructions and data that cause a processor, a computer, and/or a machine having a processor (e.g., the example processor platform P100 discussed above in connection with fig. 24) to perform one or more particular processes. Alternatively, some or all of the example machine-readable instructions of fig. 28 and 29 may be implemented using any combination of ASICs, PLDs, FPLDs, FPGAs, discrete logic, hardware, firmware, and the like. Additionally, some or all of the example processes of fig. 28 and 29 may be implemented manually or as any combination of the foregoing techniques, e.g., any combination of firmware, software, discrete logic and/or hardware. In addition, many other methods of implementing the exemplary operations of fig. 28 and 29 may be employed. For example, the order of execution of the blocks may be changed, and/or one or more of the blocks described above may be changed, eliminated, sub-divided, or combined. Additionally, any or all of the example machine-readable instructions of fig. 28 and 29 may be executed sequentially and/or in parallel by, for example, separate processing threads, processors, devices, discrete logic, circuits, and the like.
The example machine-readable instructions of fig. 28 begin when a user launches and/or initiates an application on the auxiliary content presentation device 150 that implements the example auxiliary content module 170 (block 2805). The exemplary auxiliary content module 170 starts and/or implements the decoder 310 (block 2810).
When the decoder 310 detects a valid code (e.g., a SID) 2812, the example reader 2505 determines whether the SID has changed and/or differs from the previous SID (block 2815). If the SID has changed (block 2815), the reader 2505 retrieves a new secondary content schedule from the secondary content server 175 (block 2820), stores the received schedule in the schedule database 2510 (block 2825), and sets the program clock 2515 to correspond to the timestamp t(n) detected by the decoder 310 (block 2830).
Returning to block 2815, if the SID has not changed (block 2815), the reader 2505 determines whether the current timestamp t(n) falls within a time interval defined and/or encompassed by the secondary content schedule stored in the schedule database 2510 (block 2835). If the current timestamp t(n) falls within the secondary content schedule (block 2835), the selector 2520 determines whether the timestamp t(n) corresponds to an offer (e.g., secondary content) in the schedule (block 2840). If the timestamp t(n) corresponds to a secondary content offer in the schedule (block 2840), the selector 2520 obtains the corresponding secondary content from the schedule database 2510 and displays it via the user interface module 325 (block 2845).
The example selector 2520 compares the time value generated by the program clock 2515 to the start time 2628 (fig. 26) and end time 2630 in the schedule database 2510 to determine if it is time to display any auxiliary content feeds (block 2850). If it is time to display one or more secondary content offers (block 2850), selector 2520 obtains the secondary content offers from schedule 2510 and displays the secondary content offers via user interface module 325 (block 2845).
If the user indicates via the user interface module 325 that the offer and/or the secondary content is to be archived, and/or if the offer is to be automatically archived (e.g., based on archive settings) (block 2855), the archiver 2525 stores the secondary content offer and/or the secondary content in the archive 2530 (block 2860).
If the user instructs, via the user interface module 325 (e.g., by providing search query criteria), that an offer and/or auxiliary content be retrieved, the archiver 2525 retrieves the auxiliary content offer and/or auxiliary content from the archive 2530 (block 2865), and the selector 2520 displays the retrieved auxiliary content via the user interface module 325 (block 2845).
If the user indicates, via the user interface module 325, that the archive 2530 is to be edited and/or modified (e.g., to remove an item), the archiver 2525 makes the corresponding changes to the archive 2530 (block 2870).
Portions of the exemplary machine-readable instructions of fig. 29 are the same as portions of fig. 28. The identical parts of fig. 28 and 29 are identified with the same reference numerals, and their description is not repeated here; the reader is instead referred to the description set forth above in connection with fig. 28. In contrast to fig. 28, in the exemplary machine-readable instructions of fig. 29 the decoding of the timestamp t(n) is performed intermittently.
At block 2815 (fig. 29), if the SID has not changed (block 2815), the secondary content module 170 activates timestamp decoding (block 2905). If a valid timestamp 2910 is decoded, the reader 2505 determines whether the timestamp t(n) falls within a time interval defined and/or encompassed by the auxiliary content schedule stored in the schedule database 2510 (block 2915). If the timestamp t(n) falls within the schedule (block 2915), it is determined whether the timestamp t(n) substantially coincides with the output of the program clock 2515 (block 2920). If the timestamp t(n) does not substantially correspond to the output of the program clock 2515 (block 2920), the program clock 2515 is reset to match the timestamp t(n) (block 2925).
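The resynchronization of blocks 2920 and 2925 amounts to a tolerance check against the clock output. A sketch, reusing the ProgramClock from the earlier sketch, with the one-second tolerance being an assumed value:

```python
def resync_program_clock(clock, t_n, tolerance_seconds=1.0):
    """Reset the program clock 2515 when a decoded timestamp t(n) no
    longer substantially matches its output (blocks 2920 and 2925)."""
    if abs(clock.now() - t_n) > tolerance_seconds:
        clock.reset(t_n)
```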
Returning to block 2915, if the timestamp t(n) does not fall within the secondary content schedule (block 2915), the reader 2505 retrieves a new secondary content schedule (block 2820).
Periodically and/or aperiodically, the reader 2505 determines whether it is time to synchronize the schedule (block 2930). If it is time to synchronize the schedule (block 2930), control proceeds to block 2905 to decode another timestamp t(n).
Fig. 30 and 31 illustrate an exemplary schedule-based auxiliary content delivery scenario that may be performed by the exemplary delivery system 100 of fig. 1. Although the examples shown in fig. 30 and 31 are described in a sequential manner (as discussed above in connection with fig. 17), these activities of detecting the code, detecting the timestamp t (n), obtaining the auxiliary content, obtaining the link to the auxiliary content, obtaining the auxiliary content schedule, displaying the auxiliary content link, displaying the auxiliary content feed, and displaying the auxiliary content may occur substantially in parallel. Moreover, the auxiliary content may be presented without providing and/or presenting an intervening link and/or offer to the content. In some examples, variations of the exemplary scenarios of fig. 30 and 31 similar to those discussed above in connection with fig. 17 are implemented.
The exemplary auxiliary content delivery scenario of fig. 30 begins with the exemplary auxiliary content server 175 setting and/or selecting a default timeout interval T (block 3005) and sending a value 3010 representing the timeout T to the auxiliary content presentation device 150.
The exemplary media server 105 receives primary media content 3015 via the exemplary broadcast input interface 205. The example media server 105 and/or the example primary media content presentation device 110 emit and/or output free-field radiated audio signals 172, 173 associated with the primary media content 3015 via, for example, one or more speakers.
When the exemplary decoder 310 of the secondary content presentation device 150 detects a SID in the audio 172, 173 (block 3020), the program clock 2515 synchronizes its output with the timestamp t(n) (block 3025). If the SID has changed (block 3030), the reader 2505 sends the SID and the timestamp t(n) to the secondary content server 175. The secondary content server 175 forms a schedule of secondary content based on the SID and the timestamp t(n) (block 3035) and sends the secondary content schedule to the reader 2505. The reader 2505 stores the secondary content schedule received from the secondary content server 175 in the schedule database 2510 (block 3040).
The example selector 2520 displays the auxiliary content feed according to the received auxiliary content schedule using, for example, the example user interface of fig. 4 (block 3045). Upon displaying the auxiliary content offer, the selector 2520 sends the corresponding content ID to the rating server 190. If the user selects and/or activates any displayed links and/or feeds (block 3055), the corresponding link ID 3060 is sent to the rating server 190 and the secondary content server 175. In response to the link ID 3060, the auxiliary content server 175 provides auxiliary content 3065 associated with the link ID 3060 to the auxiliary content presentation device 150. The secondary content presentation device 150 displays the received secondary content 3065 (block 3070). Interaction with the rating server 190 may be omitted in the illustrated example of fig. 30 if audience measurement data does not need to be collected.
Returning to block 3030, if the SID has not changed (block 3030), the reader 2505 determines whether the new timestamp t(n) is greater than the sum of the previous timestamp t(n-1) and the timeout interval T (block 3075). If the new timestamp t(n) is greater than the sum (block 3075), the reader 2505 sends the SID and the timestamp t(n) to the secondary content server 175 to request an updated secondary content schedule. If the new timestamp t(n) is not greater than the sum (block 3075), control proceeds to block 3045 to select and display secondary content.
Fig. 31 shows an additional scenario that may be performed to select and display auxiliary content. The additional scenario of fig. 31 may be implemented as an alternative or supplement to the process of selecting and displaying auxiliary content shown in fig. 30. The exemplary scenario of fig. 31 occurs after an application implementing the exemplary auxiliary content module 170 is initiated, started, and/or launched (block 3105).
At some subsequent time (illustrated by dashed line 3107), and upon receiving the auxiliary content schedule utilizing, for example, the exemplary process of fig. 30 up to and including block 3040, the exemplary selector 2520 displays the auxiliary content offer (block 3110) and transmits a content ID 3115 corresponding to the offer to the rating server 190.
If automatic archiving is enabled for the offer category that includes the displayed offer (block 3120), the archiver 2525 archives the offer in the archive 2530 (block 3125). If automatic archiving is not applicable (block 3120) but the user has indicated that the offer is to be archived (block 3130), the archiver 2525 archives the offer in the archive 2530 (block 3125).
If the user selects and/or activates any displayed links and/or secondary content offers (block 3140), the corresponding link ID 3145 is sent to the rating server 190 and the secondary content server 175. In response to the link ID 3145, the secondary content server 175 provides the secondary content 3150 associated with the link ID 3145 to the secondary content presentation device 150. The secondary content presentation device 150 displays the received secondary content 3150 (block 3155).
If at some subsequent time (indicated by dashed line 3107), the user desires to retrieve one or more archived auxiliary content offers (block 3165), archiver 2525 retrieves and/or sorts offers corresponding to one or more criteria provided by the user (block 3170). The example selector 2520 displays the retrieved and/or categorized auxiliary content offer (block 3175) and sends the content ID 3180 corresponding to the offer to the rating server 190. Interaction with the rating server 190 may be omitted in the illustrated example of fig. 31 if audience measurement data does not need to be collected.
Fig. 32 illustrates an example manner of implementing the example loyalty based scheduler 1160 of fig. 11. To identify primary media content, the example loyalty based scheduler 1160 of fig. 32 includes an identifier 3205. Based on the SID and timestamp t(n) received from the media server 105 and/or the secondary content presentation device 150, the example identifier 3205 queries the content provider and program database 3210 to identify the corresponding primary media content.
To generate user profiles, the example loyalty based scheduler 1160 of fig. 32 includes a profile manager 3215. When the example identifier 3205 identifies primary media content, the example profile manager 3215 updates the user profile corresponding to the user ID (UID) received with the SID and the timestamp t(n). The user profile represents and/or stores which primary media content has been at least partially consumed by the user associated with the UID. The example profile manager 3215 stores and/or maintains the user profiles in the profile database 3220.
An exemplary data structure that may be used to implement the exemplary profile database 3220 is shown in fig. 33. The exemplary data structure of fig. 33 is a table that records which of a plurality of primary media content 3305 each of a plurality of users 3310 has consumed.
Returning to fig. 32, to develop user loyalty and/or user affinity group metrics, the example loyalty based scheduler 1160 of fig. 32 includes a loyalty analyzer 3225. The example loyalty analyzer 3225 of fig. 32 analyzes a user's behavior to determine the user's loyalty to particular primary media content and/or content providers 135. Additionally or alternatively, the loyalty analyzer 3225 analyzes the behavior of groups of users to determine and/or identify affinity groups, and to identify the likelihood that the respective groups will consume particular primary media content and/or respond to and/or consume particular secondary media content.
For a given loyalty metric (e.g., the number of episodes of a television show watched over a period of time), the example loyalty analyzer 3225 groups the users of the secondary content server 175 into, for example, three groups of equal size. The top group represents the most loyal users and may thus be presented with additional and/or special auxiliary content offers. In some examples, a user is credited with consuming primary media content only when a particular percentage of the primary media content has been consumed (e.g., viewed and/or listened to). In some examples, loyalty groups are defined in the loyalty database 3230, and the loyalty analyzer 3225 compares a user's profile with the defined loyalty groups to determine the user's loyalty group.
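The equal-size grouping described above can be sketched as a simple tercile split over a loyalty metric; the metric itself (episodes watched, etc.) is whatever the loyalty analyzer 3225 computes.

```python
def loyalty_groups(loyalty_scores, n_groups=3):
    """Rank users by loyalty score and split them into `n_groups` groups
    of (nearly) equal size; the first group holds the most loyal users."""
    ranked = sorted(loyalty_scores, key=loyalty_scores.get, reverse=True)
    size = -(-len(ranked) // n_groups)  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]
```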
In some examples, the affinity groups may be identified and/or defined manually (e.g., people who watch a large number of sports programs, people who tend to watch a large number of movies, people who watch primarily during the day, etc.). Additionally or alternatively, data mining techniques may be applied to the user profiles stored in the profile database 3220 to automatically define the affinity groups.
Referring to the exemplary user profiles of fig. 33, an exemplary data mining process for defining affinity groups may be illustrated. In the example of fig. 33, there are three reality programs R1, R2, and R3 and three sports programs S1, S2, and S3. As shown, different users may view different combinations of the six programs.
The example loyalty analyzer 3225 performs a dimensional analysis to develop metrics for a user's media consumption and for the user's propensity to view one type of program versus another. As shown in fig. 34, the user's media consumption may be expressed as the ratio PGMS of the number of programs consumed by the user to the total number of programs available. In the example of fig. 33, the tendency of a user to watch sports programs rather than reality programs may be represented by the ratio S/(R+S), where S is the number of sports programs watched by the user and R is the number of reality programs watched by the user. For example, user #1 watched all three reality programs and no sports programs, resulting in S/(R+S) = 0/3 and PGMS = 3/6.
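These two ratios are straightforward to reproduce. The sketch below computes PGMS and S/(R+S) for one user of the fig. 33 example and reproduces the user #1 figures quoted above.

```python
def consumption_metrics(watched,
                        reality=frozenset({"R1", "R2", "R3"}),
                        sports=frozenset({"S1", "S2", "S3"})):
    """Compute PGMS (programs watched / programs available) and the
    sports tendency S/(R + S) for one user (figs. 34 and 35)."""
    r = len(watched & reality)
    s = len(watched & sports)
    pgms = (r + s) / (len(reality) + len(sports))
    tendency = s / (r + s) if (r + s) else None
    return pgms, tendency

# User #1 watched all three reality programs and no sports programs:
# consumption_metrics({"R1", "R2", "R3"}) -> (0.5, 0.0)
```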
As shown in fig. 35, the media consumption ratio PGMS and the tendency ratio S/(R+S) may be used to identify and/or define user groups and/or user affinity groups. As shown in fig. 35, the ten users are grouped into four exemplary groups: group A watched only reality programs; group B watched only sports; group C watched a limited amount of various programs; and group D watched a large variety of programs.
While the exemplary process of defining groups described and illustrated in connection with fig. 33-35 is simplified for ease of discussion, it should be apparent to those of ordinary skill in the art that the exemplary method described above readily extends to any number of users and/or any number of dimensions. It should also be apparent that the affinity groups may change over time, and that the affinity group to which a user belongs may likewise vary over time. In some examples, to minimize disruption to advertisers due to affinity group changes, the example loyalty analyzer 3225 may apply one or more filters to smooth the data used to define the affinity groups and/or may limit how quickly the affinity groups may change.
Returning to fig. 32, whenever a SID, UID, and timestamp t(n) are received, loyalty and/or affinity group analyses and/or updates may be performed. Additionally or alternatively, the loyalty and/or affinity group analyses may be run "offline" on a periodic or aperiodic basis to reduce computation time, utilize additional user profile information, and/or shorten the time needed by the loyalty-based scheduler to identify loyalty- and/or affinity-based offers in response to a SID, UID, and timestamp t(n).
To select auxiliary content offers for a received combination of SID, UID, and timestamp t(n), the example loyalty based scheduler 1160 of fig. 32 includes an offer selector 3235. Based on the loyalty and/or affinity group memberships identified by the loyalty analyzer 3225, the example offer selector 3235 queries the offer database 3240 to select one or more auxiliary content offers. The auxiliary content offers selected by the offer selector 3235 may be in addition to or in place of those selected by the action server 1140 based only on the UID and the timestamp t(n).
To allow any number and/or type of advertisers, program owners, content creators, and/or content providers 3245 to define, specify, and/or provide ancillary content offers for particular loyalty and/or similarity groups, the example loyalty-based scheduler 1160 includes a loyalty/similarity manager 3250. The example loyalty/similarity manager 3250 implements any number and/or type of APIs and/or network-based interfaces that allow advertisers, program owners, content creators, and/or content providers 3245 to interact with databases 3230 and 3240 to add, generate, modify, remove, and/or specify loyalty and/or similarity-based ancillary content offers.
The example databases 3210, 3220, 3230, and 3240 of fig. 32 may be implemented using any number and/or type of tangible articles of manufacture, such as tangible computer-readable media, including, but not limited to, volatile and/or non-volatile memory and/or storage devices.
Although an example manner of implementing the example loyalty based scheduler 1160 of fig. 11 is shown in fig. 32, one or more of the interfaces, data structures, elements, processes and/or devices shown in fig. 32 may be combined, divided, rearranged, omitted, eliminated and/or implemented in any other way. Further, the example identifier 3205, the example provider and program database 3210, the example profile manager 3215, the example profile database 3220, the example loyalty analyzer 3225, the example loyalty database 3230, the example offer selector 3235, the example offer database 3240, the example loyalty/similarity manager 3250, and/or, more generally, the example loyalty based scheduler 1160 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example identifier 3205, the example provider and program database 3210, the example profile manager 3215, the example profile database 3220, the example loyalty analyzer 3225, the example loyalty database 3230, the example offer selector 3235, the example offer database 3240, the example loyalty/similarity manager 3250, and/or, more generally, the example loyalty based scheduler 1160 may be implemented by one or more circuits, programmable processors, ASICs, PLDs, FPLDs, and/or FPGAs, among others. When any apparatus claim of this patent covering one or more of these elements is read to encompass a purely software and/or firmware implementation, at least one of the example identifier 3205, the example provider and program database 3210, the example profile manager 3215, the example profile database 3220, the example loyalty analyzer 3225, the example loyalty database 3230, the example offer selector 3235, the example offer database 3240, the example loyalty/similarity manager 3250, and/or, more generally, the example loyalty based scheduler 1160 is hereby expressly defined to include a tangible article of manufacture, such as the tangible computer-readable media storing firmware and/or software described above in connection with fig. 17. Moreover, the example loyalty based scheduler 1160 may include interfaces, data structures, elements, processes and/or devices in lieu of or in addition to those illustrated in fig. 32, and/or may include more than one of any or all of the illustrated interfaces, data structures, elements, processes and/or devices.
Fig. 36 and 37 illustrate example machine-readable instructions that may be executed to implement the example loyalty based scheduler 1160 of fig. 11 and 32. A processor, controller, and/or any other suitable processing device may be used and/or programmed to execute the example machine-readable instructions of fig. 36 and 37. For example, the machine-readable instructions of fig. 36 and 37 may be implemented as coded instructions stored on any combination of tangible articles of manufacture, such as the tangible computer-readable media discussed above in connection with fig. 17. Machine-readable instructions comprise, for example, instructions and data that cause a processor, a computer, and/or a machine having a processor (e.g., the example processor platform P100 discussed above in connection with fig. 24) to perform one or more particular processes. Alternatively, some or all of the example machine-readable instructions of fig. 36 and 37 may be implemented using any combination of ASICs, PLDs, FPLDs, FPGAs, discrete logic, hardware, firmware, and the like. Additionally, some or all of the example processes of fig. 36 and 37 may be implemented manually or as any combination of the foregoing techniques, e.g., any combination of firmware, software, discrete logic and/or hardware. In addition, many other methods of implementing the exemplary operations of fig. 36 and 37 may be employed. For example, the order of execution of the blocks may be changed, and/or one or more of the blocks described may be changed, eliminated, sub-divided, or combined. Additionally, any or all of the example machine-readable instructions of fig. 36 and 37 may be executed sequentially and/or in parallel by, for example, separate processing threads, processors, devices, discrete logic, circuits, and the like.
The example machine-readable instructions of fig. 36 begin when a SID, UID, and timestamp t(n) 3605 are received. The example identifier 3205 identifies the primary media content corresponding to the received SID and timestamp t(n) 3605 (block 3610). The example profile manager 3215 updates the user profile associated with the received UID 3605 in the profile database 3220 (block 3615).
The loyalty analyzer 3225 calculates and/or determines a loyalty score for the user (e.g., the number of episodes of a television show the user has watched) (block 3620), and determines the user's loyalty group based on the loyalty score (block 3625). In some examples, the loyalty analyzer 3225 automatically divides the user profiles into a plurality of loyalty groups (block 3630).
Based on the loyalty group determined by the loyalty analyzer 3225 (block 3625), the offer selector 3235 queries the offer database 3240 to determine whether there are any applicable loyalty-based auxiliary content offers (block 3635). If an applicable loyalty-based auxiliary content offer exists (block 3635), the offer selector 3235 adds the identified offer to the user's schedule 3645 (block 3640).
If the primary content owner and/or the content provider provides loyalty inputs 3650 and 3655, respectively, the loyalty/similarity manager 3250 updates the loyalty database 3230 (blocks 3660 and 3665, respectively). If the primary content owner and/or the content provider provides loyalty-based offers 3650 and 3655, respectively, the loyalty/similarity manager 3250 updates the offer database 3240 (block 3670).
The example machine-readable instructions of fig. 37 begin when a SID, UID, and timestamp t(n) 3705 are received. The example identifier 3205 identifies the primary media content corresponding to the received SID and timestamp t(n) 3705 (block 3710). The example profile manager 3215 updates the user profile associated with the received UID 3705 in the profile database 3220 (block 3715).
The example loyalty analyzer 3225 compares the user's profile with one or more affinity groups 3725 to determine whether the user belongs to any one of the affinity groups (block 3730). The loyalty analyzer 3225 periodically or aperiodically analyzes the user profiles stored in the profile database 3220 to define one or more affinity groups 3725 (block 3735).
Based on the affinity group determined by the loyalty analyzer 3225 (block 3730), the offer selector 3235 queries the offer database 3240 to determine whether there are any applicable affinity-group-based ancillary content offers (block 3740). If an applicable affinity-group-based ancillary content offer exists (block 3740), the offer selector 3235 adds the identified offer to the user's schedule 3750 (block 3745).
If the user provides affinity group-based offers 3755, the loyalty/affinity manager 3250 updates the offers database 3240 (block 3760).
An exemplary encoding and decoding system 3800 is shown in fig. 38. The exemplary system 3800 may be, for example, a television audience measurement system, which will serve as context for further description of the encoding and decoding processes described herein. The exemplary system 3800 includes an encoder 3802 that adds a code or information 3803 to an audio signal 3804 to produce an encoded audio signal. The information 3803 may be any selected information. For example, in a media monitoring context, the information 3803 may represent and/or identify a broadcast media program, such as a television broadcast, a radio broadcast, or the like. Additionally, the information 3803 may include timing information indicating the time at which the information 3803 was inserted into the audio or media broadcast. Alternatively, the code may include control information for controlling the behavior of one or more target devices. In addition, information 3803 from more than one source may be multiplexed and encoded into the audio 3804. For example, information 3803 provided by a television network facility may be interleaved with information 3803 from, for example, a local television station. In some examples, the television network information 3803 is encoded into every third message slot of the encoded audio. Moreover, the audio 3804 may be received with the television network information 3803 already encoded, and subsequent encoders 3802 may encode further information 3803 into the remaining message slots (if any) of each three-slot interval, as illustrated in the sketch below. It should be appreciated that the example encoder 3802 of figs. 38 and 39 may be used to implement the example content provider 135 of fig. 1 and/or the example action encoder 1150 of fig. 11.
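By way of illustration only, the following Python sketch expresses such a slot assignment; the function name and zero-based slot numbering are assumptions made for this example and do not appear in the figures.

```python
# Minimal sketch of the message-slot multiplexing described above: network
# information occupies every third message slot, and subsequent (e.g., local)
# encoders may use the remaining slots.

def assign_message_slots(num_slots):
    """Return which encoder may use each message slot."""
    return {
        slot: "network" if slot % 3 == 0 else "local"
        for slot in range(num_slots)
    }

print(assign_message_slots(6))
# {0: 'network', 1: 'local', 2: 'local', 3: 'network', 4: 'local', 5: 'local'}
```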
The audio signal 3804 may be any form of audio including, for example, voice, music, noise, commercial advertisement audio, audio associated with a television program, live performance, and the like. In the example of fig. 38, the encoder 3802 passes the encoded audio signal to the transmitter 3806. The transmitter 3806 transmits the encoded audio signal along with any video signal 3808 associated with the encoded audio signal. While the encoded audio signal may have an associated video signal 3808 in some cases, the encoded audio signal need not have any associated video.
Some example audio signals 3804 are digitized versions of analog audio signals, where the analog audio signal has been sampled at 48 kHz. As described in detail below, two seconds of audio (corresponding to 96,000 audio samples at a 48 kHz sampling rate) may be used to carry a message comprising a synchronization symbol and 49 bits of information. With a coding scheme of 7 bits per symbol, the message must convey eight symbols of information. Alternatively, in the overwrite case described below (e.g., when pre-existing code flag information is conveyed), one synchronization symbol is used, followed by a single information symbol that conveys one of 128 states. As described in detail below, according to one example, the information of one 7-bit symbol is embedded in a long block of audio corresponding to 9,216 samples. Some such long blocks span 36 overlapping short blocks of 512 samples each, where, because of the 50% overlap, 256 samples of each short block are old and 256 samples are new.
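The sample arithmetic above can be checked with a short sketch; the values are taken from the text, and the variable names are illustrative only.

```python
# Arithmetic check of the message layout described above (48 kHz sampling,
# 2-second messages, 9,216-sample long blocks, 512-sample short blocks
# with 50% overlap).

SAMPLE_RATE = 48_000
MESSAGE_SECONDS = 2
LONG_BLOCK = 9_216
SHORT_BLOCK = 512

samples_per_message = SAMPLE_RATE * MESSAGE_SECONDS       # 96,000 samples
symbols_per_message = 1 + 49 // 7                         # sync + 7 data symbols
encoded_samples = symbols_per_message * LONG_BLOCK        # 8 * 9,216 = 73,728
short_blocks_per_long = LONG_BLOCK // (SHORT_BLOCK // 2)  # 36 half-overlapping blocks

print(samples_per_message, symbols_per_message, encoded_samples, short_blocks_per_long)
# 96000 8 73728 36
```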
Although the transmit side of the exemplary system 3800 shown in fig. 38 shows a single transmitter 3806, the transmit side may be much more complex and may include multiple levels of a distribution chain through which the audio signal 3804 is communicated. For example, the audio signal 3804 may be generated at a national network and provided to a local network for local distribution. Thus, although the encoder 3802 is shown in the transmission lineup preceding the transmitter 3806, one or more encoders may be arranged throughout the distribution chain of the audio signal 3804. Thus, the audio signal 3804 may be encoded at multiple levels and may include embedded codes associated with those multiple levels. Further details regarding encoding and exemplary encoders are provided below.
The transmitter 3806 may include one or more radio frequency (RF) transmitters that distribute the encoded audio signal through free-space propagation (e.g., via terrestrial or satellite communication links), or transmitters for distributing the encoded audio signal over cable, fiber optics, or the like. Some example transmitters 3806 may be used to broadcast the encoded audio signal throughout a wide geographical area. In other examples, the transmitter 3806 may distribute the encoded audio signal over a limited geographical area. Transmission may include up-converting the encoded audio signal to a radio frequency to enable propagation of the signal. Alternatively, transmission may include distributing the encoded audio signal in the form of digital bits or packets of digital bits that may be transmitted over one or more networks, such as the Internet, a wide area network, or a local area network. Thus, the encoded audio signal may be carried by a carrier signal, by packets of information, or by any suitable technique for distributing audio signals.
When the encoded audio signal is received by the receiver 3810 (which, in a media monitoring context, may be located at a statistically selected metering site 3812), the audio signal portion of the received program signal is processed to recover the code, even though the presence of the code is imperceptible (or substantially imperceptible) to a listener when the encoded audio signal is presented by the speakers 3814 of the receiver 3810. To this end, the decoder 3816 is connected either directly to an audio output 3818 available at the receiver 3810 or to a microphone 3820 placed in the vicinity of the speakers 3814 by which the audio is reproduced. The received audio signal may be in a mono or stereo format. Further details regarding decoding and exemplary decoders are provided below. It should be appreciated that the example decoder 3816 and the example microphone 3820 of figs. 38 and 48 may be used to implement the example decoder 310 and the example audio input interface 305 of fig. 3, respectively, and/or the example auxiliary content trigger 180 of fig. 1.
Audio encoding
As described above, the encoder 3802 inserts one or more inaudible (or substantially inaudible) codes into the audio 3804 to create encoded audio. Fig. 39 shows an exemplary encoder 3802. In one implementation, the example encoder 3802 of fig. 39 may be implemented using, for example, a digital signal processor programmed with instructions to implement an encoding lineup 3902, the operation of which is affected by the operation of a prior code detector 3904 and a masking lineup 3906; one or both of the prior code detector 3904 and the masking lineup 3906 may likewise be implemented using a digital signal processor programmed with instructions. Of course, any other implementation of the exemplary encoder 3802 is possible. For example, the encoder 3802 may be implemented using a processor, a programmable logic device, or any suitable combination of hardware, software, and firmware.
In general, during operation, the encoder 3802 receives the audio 3804 and the prior code detector 3904 determines whether the audio 3804 was previously encoded with information, which would make it difficult for the encoder 3802 to encode other information into the previously encoded audio. For example, encoding may have been previously performed at a prior location in the audio distribution chain (e.g., at a nationwide network). The prior code detector 3904 informs the encoding lineup 3902 as to whether the audio has been previously encoded. The prior code detector 3904 may be implemented by a decoder as described herein.
The encoding lineup 3902 receives the information 3803 and generates a code frequency signal based on the information 3803 and combines the code frequency signal with the audio 3804. The operation of the encoding lineup 3902 is affected by the output of the prior code detector 3904. For example, if the audio 3804 has been previously encoded and the prior code detector 3904 informs the encoding lineup 3902 of this fact, the encoding lineup 3902 may select an alternative message to be encoded in the audio 3804 and may also change the details by which the alternative message is encoded (e.g., different temporal locations within the message, different frequencies used to represent symbols, etc.).
The encoding lineup 3902 is also affected by the masking lineup 3906. In general, the masking lineup 3906 processes the audio 3804 corresponding to the point in time at which the encoding lineup 3902 wants to encode information and determines the amplitude at which the encoding should be performed. As described below, the masking lineup 3906 may output a signal to control the code frequency signal amplitude to keep the code frequency signal below the threshold of human perception.
As shown in the example of fig. 39, the encoding lineup 3902 includes a message generator 3910, a symbol selector 3912, a code frequency selector 3914, a synthesizer 3916, an inverse Fourier transform 3918, and a combiner 3920. The message generator 3910 is responsive to the information 3803 and outputs a message having the format shown generally at reference numeral 3922. The information 3803 provided to the message generator 3910 may be a current time, a station identification, a program identification, and the like. Some example message generators 3910 output a message every two seconds. Of course, other messaging intervals, such as 1.6 seconds, are possible.
Some example message formats 3922 representing messages output from the message generator 3910 include a synchronization symbol 3924. The synchronization symbol 3924 is used by decoders (examples of which are described below) to obtain timing information indicating the start of a message. Thus, when a decoder receives the synchronization symbol 3924, the decoder expects to see additional information following the synchronization symbol 3924.
In the exemplary message format 3922 of fig. 39, the synchronization symbol 3924 is followed by 42 bits of message information 3926. This information may include a station identifier and a binary representation of coarse timing information. Some exemplary timing information represented in the 42 bits of message information 3926 changes every 64 seconds, or 32 message intervals. Thus, the 42 bits of message information 3926 remain static for 64 seconds. The seven bits of message information 3928 may be a high-resolution time that increments every two seconds.
The message format 3922 also includes pre-existing code flag information 3930. The pre-existing code flag information 3930, however, is only selectively used to convey information. When the prior code detector 3904 informs the message generator 3910 that the audio 3804 has not been previously encoded, the pre-existing code flag information 3930 is not used; the message output by the message generator 3910 includes only the synchronization symbol 3924, the 42-bit message information 3926, and the seven-bit message information 3928, and the pre-existing code flag information 3930 is blank or filled with an unused symbol indication. In contrast, when the prior code detector 3904 indicates to the message generator 3910 that the audio 3804 has been previously encoded (i.e., message information is already encoded in the audio 3804), the message generator 3910 does not output the synchronization symbol 3924, the 42-bit message information 3926, or the seven-bit message information 3928. Instead, the message generator 3910 utilizes only the pre-existing code flag information 3930. Some example pre-existing code flag information includes a pre-existing code flag synchronization symbol that signals the presence of the pre-existing code flag information. The pre-existing code flag synchronization symbol is different from the synchronization symbol 3924 and therefore may be used to signal the start of the pre-existing code flag information. Upon receiving the pre-existing code flag synchronization symbol, a decoder may ignore any previously received information aligned in time with the synchronization symbol 3924, the 42-bit message information 3926, or the seven-bit message information 3928. A pre-existing code flag information symbol follows the pre-existing code flag synchronization symbol and may convey, for example, a channel indication, a distribution identification, or any other suitable information. The pre-existing code flag information may be used to provide appropriate crediting in an audience monitoring system.
The output from the message generator 3910 is passed to the symbol selector 3912, which selects symbols to represent it. When the synchronization symbol 3924 is output, the symbol selector 3912 need not perform any mapping, since the synchronization symbol 3924 is already in symbol format. Alternatively, if bits of information are output from the message generator 3910, the symbol selector 3912 may use a direct mapping in which, for example, seven bits output by the message generator 3910 are mapped to a symbol having the decimal value of those seven bits. For example, if the value 1010101 is output from the message generator 3910, the symbol selector may map those bits to the symbol 85. Of course, other conversions between bits and symbols may be used. In certain examples, redundancy or error coding may be used when selecting symbols to represent bits. In addition, any suitable number of bits other than seven may be selected for conversion into symbols. The number of bits used to select a symbol may be determined based on the maximum symbol space available in the communication system. For example, if the communication system is only able to transmit one of four symbols at a time, only two bits from the message generator 3910 are converted into a symbol at a time.
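A minimal sketch of this direct bit-to-symbol mapping, assuming 7 bits per symbol, follows; the function name is illustrative only, and error-correction variants are omitted.

```python
# Minimal sketch of the direct bit-to-symbol mapping described above.

def bits_to_symbol(bits: str) -> int:
    """Map a 7-bit string (e.g., '1010101') to its decimal symbol value."""
    if len(bits) != 7 or set(bits) - {"0", "1"}:
        raise ValueError("expected 7 binary digits")
    return int(bits, 2)

assert bits_to_symbol("1010101") == 85  # the example given in the text
```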
Another exemplary message includes 8 long blocks followed by a number of empty short blocks that pad the duration of the message to approximately 1.6 seconds. The first of these 8 long blocks represents a synchronization symbol, followed by 7 long blocks that represent, for example, the payload or message content depicted in fig. 57. The exemplary message format of fig. 57 may thus be used to represent and/or encode 7 × 7 = 49 bits of data.
The symbols from the symbol selector 3912 are passed to the code frequency selector 3914, which selects code frequencies to represent the symbols. The code frequency selector 3914 may include one or more look-up tables (LUTs) 3932 that map a symbol to the code frequencies representing that symbol. That is, the code frequencies are emphasized in the audio 3804 by the encoder 3802, so that each transmitted symbol appears in the encoded audio as a pattern of emphasized code frequencies. Upon receipt of the encoded audio, the decoder detects the presence of the emphasized code frequencies and decodes the pattern of emphasized code frequencies into the transmitted symbols. Thus, the same LUT selected at the encoder 3802 for selecting the code frequencies may be used at the decoder. An exemplary LUT is described in conjunction with figs. 40-42. Additionally, exemplary techniques for generating LUTs are provided in connection with figs. 44-46.
The code frequency selector 3914 may select among a number of different LUTs according to various criteria. For example, the code frequency selector 3914 may use a particular LUT or group of LUTs in response to a particular synchronization symbol having previously been received. Additionally, if the prior code detector 3904 indicates that a message was previously encoded into the audio 3804, the code frequency selector 3914 may select a LUT that is unique to the pre-existing code scenario, to avoid confusion between the frequencies used to previously encode the audio 3804 and the frequencies used to convey the pre-existing code flag information.
An indication of the code frequencies selected to represent a particular symbol is provided to the synthesizer 3916. The synthesizer 3916 may store, for each of the possible code frequencies that the code frequency selector 3914 may indicate, three complex Fourier coefficients for each short block making up a long block. These coefficients represent the transform of a windowed sinusoidal code frequency signal whose phase angle corresponds to the starting phase angle of the code sinusoid in that short block.
Although the foregoing describes an exemplary synthesizer 3916 that generates a sine wave or data representing a sine wave, other implementations of the synthesizer are possible. For example, rather than generating a sine wave, another exemplary synthesizer may output Fourier coefficients in the frequency domain that are used to adjust the amplitudes of particular frequencies of the audio provided to the combiner 3920. In this manner, the frequency spectrum of the audio may be adjusted to include the requisite sinusoids.
The three complex, amplitude-adjusted Fourier coefficients corresponding to the symbol to be transmitted are provided from the synthesizer 3916 to the inverse Fourier transform 3918, which converts the coefficients into time-domain signals of the prescribed frequencies and amplitudes and passes them to the combiner 3920 for insertion into the audio. The combiner 3920 also receives the audio. Specifically, the combiner 3920 inserts the signal from the inverse Fourier transform 3918 into one long block of audio samples. As described above, at a sampling rate of 48 kHz, a long block is 9,216 audio samples. In the example provided, the synchronization symbol and the 49 bits of information require a total of eight long blocks, and because each long block is 9,216 audio samples, only 73,728 samples of the audio 3804 are encoded for a given message. However, because messages start every 2 seconds, i.e., every 96,000 samples, the samples at the end of each 96,000-sample interval are not encoded. The combining may take place in the digital domain or in the analog domain.
In the case of the pre-existing code flag, however, the pre-existing code flag is inserted into the audio 3804 after the last symbol of the previously inserted message (i.e., after the long block carrying the seven-bit message information). Thus, the insertion of the pre-existing code flag information begins at sample 73,729 and lasts for two long blocks, or 18,432 samples. Accordingly, when the pre-existing code flag information is used, a smaller portion of the 96,000 audio samples 3804 remains unencoded.
The masking lineup 3906 includes an overlapping short block maker 3940 that makes short blocks of 512 audio samples, of which 256 samples are old and 256 samples are new. That is, the overlapping short block maker 3940 makes blocks of 512 samples, into which 256 new samples are shifted as the 256 oldest samples are shifted out. For example, when a first set of 256 samples enters the buffer, the oldest 256 samples are shifted out of the buffer. In a subsequent iteration, the first set of 256 samples is shifted into the older half of the buffer and 256 new samples are shifted into the buffer. Each time a new short block is formed by shifting in 256 new samples and removing the 256 oldest samples, the new short block is provided to a masking evaluator 3942. The 512-sample blocks output by the overlapping short block maker 3940 are multiplied by an appropriate window function so that an "overlap-and-add" operation restores the audio samples to their correct values at the output (see the sketch below). The synthesized code signal to be added to the audio signal is similarly windowed, to prevent abrupt transitions at block edges when the code amplitude changes from one 512-sample block to the next overlapping 512-sample block; such transitions, if present, would be audible.
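The following sketch illustrates one way such an overlapping short block maker could operate. The Hann window is an assumption made for this example; the text does not name the window function used.

```python
import numpy as np

# Sketch of the overlapping short block maker 3940: 512-sample blocks with
# 50% overlap (256 old samples, 256 new), each multiplied by a window so
# that a later overlap-and-add restores the samples.

def overlapping_short_blocks(samples, block=512, hop=256):
    window = np.hanning(block)  # assumed window; any suitable window works
    for start in range(0, len(samples) - block + 1, hop):
        yield window * samples[start:start + block]

audio = np.random.randn(9_216)  # one long block of stand-in audio
blocks = list(overlapping_short_blocks(audio))
```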
The masking evaluator 3942 receives the samples of an overlapping short block (e.g., 512 samples) and determines the ability of that block to hide the code frequencies from human hearing. That is, the masking evaluator 3942 determines whether the code frequencies can be hidden within the audio represented by the short block by evaluating each critical band of the audio to determine its energy, determining the noise-like or tonal properties of each critical band, and determining the overall ability of the critical bands to mask the code frequencies. According to the illustrated example, the bandwidth of the critical bands increases with frequency. If the masking evaluator 3942 determines that the code frequencies can be hidden in the audio 3804, the masking evaluator 3942 indicates the amplitudes at which the code frequencies can be inserted into the audio 3804 while still remaining hidden, and provides this amplitude information to the synthesizer 3916.
Some example masking evaluators 3942 perform the masking evaluation by determining the maximum change in energy Eb, or masking energy level, that can occur at each critical frequency band without the change being audible to a listener. The masking evaluation performed by the masking evaluator 3942 may be carried out, for example, as outlined in the Moving Picture Experts Group - Advanced Audio Coding (MPEG-AAC) audio compression standard, ISO/IEC 13818-7:1997. The acoustic energy in each critical band affects the masking energy of its neighbors, and algorithms for calculating the masking effect are described in standards documents such as ISO/IEC 13818-7:1997. These analyses can be used to determine, for each short block, the masking contribution due to tonality (e.g., how much the audio is evaluated to be like a tone) and noise-like characteristics (i.e., how much the audio is evaluated to be like noise). Further analysis can evaluate temporal masking, which extends the masking capability of the audio over a short time, typically 50 to 100 milliseconds (ms). The analyses performed by the masking evaluator 3942 thus determine, on a per-critical-band basis, the amplitudes of code frequencies that can be added to the audio 3804 without producing any perceptible audio degradation (e.g., while remaining inaudible).
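For illustration only, the following heavily simplified sketch estimates an allowed code amplitude per band from the band energy Eb. The band edges and the 20 dB offset are assumptions standing in for the full MPEG-AAC psychoacoustic model, which also accounts for tonality, inter-band spreading, and temporal masking.

```python
import numpy as np

# Heavily simplified per-critical-band masking estimate (sketch only).

def band_code_amplitudes(short_block, band_edges, offset_db=20.0):
    spectrum = np.fft.rfft(short_block * np.hanning(len(short_block)))
    power = np.abs(spectrum) ** 2
    amplitudes = []
    for lo, hi in band_edges:
        band_energy = power[lo:hi].sum()  # energy E_b in the band
        amplitudes.append(np.sqrt(band_energy) * 10 ** (-offset_db / 20.0))
    return amplitudes                      # allowed code amplitudes per band

edges = [(5, 10), (10, 16), (16, 24)]      # illustrative FFT-bin band edges
print(band_code_amplitudes(np.random.randn(512), edges))
```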
Because a given block of 256 samples appears both at the end of one short block and at the beginning of the next short block, the masking evaluator 3942 performs two masking evaluations that include any given 256-sample block. The amplitude indication provided to the synthesizer 3916 is a composite of those two evaluations, and the indication is timed such that the code amplitude applies to the corresponding 256 samples as they arrive at the combiner 3920.
Referring now to figs. 40-42, an exemplary LUT 3932 is shown that includes one column 4002 representing symbols and seven columns 4004, 4006, 4008, 4010, 4012, 4014, and 4016 representing numbered code frequency indices. The exemplary LUT 3932 of figs. 40-42 includes 129 rows, of which 128 rows are used to represent data symbols and one row is used to represent the synchronization symbol. Because the exemplary LUT 3932 includes 128 different data symbols, data may be transmitted at a rate of 7 bits per symbol. The frequency indices in the table may range from 180 to 656 and are based on a long block of 9,216 samples and a sampling rate of 48 kHz. The frequencies corresponding to these indices therefore lie between 937.5 Hz and approximately 3,416.7 Hz, which falls within the range audible to humans. The process of generating a LUT such as the exemplary LUT 3932 of figs. 40-42 is described in conjunction with figs. 44-47.
Although an exemplary LUT 3932 is shown in figs. 40-42, other sampling rates and frequency indices may be used to represent the symbols. For example, frequency indices may be selected to correspond to other frequency ranges that fall within the range audible to humans. One or more frequency ranges may be excluded, for example, to avoid interference with frequencies used to carry other codes and/or watermarks. Moreover, the frequency ranges selected and/or used need not be contiguous. In some examples, frequencies in the ranges of 0.8 kHz to 1.03 kHz and 2.9 kHz to 4.6 kHz are used. In other examples, frequencies in the ranges of 0.75 kHz to 1.03 kHz and 2.9 kHz to 4.4 kHz are used.
In some example operations of the code frequency selector 3914, the symbol 25 (e.g., the binary value 0011001) is received from the symbol selector 3912. The code frequency selector 3914 accesses the LUT 3932 and reads row 25 of the symbol column 4002. From this row, the code frequency selector 3914 reads the code frequency indices 217, 288, 325, 403, 512, 548, and 655 that are to be emphasized in the audio 3804 to send the symbol 25 to the decoder. The code frequency selector 3914 then provides an indication of these indices to the synthesizer 3916, which synthesizes the code signal by outputting Fourier coefficients corresponding to these indices.
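This worked example can be expressed as a short sketch. The single LUT row is copied from the text, and the index-to-hertz conversion follows from the 9,216-sample long block and the 48 kHz sampling rate; a full LUT would hold 129 rows.

```python
# Sketch of the code-frequency selection for symbol 25 described above.

LUT = {25: (217, 288, 325, 403, 512, 548, 655)}  # symbol -> frequency indices

LONG_BLOCK = 9_216
SAMPLE_RATE = 48_000
BIN_HZ = SAMPLE_RATE / LONG_BLOCK                # ~5.208 Hz per index

def code_frequencies_hz(symbol):
    return [round(index * BIN_HZ, 1) for index in LUT[symbol]]

print(code_frequencies_hz(25))
# [1130.2, 1500.0, 1692.7, 2099.0, 2666.7, 2854.2, 3411.5]
```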
The combiner 3920 receives both the synthesized code signal (i.e., the output of the inverse Fourier transform 3918) and the audio 3804, and combines them to form the encoded audio. The combiner 3920 may combine the code signal and the audio 3804 in analog or digital form. If the combiner 3920 performs digital combining, the code signal may be combined with a digitized (sampled) version of the audio 3804. For example, a block of audio in digital form may be combined with a sine wave in digital form. Alternatively, the combination may be performed in the frequency domain, wherein the frequency coefficients of the audio are adjusted in accordance with the frequency coefficients representing the sinusoids. As a further alternative, the sine wave and the audio may be combined in analog form. The encoded audio may be output from the combiner 3920 in analog or digital form. If the output of the combiner 3920 is digital, it may subsequently be converted to analog form before being coupled to the transmitter 3806.
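A minimal time-domain sketch of such combining for one long block follows; the single fixed amplitude is an assumption standing in for the per-band masking amplitudes, and the windowing discussed above is omitted for brevity.

```python
import numpy as np

# Sketch of time-domain combining: synthesize sinusoids at the selected
# code frequencies and add them to the host audio for one long block.

LONG_BLOCK, SAMPLE_RATE = 9_216, 48_000

def synthesize_code(freq_indices, amplitude=0.01):
    t = np.arange(LONG_BLOCK) / SAMPLE_RATE
    code = np.zeros(LONG_BLOCK)
    for index in freq_indices:
        hz = index * SAMPLE_RATE / LONG_BLOCK   # index -> Hz
        code += amplitude * np.sin(2 * np.pi * hz * t)
    return code

audio_block = np.random.randn(LONG_BLOCK)       # stand-in host audio
encoded_block = audio_block + synthesize_code((217, 288, 325, 403, 512, 548, 655))
```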
Although an example manner of implementing the example encoder 3802 of fig. 38 has been illustrated in fig. 39, one or more of the interfaces, data structures, elements, processes and/or devices illustrated in fig. 39 may be combined, divided, rearranged, omitted, eliminated and/or implemented in any other way. Further, the example message generator 3910, the example symbol selector 3912, the example code frequency selector 3914, the example code signal synthesizer 3916, the example inverse Fourier transform 3918, the example combiner 3920, the example prior code detector 3904, the example overlapping short block maker 3940, the example masking evaluator 3942, and/or, more generally, the example encoder 3802 of fig. 39 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example message generator 3910, the example symbol selector 3912, the example code frequency selector 3914, the example code signal synthesizer 3916, the example inverse Fourier transform 3918, the example combiner 3920, the example prior code detector 3904, the example overlapping short block maker 3940, the example masking evaluator 3942, and/or, more generally, the example encoder 3802 may be implemented by one or more circuits, programmable processors, ASICs, PLDs, FPLDs, and/or FPGAs, among others. When any apparatus claim of this patent incorporating one or more of these elements is read to cover a purely software and/or firmware implementation, at least one of the example message generator 3910, the example symbol selector 3912, the example code frequency selector 3914, the example code signal synthesizer 3916, the example inverse Fourier transform 3918, the example combiner 3920, the example prior code detector 3904, the example overlapping short block maker 3940, the example masking evaluator 3942, and/or, more generally, the example encoder 3802 is hereby expressly defined to include a tangible article of manufacture, such as the tangible computer-readable media storing firmware and/or software described above in connection with fig. 17. Further, the example encoder 3802 may include interfaces, data structures, elements, processes and/or devices instead of, or in addition to, those illustrated in fig. 39, and/or may include more than one of any or all of the illustrated interfaces, data structures, elements, processes and/or devices.
Fig. 43 illustrates exemplary machine-accessible instructions that may be executed to implement the exemplary encoder 3802 of figs. 38 and 39. A processor, controller, and/or any other suitable processing device may be used and/or programmed to execute the example machine-accessible instructions of fig. 43. For example, the machine-accessible instructions of fig. 43 may be implemented as coded instructions stored on any combination of tangible articles of manufacture, such as the tangible computer-readable media discussed above in connection with fig. 17. Machine-readable instructions comprise instructions and data that cause a processor, a computer, and/or a machine having a processor (e.g., the example processor platform P100 discussed above in connection with fig. 24) to perform one or more particular processes. Alternatively, some or all of the example machine-accessible instructions of fig. 43 may be implemented using any combination of ASICs, PLDs, FPLDs, FPGAs, discrete logic, hardware, firmware, and the like. Additionally, some or all of the example processes of fig. 43 may be implemented manually or as any combination of the foregoing techniques, e.g., any combination of firmware, software, discrete logic and/or hardware. In addition, many other methods of implementing the example operations of fig. 43 may be employed. For example, the order of execution of the blocks may be changed, and/or one or more of the blocks described may be changed, eliminated, sub-divided, or combined. Additionally, any or all of the example machine-accessible instructions of fig. 43 may be executed sequentially and/or in parallel by, for example, separate processing threads, processors, devices, discrete logic, circuits, and the like.
The example process 4300 of fig. 43 begins when audio samples to be encoded are received (block 4302). Next, the process 4300 determines whether the received samples have been previously encoded (block 4304). This determination may be performed, for example, by the prior code detector 3904 of fig. 39 or by any suitable decoder configured to examine the audio to be encoded for evidence of a previous encoding.
If the received samples were not previously encoded (block 4304), the process 4300 generates a communication message (block 4306), such as one having the format shown at reference numeral 3922 in fig. 39. In one particular example, when the audio has not been previously encoded, the communication message may include a synchronization portion and one or more portions including data bits. Communication message generation may be performed, for example, by the message generator 3910 of fig. 39.
The communication message is then mapped to symbols (block 4308). If the synchronization information is already a symbol, there is no need to map it to a symbol. If a portion of the communication message is a string of bits, such bits or groups of bits may each be represented by one symbol. As described above in connection with the symbol selector 3912, the mapping (block 4308) may be performed by converting bits to symbols using one or more tables or coding schemes. Such techniques may include, for example, the use of error correction coding to increase message robustness through coding gain. In one specific example implementation in which the symbol space accommodates 128 data symbols, seven bits are converted to one symbol. Of course, other numbers of bits may be processed depending on many factors, including the available symbol space, error correction coding, and the like.
After the communication symbols are selected (block 4308), the process 4300 selects the LUT to be used to determine the code frequencies that represent each symbol (block 4310). In some examples, the selected LUT may be the exemplary LUT 3932 of figs. 40-42 or any other suitable LUT, including any LUT generated as described in connection with figs. 44-46. The selection of the LUT may be based on a number of factors, including the synchronization symbol selected during generation of the communication message (block 4306).
After the symbols are generated (block 4308) and the LUT is selected (block 4310), the symbols are mapped to code frequencies using the selected LUT (block 4312). In some examples in which the LUT 3932 of figs. 40-42 is selected, a symbol such as 35 would be mapped to the frequency indices 218, 245, 360, 438, 476, 541, and 651. The data space in the LUT runs from symbol 0 to symbol 127, and symbol 128, which uses a unique set of code frequencies that do not coincide with any other code frequencies in the table, is used as the synchronization symbol. The selection (block 4310) and mapping (block 4312) may be performed, for example, by the code frequency selector 3914 of fig. 39. After the code frequencies are selected, an indication of the code frequencies is provided to, for example, the synthesizer 3916 of fig. 39.
Next, a code signal including the code frequencies is synthesized at amplitudes determined by the masking evaluation (block 4314), which is described in conjunction with blocks 3940 and 3942 of fig. 39 and with the remainder of the process 4300 below. In some examples, the synthesis of the code frequency signal may be performed by providing appropriately scaled Fourier coefficients to an inverse Fourier transform process. For example, three Fourier coefficients may be output to represent each code frequency in the code signal. The code frequencies can thus be synthesized by inverse Fourier transformation, with the synthesized frequencies windowed to prevent spill-over into other portions of the signal into which the code frequency signal is embedded. Exemplary structures that may be used to perform the synthesis of block 4314 are shown at blocks 3916 and 3918 of fig. 39. Of course, other implementations and structures are possible.
After the code signal including the code frequencies is synthesized, the code signal is combined with the audio samples (block 4316). As described in connection with fig. 39, the combination of the code signal and the audio inserts one symbol into each long block of audio samples. Thus, to transmit one synchronization symbol and 49 data bits, the information is encoded into eight long blocks of audio: one long block for the synchronization symbol and one long block for each seven bits of data (assuming an encoding of seven bits per symbol). Messages are inserted into the audio at two-second intervals. Thus, the eight long blocks of audio immediately after the beginning of a message are encoded, and the remaining long blocks that make up the rest of the two seconds of audio are not encoded.
The code signal may be inserted into the audio by adding samples of the code signal to samples of the host audio signal, with such addition being made in the analog or digital domain. Alternatively, with proper frequency alignment and registration, the frequency components of the audio signal may be adjusted in the frequency domain and the adjusted spectrum may be converted back to the time domain.
The foregoing describes the operation of the process 4300 when the process determines that the received audio samples have not been previously encoded (block 4304). However, where a portion of media has passed through a distribution chain and was encoded as it was processed, the received audio samples processed at block 4304 already include a code. For example, a local television station using a free news clip from CNN in a local news broadcast might otherwise not receive viewing credit because of the previous encoding of the CNN clip. In such cases, additional information is added to the local news broadcast in the form of pre-existing code flag information. If the received audio samples have been previously encoded (block 4304), the process generates pre-existing code flag information (block 4318). Generating the pre-existing code flag information may include generating a pre-existing code flag synchronization symbol and, for example, seven bits of data (which would be represented by a single data symbol). The data symbol may represent a station identification, a time, or any other suitable information. For example, a Media Monitoring Site (MMS) may be programmed to detect the pre-existing code flag information to credit the station identified therein.
After the pre-existing code flag information is generated (block 4318), the process 4300 selects a pre-existing code flag LUT to be used to identify the code frequencies representing the pre-existing code flag information (block 4320). In some examples, the pre-existing code flag LUT may be different from the other LUTs used in non-pre-existing-code conditions. For example, the pre-existing code flag synchronization symbol may be represented by the code frequency indices 220, 292, 364, 436, 508, 580, and 652.
After the pre-existing code flag information is generated (block 4318) and the pre-existing code flag LUT is selected (block 4320), the pre-existing code flag symbols are mapped to code frequencies (block 4312), and the remainder of the processing proceeds as described above.
At some time before the code signal is synthesized (block 4314), the process 4300 performs a masking evaluation to determine the amplitude at which the code signal should be generated so that the code signal remains inaudible or substantially inaudible to a human listener. To that end, the process 4300 generates overlapping short blocks of audio samples, each short block including 512 audio samples (block 4322). As described above, each overlapping short block includes 50% old samples and 50% newly received samples. This operation may be performed, for example, by the overlapping short block maker 3940 of fig. 39.
After an overlapping short block is generated (block 4322), a masking evaluation is performed on the short block (block 4324). This may be performed, for example, as described in connection with block 3942 of fig. 39. The results of the masking evaluation are used by the process 4300 at block 4314 to determine the amplitude of the code signal to be synthesized. The overlapping short block approach produces two masking evaluations for any particular 256 samples of audio (one when the 256 samples are the "new samples" and one when they are the "old samples"), and the result provided to block 4314 of the process 4300 may be a composite of these two evaluations. Of course, the timing of the process 4300 is such that the masking evaluations for a particular audio block are used to determine the code amplitude for that same audio block.
Look-up table generation
A system 4400 for constructing one or more LUTs in which code frequencies correspond to symbols can be implemented using hardware, software, a combination of hardware and software, firmware, and the like. The system 4400 of fig. 44 may be used to generate any number of LUTs, such as the LUT of figs. 40-42. The system 4400, operating as described below in conjunction with figs. 44 and 45, generates a code frequency index LUT in which: (1) no two symbols of the table share more than one common frequency index, (2) no frequency index appears more than once within a critical band, the critical bands being as defined by the MPEG-AAC compression standard ISO/IEC 13818-7:1997, and (3) code frequencies in adjacent critical bands are not used to represent a single symbol. The third criterion helps to ensure that audio quality is not compromised during the audio encoding process.
The critical band pair definer 4402 defines a plurality (P) of critical band pairs. For example, referring to fig. 46, the table 4600 includes columns representing the AAC critical band index 4602, the range of short block indices 4604 for each AAC index, and the range of long block indices 4606 for each AAC index. In some examples, the value of P may be seven, thus forming seven critical band pairs from the AAC indices. Fig. 47 shows the frequency relationships among the AAC indices. According to an example, as shown at reference numeral 4702 in fig. 47 (where the frequencies of the critical band pairs are shown separated by dashed lines), the AAC indices may be paired as follows: five and six, seven and eight, nine and ten, eleven and twelve, thirteen and fourteen, fifteen and sixteen, and seventeen and eighteen. AAC index seventeen spans a wide range of frequencies, and index seventeen is therefore shown twice, once for its low portion and once for its high portion.
The frequency definer 4404 defines a plurality (N) of frequencies for use in the respective critical band pairs. In some examples, the value of N is sixteen, indicating that there are sixteen data locations in the combination of critical bands forming each critical band pair. Reference numeral 4704 in fig. 47 identifies seventeen frequency locations; the circled location four is reserved for synchronization information and is therefore not used for data.
The number generator 4406 generates candidate numbers whose digits correspond to the frequency locations in the critical band pairs defined by the critical band pair definer 4402. In some examples, the number generator 4406 generates all N^P P-digit, base-N numbers. For example, if N is 16 and P is 7, the number generator generates the numbers 0 through 268,435,455, which may equivalently be expressed in base 16 (hexadecimal) as the values 0 through FFFFFFF.
The redundancy reducer 4408 then eliminates from the generated list all numbers that share more than one common digit, in the same digit position, with another number. This ensures compliance with criterion (1) above, since these numbers will determine the frequencies selected to represent the symbols, as described below. The excess reducer 4410 may then further reduce the remaining numbers in the list to the number of symbols required. For example, if the symbol space is 129 symbols, the remaining numbers are reduced to 129. This reduction may be performed randomly, by selecting the remaining numbers with the largest Euclidean distance, in a pseudo-random manner, or by any other suitable data reduction technique.
After the above reduction, the count of numbers in the list equals the number of symbols in the symbol space. The code frequency definer 4412 then defines, from each remaining base-N number, the frequency indices that represent a symbol in the critical band pairs. For example, referring to fig. 47, the hexadecimal number F1E4B0F is a seven-digit, base-16 number, commensurate with P = 7 and N = 16. The first digit of the hexadecimal number maps to a frequency location in the first critical band pair, the second digit maps to the second critical band pair, and so on. The digits of each remaining number thus designate the frequency indices representing the symbol corresponding to that number (here, the hexadecimal number F1E4B0F).
Using the first hexadecimal digit as an example of mapping to a particular frequency location, the decimal value of Fh is 15. Because location four of each critical band pair is reserved for non-data information, any hexadecimal digit value of four or greater is increased by one (decimal). Thus, 15 becomes 16, and location 16 is designated (as shown by the asterisk in fig. 47) as the code frequency location in the first critical band pair representing the symbol corresponding to the hexadecimal number F1E4B0F. Although not shown in fig. 47, the location corresponding to the digit 1 (e.g., the second-leftmost data location) in the next critical band pair would likewise be used to represent the hexadecimal number F1E4B0F.
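A minimal sketch of this digit-to-location mapping follows; it assumes the reserved-location rule applies to digit values of four and above, so that reserved location four is always skipped.

```python
# Sketch of mapping one hexadecimal digit to a frequency location within a
# critical band pair, skipping reserved location four as described above.

def digit_to_location(hex_digit: str) -> int:
    value = int(hex_digit, 16)
    return value + 1 if value >= 4 else value  # location 4 is reserved

assert digit_to_location("F") == 16  # the F -> 16 example from the text
assert digit_to_location("3") == 3   # digits below four map directly
```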
The LUT populator 4414 receives the symbol indications and the corresponding code frequency indications from the code frequency definer 4412 and populates this information into the LUT.
Although an example manner of implementing the LUT generator 4400 is shown in fig. 44, one or more of the interfaces, data structures, elements, processes and/or devices shown in fig. 44 may be combined, divided, rearranged, omitted, eliminated and/or implemented in any other way. Further, the example critical band pair definer 4402, the example frequency definer 4404, the example number generator 4406, the example redundancy reducer 4408, the example excess reducer 4410, the example code frequency definer 4412, the example LUT populator 4414, and/or, more generally, the example system 4400 may be implemented by hardware, software, firmware, and/or any combination of hardware, software, and/or firmware. Thus, for example, any of the example critical band pair definer 4402, the example frequency definer 4404, the example number generator 4406, the example redundancy reducer 4408, the example excess reducer 4410, the example code frequency definer 4412, the example LUT populator 4414, and/or, more generally, the example system 4400 may be implemented by one or more circuits, programmable processors, ASICs, PLDs, FPLDs, and/or FPGAs, among others. When any device claim of this patent incorporating one or more of these elements is read to cover a purely software and/or firmware implementation, at least one of the example critical band pair definer 4402, the example frequency definer 4404, the example number generator 4406, the example redundancy reducer 4408, the example excess reducer 4410, the example code frequency definer 4412, the example LUT populator 4414, and/or, more generally, the example system 4400 is hereby expressly defined to include a tangible article of manufacture, such as the tangible computer-readable media storing firmware and/or software described above in connection with fig. 17. Moreover, the example system 4400 may include interfaces, data structures, elements, processes and/or devices instead of, or in addition to, those illustrated in fig. 44, and/or may include more than one of any or all of the illustrated interfaces, data structures, elements, processes and/or devices.
Fig. 45 illustrates example machine-accessible instructions that may be executed to implement the example system 4400 of fig. 44 and/or, more generally, to generate a code frequency index table. A processor, controller, and/or any other suitable processing device may be used and/or programmed to execute the example machine-accessible instructions of fig. 45. For example, the machine-accessible instructions of fig. 45 may be implemented as coded instructions stored on any combination of tangible articles of manufacture, such as the tangible computer-readable media discussed above in connection with fig. 17. Machine-readable instructions comprise instructions and data that cause a processor, a computer, and/or a machine having a processor (e.g., the example processor platform P100 discussed above in connection with fig. 24) to perform one or more particular processes. Alternatively, some or all of the example machine-accessible instructions of fig. 45 may be implemented using any combination of ASICs, PLDs, FPLDs, FPGAs, discrete logic, hardware, firmware, and the like. Additionally, some or all of the example processes of fig. 45 may be implemented manually or as any combination of the foregoing techniques, e.g., any combination of firmware, software, discrete logic and/or hardware. In addition, many other methods of implementing the example operations of fig. 45 may be employed. For example, the order of execution of the blocks may be changed, and/or one or more of the blocks described may be changed, eliminated, sub-divided, or combined. Additionally, any or all of the example machine-accessible instructions of fig. 45 may be executed sequentially and/or in parallel by, for example, separate processing threads, processors, devices, discrete logic, circuits, and the like.
The example machine-accessible instructions of fig. 45 may be used to generate any number of LUTs, such as the LUT of figs. 40-42. Although an exemplary process 4500 is shown, other processes may be used. The result of the process 4500 is a code frequency index LUT in which: (1) no two symbols of the table share more than one common frequency index, (2) no frequency index appears more than once within a critical band, the critical bands being as defined by the MPEG-AAC compression standard ISO/IEC 13818-7:1997, and (3) code frequencies in adjacent critical bands are not used to represent a single symbol. The third criterion helps to ensure that audio quality is not compromised during the audio encoding process.
The exemplary process 4500 of fig. 45 begins by defining a plurality (P) of critical band pairs (block 4502). For example, referring to fig. 46, the table 4600 includes columns representing the AAC critical band index 4602, the range of short block indices 4604 for each AAC index, and the range of long block indices 4606 for each AAC index. In some examples, the value of P may be seven, thus forming seven critical band pairs from the AAC indices (block 4502). Fig. 47 shows the frequency relationships among the AAC indices. According to an example, as shown at reference numeral 4702 in fig. 47 (where the frequencies of the critical band pairs are shown separated by dashed lines), the AAC indices may be paired as follows: five and six, seven and eight, nine and ten, eleven and twelve, thirteen and fourteen, fifteen and sixteen, and seventeen and eighteen. AAC index seventeen spans a wide range of frequencies, and index seventeen is therefore shown twice, once for its low portion and once for its high portion.
After the critical band pairs are defined (block 4502), a plurality (N) of frequencies is selected for use in each critical band pair (block 4504). In some examples, the value of N is sixteen, indicating that there are sixteen data locations in the combination of critical bands forming each critical band pair. As shown at reference numeral 4704 in fig. 47, seventeen frequency locations are provided; the circled location four is reserved for synchronization information and is therefore not used for data.
After defining the number of critical band pairs and the number of frequency locations in those pairs, the process 4500 generates all P-digit, base-N numbers that do not share more than one common digit, in the same digit position, with one another (block 4506). For example, if N is 16 and P is 7, the process considers the numbers 0 through 268,435,455, or, in base 16 (hexadecimal), the values 0 through FFFFFFF, and excludes any number that shares more than one common hexadecimal digit position with a number already retained. This ensures compliance with criterion (1) above, since these numbers will determine the frequencies selected to represent the symbols, as described below.
According to an exemplary process for determining a set of numbers that comply with criterion (1) above (and any other desired criteria), the numbers from 0 to N^P - 1 are tested. First, the value zero is stored as the first number of the result set R. Then, each number from 1 to N^P - 1 is examined to determine whether it satisfies criterion (1) when compared with the numbers already in R; each number that satisfies criterion (1) against all current members of R is added to the result set. Specifically, to test a number K, each hexadecimal digit of interest in K is compared with the corresponding hexadecimal digit of interest in each member M of the current result set. Across the seven comparisons, no more than one of the hexadecimal digits of K should equal the corresponding hexadecimal digit of M. After K has been compared with all numbers in the current result set, K is added to the result set R if it shares no more than one common hexadecimal digit with any number already in the set. The algorithm is repeated over the set of possible numbers until all values that satisfy criterion (1) have been identified, as illustrated in the sketch below.
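A brute-force sketch of this construction follows; small values of N and P are assumed so the example runs quickly, whereas the text uses N = 16 and P = 7.

```python
from itertools import product

# Brute-force sketch of building the result set R described above: keep
# P-digit, base-N numbers that share at most one same-position digit with
# every number already kept (criterion (1)).

def build_result_set(n=4, p=3):
    kept = []
    for candidate in product(range(n), repeat=p):
        if all(sum(a == b for a, b in zip(candidate, member)) <= 1
               for member in kept):
            kept.append(candidate)
    return kept

R = build_result_set()
print(len(R), R[:3])  # e.g., 16 [(0, 0, 0), (0, 1, 1), (0, 2, 2)]
```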
Although the foregoing describes an exemplary process for determining a set of numbers that satisfy criterion (1), any process or algorithm may be used, and the present disclosure is not limited to the process described above. For example, the process may use heuristics, rules, and the like to eliminate some numbers from the set before iterating over all numbers in the set. For instance, all numbers whose relevant digits begin with two 0s, two 1s, two 2s, etc., and end with two 0s, two 1s, two 2s, etc., can be removed immediately because such numbers necessarily have a Hamming distance of less than 6 from another such number. Additionally or alternatively, the exemplary process need not be repeated over the entire set of possible numbers. For example, the process may stop once a sufficient count of numbers (e.g., 128 when 128 symbols are desired) has been found. In another implementation, the process may randomly select a first value from the set of possible values and then iteratively or randomly search the remaining numbers in the set until values meeting the desired criteria (e.g., criterion (1)) are found.
Next, the process 4500 reduces the generated values to the desired count of numbers (block 4510). For example, if the symbol space is 129 symbols, the remaining numbers can be reduced to 129. The reduction may be performed randomly, by selecting the remaining numbers with the largest Euclidean distance, or by any other suitable data reduction technique.
After the above reduction, the count of numbers in the list equals the number of symbols in the symbol space. The remaining base-N numbers are then defined to represent the frequency indices that represent the symbols in the critical band pairs (block 4512). For example, referring to fig. 47, the hexadecimal number F1E4B0F is a seven-digit, base-16 number, commensurate with P = 7 and N = 16. The first digit of the hexadecimal number maps to a frequency location in the first critical band pair, the second digit maps to the second critical band pair, and so on. The digits of each number thus designate the frequency indices representing the symbol corresponding to that number (here, the hexadecimal number F1E4B0F).
Using the first hexadecimal digit as an example of mapping to a particular frequency location, the decimal value of Fh is 15. Because location four of each critical band pair is reserved for non-data information, any hexadecimal digit value of four or greater is increased by one (decimal). Thus, 15 becomes 16, and location 16 is designated (as shown by the asterisk in fig. 47) as the code frequency location in the first critical band pair representing the symbol corresponding to the hexadecimal number F1E4B0F. Although not shown in fig. 47, the location corresponding to the digit 1 (e.g., the second-leftmost data location) in the next critical band pair would likewise be used to represent the hexadecimal number F1E4B0F.
After the representative code frequencies are specified (block 4512), the numbers are populated into the LUT (block 4514).
Of course, the systems and processes described in connection with figs. 44-47 are merely examples of systems and processes that may be used to generate LUTs having the desired characteristics for the encoding and decoding systems described herein. Other configurations and processes may be used; for example, a LUT may be generated using other code frequency plans.
Audio decoding
Fig. 48 illustrates an example manner of decoding Nielsen codes and/or implementing the example decoder 3816 of fig. 38, the example decoder 310 of fig. 3, and/or the example auxiliary content trigger 180 of figs. 1 and 2. Although the decoder illustrated in fig. 48 may be used to implement any of the decoders 3816, 310, and 180, for ease of discussion it will be referred to as the decoder 3816. In some examples, two instances of the example decoder 3816 may be implemented: a first instance with the superimposer 4804 enabled, to enhance the decoding of the station identifier and the coarse timestamp that increments once every 64 seconds, and a second instance with the superimposer 4804 disabled, to decode the variable data in the last 7-bit group 3928, which represents a time in seconds and changes from message to message. In other examples, a single decoder 3816 instance is implemented with the superimposer 4804 either enabled or disabled, as described below.
In general, the decoder 3816 detects the code signal that was inserted into the received audio at the encoder 3802 to form the encoded audio. That is, the decoder 3816 looks for a pattern of emphasis in the code frequencies it processes. Once the decoder 3816 determines which code frequencies are emphasized, it determines the symbols present within the encoded audio based on those emphasized code frequencies. The decoder 3816 may record the symbols, or may decode them into the codes that were previously provided to the encoder 3802 for insertion into the audio.
In one implementation, the exemplary decoder 3816 of fig. 48 may be implemented using, for example, a digital signal processor programmed with instructions to implement the components of the decoder 3816. Of course, any other implementation of the exemplary decoder 3816 is possible. For example, decoder 3816 may be implemented using one or more processors, programmable logic devices, or any suitable combination of hardware, software, and firmware.
As shown in fig. 48, the exemplary decoder 3816 includes a sampler 4802, which may be implemented using an analog-to-digital converter (A/D) or any other suitable technology, and which is provided with the encoded audio in analog format. As shown in fig. 38, the encoded audio may be provided by a wired or wireless connection to the receiver 3810. The sampler 4802 samples the encoded audio at a sampling frequency of, for example, 8 kHz or 12 kHz. Of course, other sampling frequencies may advantageously be selected to increase resolution or reduce the computational load of decoding. At a sampling frequency of 8 kHz, the Nyquist frequency is 4 kHz, and thus all embedded code signals represented in the exemplary LUT of figs. 40-41 are preserved because their spectral frequencies are below the Nyquist frequency. If a frequency plan including higher frequencies is used, a higher sampling frequency, such as 12 kHz, may be required to ensure that the Nyquist sampling criterion is met.
The samples from the sampler 4802 are provided to the superimposer 4804. In general, the superimposer 4804 emphasizes the code signal in the audio signal information by exploiting the fact that messages are repeated or substantially repeated (e.g., only the least significant bits change) for a period of time. For example, when the 42 bits of data 3926 in a message include a station identifier and a coarse timestamp that increments once every 64 seconds, those 42 bits (3926 of fig. 39) of the 49 bits (3926 and 3924) of the foregoing exemplary message of fig. 39 remain unchanged for 64 seconds (thirty-two 2-second message intervals). The variable data in the last 7-bit group 3932 represents increments in seconds and thus changes from message to message. The exemplary superimposer 4804 aggregates multiple blocks of audio signal information to emphasize the code signal. In an exemplary implementation, the superimposer 4804 includes a buffer for storing multiple blocks of audio samples. For example, if a complete message is embedded in 2 seconds of audio, the buffer may be 12 seconds long so as to store six messages. The exemplary superimposer 4804 additionally includes an adder that sums the audio signal information associated with the six messages and a divider that divides the sum by the number of repeated messages selected (e.g., six).
As an example, the watermarked signal y(t) may be represented as the sum of the host signal x(t) and the watermark w(t):
y(t)=x(t)+w(t)
in the time domain, the watermark may be repeated after a known period of time T:
w(t)=w(t-T)
According to an exemplary superposition method, the input signal y(t) is replaced by a superimposed signal S(t):

S(t) = (y(t) + y(t-T) + ... + y(t-nT)) / (n+1)
In the superimposed signal S(t), the contribution of the host signal is reduced because the sample values x(t), x(t-T), ..., x(t-nT) are independent if the period T is sufficiently large. At the same time, the contribution of the watermark, which is formed, for example, by in-phase sinusoids, is reinforced.
Let x(t), x(t-T), ..., x(t-nT) be independent random variables drawn from the same distribution X with zero mean E[X] = 0:

E[(x(t) + x(t-T) + ... + x(t-nT)) / (n+1)] = E[X] = 0

and

Var[(x(t) + x(t-T) + ... + x(t-nT)) / (n+1)] = Var[X] / (n+1)
thus, the underlying host signal contributions x (t), x (t-nT) will effectively cancel each other out, while the watermark is unchanged, allowing the watermark to be more easily detected.
In the example shown, the power of the host signal contribution in the resulting signal decreases linearly with the number of superimposed signals n (i.e., in proportion to 1/(n+1)). Thus, averaging over separate parts of the host signal reduces the influence of interference. The watermark is unaffected because it is always added in phase.
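The following Python sketch is a minimal numeric demonstration of this superposition, assuming a zero-mean host signal and a watermark repeated in phase every 16,000 samples (2 seconds at 8 kHz, as in the example above); the amplitudes and repetition count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 16_000                       # samples per repeated message (2 s at 8 kHz)
n_reps = 6                       # number of superimposed repetitions

t = np.arange(T)
watermark = 0.05 * np.sin(2 * np.pi * 1000 * t / 8000)   # repeated in phase
host = rng.normal(0.0, 1.0, size=T * n_reps)             # independent, zero mean
y = host + np.tile(watermark, n_reps)                     # y(t) = x(t) + w(t)

# S(t): average the n_reps blocks of length T
stacked = y.reshape(n_reps, T).mean(axis=0)

def power(sig):
    return float(np.mean(sig ** 2))

print("host power before/after:", power(host[:T]), power(stacked - watermark))
# The host contribution drops by roughly 1/n_reps; the in-phase watermark
# term is unchanged by the averaging.
```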
An exemplary process for implementing the superimposer 4804 is described in conjunction with fig. 50.
The decoder 3816 may additionally include a superimposer controller 4806 to control the operation of the superimposer 4804. The example superimposer controller 4806 receives a signal indicating whether the superimposer 4804 should be enabled or disabled. For example, the superimposer controller 4806 may receive the audio signal, determine whether the signal includes significant noise that would distort it, and enable the superimposer 4804 in response to that determination. In another implementation, the superimposer controller 4806 may receive a signal from a switch that can be manually controlled to enable or disable the superimposer 4804 based on how the decoder 3816 is deployed. For example, when the decoder 3816 is wired to the receiver 3810, or the microphone 3820 is placed near the speaker 3814, the superimposer controller 4806 may disable the superimposer 4804 because no superimposition is needed and superimposition would corrupt the rapidly changing data (e.g., the least significant bits of the timestamp) in the respective messages. Alternatively, the superimposer 4804 may be enabled by the superimposer controller 4806 when the decoder 3816 is distant from the speaker 3814 or is in another environment where significant interference may be expected. Furthermore, the superimposer 4804 may be disabled when a) the sampling rate accuracy of the sampler 4802 and/or the highest frequency used to transmit the message is such that superimposition would have limited effect, and/or b) the variable data in the last 7-bit group 3932, which represents time increments in seconds and varies from message to message, is to be decoded. Of course, the superimposer controller 4806 may apply any type of desired control.
The output of the superimposer 4804 is provided to a time-domain to frequency-domain converter 4808. The time-to-frequency domain converter 4808 may be implemented using a Fourier transform such as a DFT, or any other suitable technique for converting time-based information to frequency-based information. In some examples, the time-to-frequency domain converter 4808 may be implemented using a sliding long-block fast Fourier transform (FFT) in which the spectrum at the code frequencies of interest is calculated each time eight new samples are provided. In some examples, the time-to-frequency domain converter 4808 uses 1,536 samples of the encoded audio and determines the spectrum with 192 slides of eight samples each. The resolution of the spectrum produced by the time-to-frequency domain converter 4808 increases as the number of samples used to generate it increases. Thus, the number of samples processed by the time-domain to frequency-domain converter 4808 should be consistent with the resolution used to select the frequency indices in the tables of figs. 40-42.
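For illustration, the following Python sketch evaluates the spectrum only at a handful of code frequency bins as an analysis window slides forward eight samples at a time. It recomputes each bin directly rather than incrementally updating a sliding FFT, and the window length, hop, and bin indices are assumptions for the sketch, not values taken from fig. 48.

```python
import numpy as np

WINDOW = 1536          # analysis window length in samples (illustrative)
HOP = 8                # new samples per slide
CODE_BINS = np.array([220, 230, 245, 260, 282, 300, 325])  # hypothetical bins

def sliding_code_spectra(samples):
    """Yield (start index, |DFT| at the code bins) per window position."""
    n = np.arange(WINDOW)
    # One complex exponential per code bin: a per-bin DFT, cheaper than a
    # full FFT when only a few bins are needed.
    basis = np.exp(-2j * np.pi * np.outer(CODE_BINS, n) / WINDOW)
    for start in range(0, len(samples) - WINDOW + 1, HOP):
        block = samples[start:start + WINDOW]
        yield start, np.abs(basis @ block)

audio = np.random.default_rng(0).normal(size=8000)   # stand-in audio
spectra = list(sliding_code_spectra(audio))
print(len(spectra))   # (8000 - 1536) // 8 + 1 = 809 slides
```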
The spectrum produced by the time-to-frequency domain converter 4808 is passed to a critical band normalizer 4810, which normalizes the spectrum within each critical band. In other words, the frequency having the maximum amplitude in each critical band is set to 1, and all other frequencies within that critical band are normalized accordingly. For example, if critical band 1 includes frequencies having amplitudes of 112, 56, and 56, the critical band normalizer 4810 adjusts them to 1, 0.5, and 0.5. Of course, any desired maximum value may be used for normalization instead of 1. The critical band normalizer 4810 outputs a normalized score for each frequency of interest.
The spectrum of scores produced by the critical band normalizer 4810 is passed to the symbol scorer 4812, which calculates a total score for each possible symbol in the active symbol table. In an exemplary implementation, the symbol scorer 4812 iterates over each symbol in the symbol table and, for each frequency of interest of a particular symbol, sums the normalized scores from the critical band normalizer 4810 to generate that symbol's score. The symbol scorer 4812 outputs the score of each symbol to the maximum score selector 4814, which selects the symbol having the greatest score and outputs that symbol and its score.
The identified symbol and score from the maximum score selector 4814 are passed to the comparator 4816, which compares the score to a threshold. When the score exceeds the threshold, the comparator 4816 outputs the received symbol. When the score does not exceed the threshold, the comparator 4816 outputs an error indication, for example, a symbol designated to indicate an error (e.g., a symbol not included in the active symbol table). Thus, an error indication is provided when the message is corrupted such that no sufficiently large score (i.e., no score exceeding the threshold) is calculated for any symbol. In an exemplary implementation, the error indication may be provided to the superimposer controller 4806 to enable the superimposer 4804 when a threshold number of errors (e.g., a number of errors over a period of time, a number of consecutive errors, etc.) is identified.
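The following Python sketch is a minimal illustration of the chain formed by the critical band normalizer 4810, symbol scorer 4812, maximum score selector 4814, and comparator 4816; the band layout, symbol table, and score threshold are hypothetical.

```python
import numpy as np

ERROR_SYMBOL = -1       # stand-in for a symbol outside the active table
SCORE_THRESHOLD = 1.5   # hypothetical threshold for this two-band toy example

def normalize_bands(magnitudes, band_slices):
    """Scale each critical band so its maximum amplitude becomes 1."""
    scores = np.array(magnitudes, dtype=float)
    for band in band_slices:
        peak = scores[band].max()
        if peak > 0:
            scores[band] /= peak
    return scores

def best_symbol(scores, symbol_table):
    """symbol_table maps symbol -> frequency indices (one per band).
    Returns the highest-scoring symbol, or an error marker if the winning
    score does not exceed the threshold."""
    totals = {sym: scores[list(idx)].sum() for sym, idx in symbol_table.items()}
    winner = max(totals, key=totals.get)
    return winner if totals[winner] > SCORE_THRESHOLD else ERROR_SYMBOL

# Toy example: two bands of three frequencies each, two candidate symbols.
mags = [3.0, 1.0, 1.0, 0.5, 4.0, 0.5]
bands = [slice(0, 3), slice(3, 6)]
table = {7: (0, 4), 9: (2, 5)}    # symbol -> one frequency index per band
print(best_symbol(normalize_bands(mags, bands), table))   # 7
```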
The identified symbol or error from the comparator 4816 is passed to the circular buffer 4818 and the legacy code mark circular buffer 4820. An exemplary implementation of the circular buffer 4818 is described in connection with fig. 53. The exemplary circular buffer 4818 includes one circular buffer for each slide of the time-domain to frequency-domain converter 4808 (e.g., 192 buffers). Each of the circular buffers 4818 includes one storage location for each symbol block of the synchronization symbol and message (e.g., an eight-block message is stored in an eight-location circular buffer), so that an entire message can be stored in each circular buffer. Thus, as the audio samples are processed by the time-to-frequency domain converter 4808, each identified symbol is stored in the same location of its circular buffer until that location of every circular buffer has been filled; symbols are then stored in the next position of each circular buffer. In addition to storing symbols, the circular buffers 4818 may include locations for storing the sample index that identifies the samples of the received audio signal that produced the identified symbol.
The example legacy code mark circular buffer 4820 is implemented in the same manner as the circular buffer 4818, except that the legacy code mark circular buffer 4820 includes one location for the legacy code mark synchronization symbol and one location for each symbol of the legacy code mark message (e.g., a legacy code mark message comprising a synchronization symbol and one message symbol is stored in a two-location circular buffer). The legacy code mark circular buffer 4820 is filled simultaneously with, and in the same manner as, the circular buffer 4818.
The example message identifier 4822 searches the circular buffer 4818 and the legacy code mark circular buffer 4820 for synchronization symbols. In particular, the message identifier 4822 searches the circular buffer 4818 for a synchronization symbol and searches the legacy code mark circular buffer 4820 for a legacy code mark synchronization symbol. When a synchronization symbol is identified, the message identifier 4822 outputs the symbols following it (e.g., the seven symbols following the synchronization symbol in the circular buffer 4818, or the one symbol following the legacy code mark synchronization symbol in the legacy code mark circular buffer 4820). In addition, the sample index identifying the last audio signal sample is output.
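The following Python sketch illustrates the kind of search the message identifier 4822 performs on one circular buffer, assuming a synchronization symbol value of 128 and a seven-symbol message as in the examples herein; the buffer contents are toy values.

```python
SYNC_SYMBOL = 128       # assumed synchronization symbol value (per fig. 53)
MESSAGE_SYMBOLS = 7     # symbols following the synchronization symbol

def find_message(circular, write_pos):
    """circular: fixed-size list of symbols; write_pos: index of the most
    recently written location. Returns (message, found), reading the buffer
    in write order, oldest symbol first."""
    size = len(circular)
    # Reconstruct chronological order, starting just after the write position.
    ordered = [circular[(write_pos + 1 + i) % size] for i in range(size)]
    for i, sym in enumerate(ordered):
        if sym == SYNC_SYMBOL and i + MESSAGE_SYMBOLS < size:
            return ordered[i + 1:i + 1 + MESSAGE_SYMBOLS], True
    return [], False

buf = [111, 37, 23, 47, 0, 128, 57, 22]    # toy 8-location circular buffer
print(find_message(buf, write_pos=4))      # ([57, 22, 111, 37, 23, 47, 0], True)
```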
The message symbols and sample index output by the message identifier 4822 are passed to the verifier 4824, which verifies each message. The verifier 4824 includes a filter stack for storing a plurality of consecutively received messages. Because the messages are repeated (e.g., every 2 seconds, i.e., every 16,000 samples at 8 kHz), each message can be compared with the other messages in the filter stack, which are separated by approximately the number of audio samples in a single message, to determine whether there is a match. If there is a match or an approximate match, both messages are verified. If no match can be identified, the message is determined to be in error and is not output from the verifier 4824. In cases where a message may be affected by noise interference, a message may be considered a match when a subset of its symbols matches the same subset in another verified message. For example, a message may be identified as partially verified if four of its seven symbols match the same four symbols in another message that has already been verified. The sequence of repeated messages may then be observed to identify the mismatched symbols in the partially verified message.
Furthermore, a score may be calculated that represents the strength and/or the probability that the decoding decision is correct for each hypothesized long block analyzed during decoding. For example, for each of the seven frequencies that make up a potential code pattern, the power of that frequency is divided by the average power of the other code frequencies in its code band. By summing this value over the seven frequencies, the score for each potential pattern can be determined. The selected pattern is the code pattern having the highest score that exceeds a certain minimum threshold. To improve decoding accuracy, the score of the winning pattern may be combined with the scores of the same pattern at the long block positions exactly 3, 6, 9, etc. message slots away from the current position. If the long block is one of the six long blocks carrying the approximately constant portion of the message payload, the score of the winning pattern will be reinforced. However, such superposition does not help for the seventh long block, which contains information that changes from message to message. With this superposition, code detection accuracy can be increased by a factor of 2 or 3 without sensitivity to errors and/or jitter in the sampling rate. Other exemplary methods and apparatus for improving watermark decoding accuracy are described in U.S. patent application Ser. No. 12/604,176, entitled "Methods and Apparatus to Extract Data Encoded in Media Content," filed on 10/22/2009.
The verified message from the verifier 4824 is passed to the symbol-to-bit converter 4826, which converts each symbol into the corresponding data bits of the message using the active symbol table.
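As a minimal illustration, the sketch below recovers a message's data bits by table lookup; the table contents (each symbol mapped to its own 7-bit value) are hypothetical placeholders for an actual active symbol table.

```python
# Hypothetical active symbol table: symbol value -> the 7 data bits it encodes.
active_symbol_table = {sym: format(sym, "07b") for sym in range(128)}

def symbols_to_bits(message_symbols):
    """Convert a verified message's symbols into a bit string."""
    return "".join(active_symbol_table[s] for s in message_symbols)

print(symbols_to_bits([57, 22, 111]))   # '011100100101101101111'
```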
Although an example manner of implementing the example decoder 3816 of fig. 38 has been illustrated in fig. 48, one or more of the interfaces, data structures, elements, processes and/or devices illustrated in fig. 48 may be combined, divided, rearranged, omitted, eliminated and/or implemented in any other way. Further, the example sampler 4802, the example superimposer 4804, the example superimposer controller 4806, the example time-to-frequency domain converter 4808, the example critical band normalizer 4810, the example symbol scorer 4812, the example maximum score selector 4814, the example comparator 4816, the example circular buffer 4818, the example legacy code mark circular buffer 4820, the example message identifier 4822, the example verifier 4824, the example symbol-to-bit converter 4826, and/or, more generally, the example decoder 3816 of fig. 48 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example sampler 4802, the example superimposer 4804, the example superimposer controller 4806, the example time-to-frequency domain converter 4808, the example critical band normalizer 4810, the example symbol scorer 4812, the example maximum score selector 4814, the example comparator 4816, the example circular buffer 4818, the example legacy code mark circular buffer 4820, the example message identifier 4822, the example verifier 4824, the example symbol-to-bit converter 4826, and/or, more generally, the example decoder 3816 may be implemented by one or more circuits, programmable processors, ASICs, PLDs, FPLDs and/or FPGAs, etc. When any apparatus claim of this patent incorporating one or more of these elements is read to cover a purely software and/or firmware implementation, at least one of the example sampler 4802, the example superimposer 4804, the example superimposer controller 4806, the example time-to-frequency domain converter 4808, the example critical band normalizer 4810, the example symbol scorer 4812, the example maximum score selector 4814, the example comparator 4816, the example circular buffer 4818, the example legacy code mark circular buffer 4820, the example message identifier 4822, the example verifier 4824, the example symbol-to-bit converter 4826, and/or, more generally, the example decoder 3816 is hereby expressly defined to include a tangible article of manufacture, such as the tangible computer-readable media storing the firmware and/or software described above in connection with fig. 17. Moreover, the example decoder 3816 may include interfaces, data structures, elements, processes and/or devices instead of, or in addition to, those illustrated in fig. 48, and/or may include more than one of any or all of the illustrated interfaces, data structures, elements, processes and/or devices.
Figs. 49-52 and 55 illustrate example machine-accessible instructions that may be executed to implement the example decoder 3816 of fig. 48. A processor, a controller and/or any other suitable processing device may be used and/or programmed to execute the example machine-accessible instructions of figs. 49-52 and 55. For example, the machine-accessible instructions of figs. 49-52 and 55 may be embodied as coded instructions stored on any combination of tangible articles of manufacture, such as the tangible computer-readable media discussed above in connection with fig. 17. Machine-readable instructions comprise, for example, instructions and data that cause a processor, a computer and/or a machine having a processor (e.g., the example processor platform P100 discussed above in connection with fig. 24) to perform one or more particular processes. Alternatively, some or all of the example machine-accessible instructions of figs. 49-52 and 55 may be implemented using any combination of ASICs, PLDs, FPLDs, FPGAs, discrete logic, hardware, firmware, etc. Also, some or all of the example processes of figs. 49-52 and 55 may be implemented manually or as any combination of the foregoing techniques, for example, any combination of firmware, software, discrete logic and/or hardware. Further, many other methods of implementing the example operations of figs. 49-52 and 55 may be employed. For example, the order of execution of the blocks may be changed, and/or one or more of the blocks described may be changed, eliminated, subdivided or combined. Additionally, any or all of the example machine-accessible instructions of figs. 49-52 and 55 may be carried out sequentially and/or in parallel by, for example, separate processing threads, processors, devices, discrete logic, circuits, etc.
The example process 4900 of fig. 49 begins by sampling the audio (block 4902). The audio may be obtained via an audio sensor, a hard-wired connection, an audio file, or any other suitable technique. As described above, sampling may be performed at 8,000 Hz or any other suitable frequency.
As individual samples are obtained, a superimposer, such as the exemplary superimposer 4804 of fig. 48, aggregates the samples (block 4904). An exemplary process for performing the superimposition is described in connection with fig. 50.
The newly superimposed audio samples from the superimposer process (block 4904) are inserted into a buffer and the oldest audio samples are removed (block 4906). As individual samples are obtained, a sliding time-to-frequency conversion is performed on a sample set that includes a plurality of older samples and the newly added samples obtained at blocks 4902 and 4904 (block 4908). In some examples, a sliding FFT may be used to process streaming input samples comprising 9,215 old samples and one newly added sample. In some examples, using an FFT of 9,216 samples results in a resolution of 5.2 Hz.
After the spectrum is obtained through the time-to-frequency conversion (block 4908), the transmitted symbols are determined (block 4910). An exemplary process for determining the transmitted symbols is described in connection with fig. 51.
After the transmitted symbols are determined (block 4910), post-buffer processing is performed to identify the synchronization symbols and the corresponding message symbols (block 4912). An exemplary process for performing the post-processing is described in connection with fig. 52.
After the post-processing identifies the transmitted message (block 4912), message verification is performed to confirm the validity of the message (block 4914). An exemplary process for performing message verification is described in connection with fig. 55.
After the message is verified (block 4914), the message is converted from symbols to bits using the active symbol table (block 4916). Control then returns to block 4902 to process the next set of samples.
Fig. 50 illustrates an exemplary process for superimposing audio signal samples to emphasize the encoded code signal, thereby implementing the superimposer processing 4904 of fig. 49. The exemplary process may be performed by the superimposer 4804 and the superimposer controller 4806 of fig. 48. The example process begins by determining whether the superimposer 4804 is enabled (block 5002). When the superimposer 4804 is disabled, no superimposition occurs, the process of fig. 50 ends, and control returns to block 4906 of fig. 49 to process the non-superimposed audio signal samples.
When the superimposer 4804 is enabled, the newly received audio signal samples are pushed into a buffer and the oldest samples are pushed out (block 5004). The buffer stores a plurality of message-length blocks of samples. For example, when a particular message is repeatedly encoded every 2 seconds in an audio signal and the encoded audio is sampled at 8 kHz, each message repeats every 16,000 samples, so the buffer stores a multiple of 16,000 samples (e.g., a 96,000-sample buffer stores six messages). The superimposer 4804 then divides the buffer into approximately equal blocks of samples (block 5006). Next, the corresponding samples of those blocks are summed (block 5008). For example, sample 1 is added to samples 16,001, 32,001, 48,001, 64,001, and 80,001; sample 2 is added to samples 16,002, 32,002, 48,002, 64,002, and 80,002; and sample 16,000 is added to samples 32,000, 48,000, 64,000, 80,000, and 96,000.
After the audio signal samples in the buffer have been summed, the resulting sequence is divided by the number of selected blocks (e.g., six) to calculate an average sequence of samples (e.g., 16,000 averaged samples) (block 5010). The resulting average sequence of samples is output by the superimposer 4804 (block 5012). The process of fig. 50 then ends, and control returns to block 4906 of fig. 49.
Fig. 51 illustrates an exemplary process of implementing the symbol determination process 4910 after converting a received audio signal to the frequency domain. The exemplary process of fig. 51 may be performed by the decoder 3816 of fig. 38 and 48. The exemplary process of fig. 51 begins by normalizing the code frequencies in the various critical bands (block 5102). For example, the code frequency may be normalized such that the frequency with the largest amplitude is set to 1 and all other frequencies in the critical band are adjusted accordingly. In the exemplary decoder 3816 of fig. 48, normalization is performed by the critical band normalizer 4810.
After normalizing the frequencies of interest (block 5102), the example symbol scorer 4812 selects an appropriate symbol table based on the previously determined synchronization (block 5104). For example, the system may include two symbol tables: one for general synchronization and one for legacy code mark synchronization. Alternatively, the system may include a single symbol table, or may include multiple symbol tables that are identified by a synchronization symbol (e.g., a cross-table synchronization symbol). The symbol scorer 4812 then calculates the symbol score for each symbol in the selected symbol table (block 5106). For example, the symbol scorer 4812 may iterate over each symbol in the symbol table and sum the normalized scores for each frequency of interest of that symbol to calculate its symbol score.
After each symbol is scored (block 5106), the example maximum score selector 4814 selects the symbol with the greatest score (block 5108). Next, the example comparator 4816 determines whether the score of the selected symbol exceeds a maximum score threshold (block 5110). When the score does not exceed the threshold, an error indication is stored in the circular buffers (e.g., the circular buffer 4818 and the legacy code mark circular buffer 4820) (block 5112). The process of fig. 51 is then complete, and control returns to block 4912 of fig. 49.
When the score exceeds the maximum score threshold (block 5110), the identified symbol is stored in the circular buffers (e.g., the circular buffer 4818 and the legacy code mark circular buffer 4820) (block 5114). The process of fig. 51 is then complete, and control returns to block 4912 of fig. 49.
Fig. 52 illustrates an exemplary process for implementing the post-buffer processing 4912 of fig. 49. The exemplary process of fig. 52 begins when the message identifier 4822 of fig. 48 searches the circular buffer 4818 and the legacy code mark circular buffer 4820 for a synchronization indication (block 5202).
For example, fig. 53 illustrates an exemplary implementation of the circular buffer 4818, and fig. 54 illustrates an exemplary implementation of the legacy code mark circular buffer 4820. In the illustrated example of fig. 53, the last position in the circular buffer to have been filled is position three, as indicated by the arrow. Accordingly, the exemplary sample index indicates the position in the audio signal samples that produced the symbol stored at position three. Reading the row corresponding to sliding index 37 as a circular buffer, the consecutively identified symbols are 128, 57, 22, 111, 37, 23, 47, and 0. Because 128 in the illustrated example is the synchronization symbol, the message may be identified as the symbols following the synchronization symbol. The message identifier 4822 waits until the seven symbols following the synchronization symbol, identified at sliding index 39, have been located.
The legacy code mark circular buffer 4820 of fig. 54 includes two locations in each circular buffer, because the legacy code mark message of the illustrated example includes one legacy code mark synchronization symbol (e.g., symbol 254) followed by a single message symbol. According to the illustrated example of fig. 39, the legacy code mark data block 3930 is embedded in the two long blocks immediately following the long block 3928 of the 7-bit timestamp. Thus, because there are two long blocks of legacy code mark data, and each long block of the illustrated example is 1,536 samples at a sampling rate of 8 kHz, the legacy code mark data symbol will be identified in the legacy code mark circular buffer 3,072 samples after the original message. In the illustrated example of fig. 54, the sliding index corresponds to sample index 38,744, which is 3,072 samples later than sliding index 37 (sample index 35,672) of fig. 53. Thus, it may be determined that the legacy code mark data symbol 68 corresponds to the message at sliding index 37 of fig. 53, indicating that the message at sliding index 37 of fig. 53 identifies the originally encoded message (e.g., identifies the original broadcaster of the audio) while the legacy code mark message identifies the rebroadcast of the audio.
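The following Python sketch illustrates the association just described, pairing a legacy code mark symbol with the message it qualifies by checking for the expected 3,072-sample separation; the tolerance value is an assumption for the sketch.

```python
LEGACY_OFFSET = 2 * 1536    # two 1,536-sample long blocks = 3,072 samples
TOLERANCE = 64              # assumed slack in samples for the comparison

def pair_legacy_marks(messages, legacy_marks):
    """messages/legacy_marks: lists of (sample_index, payload). Returns the
    (message, legacy symbol) pairs whose separation matches the offset."""
    pairs = []
    for m_idx, message in messages:
        for l_idx, legacy in legacy_marks:
            if abs((l_idx - m_idx) - LEGACY_OFFSET) <= TOLERANCE:
                pairs.append((message, legacy))
    return pairs

msgs = [(35_672, [57, 22, 111, 37, 23, 47, 0])]   # from fig. 53
marks = [(38_744, 68)]                             # from fig. 54
print(pair_legacy_marks(msgs, marks))   # message paired with legacy symbol 68
```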
Returning to fig. 52, upon detection of a synchronization symbol or a legacy code mark synchronization symbol, the messages in the circular buffer 4818 or the legacy code mark circular buffer 4820 are condensed to eliminate redundancy. For example, as shown in fig. 53, the same message is identified in the audio data over a period of time (sliding indices 37 to 39 contain the same message) because of the sliding time-to-frequency domain conversion and the duration over which each message is encoded. Identical messages at consecutive sliding indices may be reduced to a single message because they represent only one encoded message. Alternatively, the condensing may be omitted and every message output, if desired. Next, the message identifier 4822 stores the condensed message in the filter stack associated with the verifier 4824 (block 5206). The process of fig. 52 then ends, and control returns to block 4914 of fig. 49.
Fig. 55 illustrates an exemplary process for implementing the message verification process 4914 of fig. 49. The exemplary process of fig. 55 may be performed by the verifier 4824 of fig. 48. The example process of fig. 55 begins when the verifier 4824 reads the top-of-stack message in the filter stack (block 5502).
For example, fig. 56 illustrates an exemplary implementation of the filter stack. The exemplary filter stack includes message indices, seven symbol positions for each message index, a sample index identification, and a verification flag for each message index. Each new message is added at message index M7, and the message at position M0 is the top-of-stack message read at block 5502 of fig. 55. Because of sampling rate variations and variations in the message boundaries within the message identification, when a message repeats every 16,000 samples, the messages are expected to be separated by approximately a multiple of 16,000 samples.
Returning to fig. 55, upon selection of the top-of-stack message in the filter stack (block 5502), the verifier 4824 determines whether the verification flag indicates that the message has already been verified (block 5504). For example, fig. 56 indicates that message M0 has been verified. When the message has been previously verified, the verifier 4824 outputs the message (block 5512) and control proceeds to block 5516.
When the message has not been previously verified (block 5504), the verifier 4824 determines whether there is another suitably matching message in the filter stack (block 5506). A message may be suitably matched when it is identical to another message, when a threshold number of its symbols (e.g., four of seven) match another message, or when any other error determination indicates that two messages are similar enough to be assumed identical. According to the illustrated example, a message may be only partially verified against another message that has itself already been verified. When a suitable match is not identified, control proceeds to block 5514.
When a suitable match is identified, the verifier 4824 determines whether the duration (in samples) between the matching messages is correct (block 5508). For example, when a message repeats every 16,000 samples, it is determined whether the separation between the two matched messages is approximately a multiple of 16,000 samples. When the duration is incorrect, control proceeds to block 5514.
When the duration is correct (block 5508), the verifier 4824 verifies both messages by setting the verification flag of each (block 5510). When a message is fully verified (e.g., an exact match), the flag may indicate full verification (e.g., the verified message of fig. 56). When a message is only partially verified (e.g., only four of the seven symbols match), the message is marked as partially verified (e.g., the partially verified message of fig. 56). The verifier 4824 then outputs the top-of-stack message (block 5512), and control proceeds to block 5516.
When it is determined that there is no suitable match for the top-of-stack message (block 5506) or that the duration between matches is incorrect (block 5508), the top-of-stack message is not verified (block 5514). A message that cannot be verified is not output from the verifier 4824.
After determining not to verify a message (blocks 5506, 5508, and 5514) or after outputting the top-of-stack message (block 5512), the verifier 4824 pops the filter stack to remove the top-of-stack message (block 5516). Control then returns to block 5502 to process the next message at the top of the filter stack.
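The following Python sketch is a minimal rendering of the fig. 55 verification logic under the assumptions used above (messages repeating every 16,000 samples, a partial match requiring four agreeing symbols); the slack allowed on the repetition period is illustrative.

```python
MESSAGE_PERIOD = 16_000     # samples between message repetitions
MATCH_SYMBOLS = 4           # agreeing symbols required for a partial match
PERIOD_SLACK = 100          # assumed deviation allowed from an exact multiple

def separation_ok(idx_a, idx_b):
    """True when the gap is approximately a multiple of the message period."""
    gap = abs(idx_a - idx_b)
    multiple = round(gap / MESSAGE_PERIOD)
    return multiple >= 1 and abs(gap - multiple * MESSAGE_PERIOD) <= PERIOD_SLACK

def verify_top(stack):
    """stack: list of dicts with 'symbols', 'sample_index', 'verified';
    stack[0] is the top of stack. Returns the message if it verifies."""
    top = stack[0]
    if top["verified"]:
        return top
    for other in stack[1:]:
        agree = sum(a == b for a, b in zip(top["symbols"], other["symbols"]))
        if agree >= MATCH_SYMBOLS and separation_ok(top["sample_index"],
                                                    other["sample_index"]):
            full = agree == len(top["symbols"])
            top["verified"] = other["verified"] = "full" if full else "partial"
            return top
    return None   # unverifiable messages are not output

stack = [
    {"symbols": [57, 22, 111, 37, 23, 47, 0], "sample_index": 35_672, "verified": None},
    {"symbols": [57, 22, 111, 37, 23, 47, 0], "sample_index": 51_672, "verified": None},
]
print(verify_top(stack)["verified"])   # 'full'
```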
While example manners of implementing the example encoder 3802 and the example decoder 3816 have been illustrated and described above, one or more of the data structures, elements, processes and/or devices illustrated in the figures and described above may be combined, divided, rearranged, omitted, eliminated and/or implemented in any other manner. Further, the example encoder 3802 and the example decoder 3816 may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, the example encoder 3802 and the example decoder 3816 may be implemented by one or more circuits, programmable processors, application specific integrated circuits (ASICs), programmable logic devices (PLDs) and/or field programmable logic devices (FPLDs), etc. For example, the decoder 3816 may be implemented using software on a platform device, such as a mobile phone. If any of the appended claims is read to cover a purely software and/or firmware implementation, at least one of the prior code detector 3904, the example message generator 3910, the symbol selector 3912, the code frequency selector 3914, the synthesizer 3916, the inverse FFT 3918, the mixer 3920, the overlapped short block maker 3940, the mask evaluator 3942, the critical band pair qualifier 4402, the frequency qualifier 4404, the number generator 4406, the redundancy reducer 4408, the excess simplifier 4410, the code frequency qualifier 4412, the LUT filler 4414, the sampler 4802, the superimposer 4804, the superimposer controller 4806, the time-to-frequency domain converter 4808, the critical band normalizer 4810, the symbol scorer 4812, the maximum score selector 4814, the comparator 4816, the circular buffer 4818, the legacy code mark circular buffer 4820, the message identifier 4822, the verifier 4824 and/or the symbol-to-bit converter 4826 is hereby expressly defined to include a tangible article of manufacture, such as a memory, a DVD, a CD, etc., storing the software and/or firmware. Moreover, the example encoder 3802 and the example decoder 3816 may include data structures, elements, processes and/or devices instead of, or in addition to, those illustrated in the figures and described above, and/or may include more than one of any or all of the illustrated data structures, elements, processes and/or devices.
Although certain example apparatus, methods, and articles of manufacture have been described herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all apparatus, methods, and articles of manufacture fairly falling within the scope of the appended claims either literally or under the doctrine of equivalents.