FIELD OF TECHNOLOGY
The present invention relates generally to digital video information acquisition and transfer.
BACKGROUND OF THE INVENTION
Digital video broadcast (“DVB”) data is transmitted and received at high rates to achieve the throughput required for satisfactory viewing. Standards for DVB data transmission have been available for over ten years. Standards for satellite network based DVB have more recently been developed. A recently formulated standard, promulgated by the European Telecommunications Standards Institute (Sophia Antipolis, France), is DVB-S2 (Standard No. EN 302307). DVB-S2 is expected to be widely backward-compatible at receivers with its predecessor, DVB-S, and to support the use of Generic Stream, the DVB-S2 native stream format, HDTV, MPEG-2 TS, and H.264 (viz., MPEG-4 AVC) video codecs. DVB-S2 may support interactive Internet-based applications and services, in which data generated by the user may be sent by cable or satellite uplink; professional applications, in which data must be multiplexed and broadcast in the VHF/UHF band; content distribution; and trunking.
The standards provide inter-device compatibility and efficiencies that contribute to high throughput rates. DVB receivers, such as set-top boxes, are often designed in conformance with the standards and may require equipment and logic capabilities that also contribute to high throughput rates.
At high throughput rates, demodulation, decoding, demultiplexing and related operations often require numerous data processing modules. Manufacturing costs for systems that involve numerous data processing materials are high. Systems that require numerous data processing modules often include longer conductors that dissipate more electric energy than shorter conductors. Such systems require larger power sources and larger power conditioning components. Larger power sources and larger power conditioning components increase the cost of manufacturing. Larger and more numerous components in general reduce the versatility of high throughput rate systems, because overall system dimensions are often limited. When system dimensions are limited, larger and more numerous components can be included only at the expense of other components and their associated functionality and features.
It therefore would be desirable to provide systems for processing digital broadcast data that have favorable manufacturing costs.
It therefore would be desirable to provide systems for processing digital broadcast data that have favorable energy consumption rates.
SUMMARY OF THE INVENTION
A system and/or method for providing digital video data processing at high throughput rates, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The objects and advantages of the invention will be apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings, in which like reference characters refer to like parts throughout, and in which:
FIG. 1 shows a schematic diagram of apparatus in accordance with the principles of the invention;
FIG. 2 shows a schematic diagram of portions of the apparatus of FIG. 1;
FIG. 3 shows a schematic diagram including details of a portion of the apparatus shown in FIG. 1;
FIG. 4 shows a schematic diagram including details of another portion of the apparatus shown in FIG. 1;
FIG. 5 shows a schematic diagram of yet another portion of the apparatus shown in FIG. 1;
FIG. 6 shows a schematic diagram that includes details of yet another portion of the apparatus shown in FIG. 1;
FIG. 7 shows a schematic diagram of another portion of the apparatus shown in FIG. 1;
FIG. 8 shows a schematic diagram of yet another portion of the apparatus shown in FIG. 1;
FIG. 9 shows a schematic diagram of still another portion of the apparatus shown in FIG. 1;
FIG. 10 shows a schematic diagram of still another portion of the apparatus shown in FIG. 1; and
FIG. 11 shows a schematic diagram of another apparatus in accordance with the principles of the invention.
DETAILED DESCRIPTION OF THE INVENTION
Devices and methods for receiving, processing and formatting digital video data are provided. It will be understood that the term “programming information,” as used herein, includes video, audio and textual data. Some embodiments of the invention may include a single semiconductor chip on which is imprinted a radio frequency signal tuner module and a personal video recorder (“PVR”) module. The PVR module may be configured to receive programming information from the radio frequency signal tuner module. The PVR module may provide a user of a set-top box with digital video recorder functionality, such as “pause,” “playback” and “rewind.” The PVR module may be configured to communicate the programming information to an external digital video recorder or storage medium.
One advantage of including the radio frequency signal tuner module and the PVR on the same chip is reduced power consumption. Another advantage is reduced package size. Reduced package size may reduce bills of materials for manufacturing. Table 1 shows performance metrics of a single chip device compared to those of a two-chip device.
TABLE 1
Performance comparison between two-chip system and single chip system in accordance with the principles of the present invention. Data are illustrative and approximate.

| Performance Metric | Two-chip system: Tuner (e.g., Broadcom 4506) | Two-chip system: PVR (e.g., PVR portion of Broadcom 7405) | Totals for 2-chip system | Illustrative single chip system in accordance with the principles of the present invention (e.g., Broadcom 7335) |
| Power consumption (W) | 3 | 5.9 | 8.9 | 6.7 |
| Package Area: L (mm) × W (mm), Area (mm²) | 14 × 20, 280 | 35 × 35, 1,225 | 1,505¹ | 37.5 × 37.5, 1,406.25 |

¹ L and W are indeterminate. Area is the sum of the tuner area and the PVR area.
Table 1 shows that an illustrative single chip device of the present invention may require only about 6.7 W, as compared to about 8.9 W that is required by a two-chip system having a tuner and a PVR on separate dies. Table 1 also shows that an illustrative single chip of the present invention requires only about 1,406.25 mm² of die area, as compared to about 1,505 mm² that is required by a two-chip system having a tuner and a PVR on separate dies.
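The savings implied by Table 1 can be checked with a short calculation. The figures below are taken directly from the table and are, like the table, illustrative and approximate:

```python
# Illustrative check of the savings implied by Table 1.
two_chip_power_w = 3.0 + 5.9        # tuner die + PVR die
single_chip_power_w = 6.7
two_chip_area_mm2 = 280 + 1225      # 14 x 20 tuner + 35 x 35 PVR
single_chip_area_mm2 = 37.5 * 37.5  # 1,406.25

power_savings_pct = 100 * (two_chip_power_w - single_chip_power_w) / two_chip_power_w
area_savings_pct = 100 * (two_chip_area_mm2 - single_chip_area_mm2) / two_chip_area_mm2

print(f"power savings: {power_savings_pct:.1f}%")  # about 24.7%
print(f"area savings:  {area_savings_pct:.1f}%")   # about 6.6%
```

On these figures, the single chip saves roughly a quarter of the power and several percent of the package area relative to the two-chip system.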
The radio frequency tuner module may include a first radio frequency input channel and a second radio frequency input channel. Each of the first and second radio frequency input channels may include an integrated tuner, a demodulator, a decoder stage and a multiplexer. The radio frequency signal tuner module may include a system oscillator and a phase-locked loop (“PLL”) circuit configured to generate a clock signal based on an off-chip crystal. The phase-locked loop circuit may be configured to transmit the signal to the display interface module and to any other suitable modules on the chip. The system oscillator may be the reference clock for processes occurring on the semiconductor chip. By including the system oscillator in the tuner module, the tuner may receive a cleaner (less noisy) clock signal than if the clock signal were propagated to the tuner module from a relatively distant portion of the semiconductor chip.
Some embodiments of the invention may include a system for receiving radio frequency signals and outputting digital data for communication. The system may include (1) a data transport module that is imprinted on a semiconductor substrate; (2) a PVR module that is imprinted on the semiconductor substrate; and (3) a circular memory module for buffering data flow between the transport module and the PVR module. The circular memory module may be imprinted on the semiconductor substrate.
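A circular memory module of the kind described above can be sketched in software as a ring buffer that overwrites its oldest entries when full. This is a minimal illustrative model only; the on-chip module is hardware, and all names below are hypothetical:

```python
class CircularBuffer:
    """Minimal ring-buffer model of a circular memory module (illustrative only)."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # next write position
        self.tail = 0   # next read position
        self.count = 0

    def write(self, item):
        if self.count == self.capacity:
            # When full, overwrite the oldest data, as a circular record
            # buffer typically does.
            self.tail = (self.tail + 1) % self.capacity
            self.count -= 1
        self.buf[self.head] = item
        self.head = (self.head + 1) % self.capacity
        self.count += 1

    def read(self):
        if self.count == 0:
            return None
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.capacity
        self.count -= 1
        return item
```

For example, writing four items into a three-slot buffer silently drops the oldest item, so the first read returns the second item written.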
Some embodiments of the invention may include a system for receiving radio frequency signals and outputting digital data for communication. The system may include a first circuit that is imprinted on a semiconductor substrate and is configured to receive a data stream from a data transport module that is imprinted on the substrate; and a second circuit that is in communication with the first circuit, is imprinted on the substrate and is configured to selectively output the data stream. The second circuit may be configured to output the data stream to one of a storage module and an AV port, the AV port configured to be engaged with a display device input port.
The modules and circuits described herein may be imprinted on the semiconductor chip using 65 nanometer lithography. Illustrative processes that may be used in accordance with the principles of the invention are the 65 nanometer CMOS fabrication processes offered by United Microelectronics Corporation (Woodlands, Singapore), Taiwan Semiconductor Manufacturing Company, Ltd. (Hsin-Chu, Taiwan) and Chartered Semiconductor Manufacturing (Woodlands Industrial Park, Singapore) and others.
FIGS. 1-11 show illustrative features of the invention. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope and spirit of the present invention.
FIG. 1 shows semiconductor chip 100. Semiconductor chip 100 is configured to receive programming information signal S. Signal S may be a satellite broadcast signal. Chip 100 is configured to demodulate signal S, decode signal S, process signal S and output signal S in a wide range of formats. It will be appreciated that signal S will undergo transformations in the modules shown and described herein. For the sake of clarity, the reference letter “S” may be used to identify the signal or a portion of the signal at more than one stage of processing by a semiconductor chip.
Chip 100 may include satellite modem module 102 for receiving signal S. Satellite modem module 102 may include one or more tuner channels (not shown) for decoding signal S. Satellite modem module 102 may demodulate signal S and provide to data transport processor 104 data transport streams that are based on signal S. Data transport processor 104 may be configured to receive the data transport streams and perform operations on the data transport streams to prepare decodable streams for video, audio and graphics processing.
Chip 100 may include PVR module 106. PVR module 106 provides a user of chip 100 with the ability to manipulate the programming information in signal S. Some of the functions of PVR module 106 may be similar to those available on a digital recording device. For example, PVR module 106 may enable the user to display, pause, rewind, playback and record some or all of the programming information in signal S.
PVR module 106 is shown in FIG. 1 as a separate module from data transport processor 104. In some embodiments, PVR module 106 may be integrated into data transport processor 104.
Data transport processor 104 may feed signal S to data decoding and processing module 108. In some embodiments of the invention, signal S flows through data decoding and processing module 108 as compressed digital data or digitized baseband analog video. Data decoding and processing module 108 may include one or more suitable decoders and a video processing stage. In the video processing stage, appropriate scaling can be applied to signal S. Also, video programming resulting from the scaling may be stored in memory for later display. During video processing, any graphics or additional video can be combined just before being displayed. The processed signal S may be sent to a video encoder for display through an appropriate on-chip output port.
Data decoding and processing module 108 may include an audio processing core (not shown). The audio processing core may include a DSP subsystem (“RPTD”) and an audio input/output module (“AIO”). The RPTD is a DSP system block for decompression of MPEG, Dolby Digital, MPEG-2 AAC, MPEG-4 AAC, and Dolby Digital Plus audio services. The DSP system may also support a second digital audio path that allows simultaneous output of a digital audio service in compressed form on SPDIF. The audio processing core may feed an audio component of signal S to audio output module 110. Audio output module 110 may be configured to provide to external devices analog or digital audio output based on signal S.
Data decoding and processing module 108 may provide signal S to general I/O module 112. I/O module 112 may include interfaces for smart cards, test circuitry, BSC, analog video, component video, S-video, composite video, HDMI television, channel 3/4 television, 656 analog video, soft modem, USB, Ethernet, SATA-2 and volatile or non-volatile memory devices.
Chip 100 may include security module 114. Security module 114 may be any suitable processor that may be used to screen data transfers and/or restrict access to chip 100. In some embodiments, security module 114 may be a module available from Broadcom. Security module 114 may support multimedia applications that provide security for programming information. The applications can range from single-purpose conditional access (“CA”) for a watch-TV-only STB to multi-purpose copy protection (“CP”) for a personal video recorder (“PVR”) STB and digital rights management (“DRM”) for a multimedia gateway system. Security module 114 may include security components that are required in satellite and cable STBs and by various CA and CP standards, such as CP for CableCARD and Secure Video Processor (“SVP”). Security module 114 may support implementations of a variety of security algorithms, whether open or proprietary. Security module 114 may include a small real-time operating system (“OS”) kernel that runs on its own master processor.
Chip 100 may include memory control module 116 for controlling I/O operations of high speed memory.
Data transport processor 104 may be in communication with (or include) personal video recorder (“PVR”) interface module 106. PVR interface module 106 may process compressed streams for personal video recording. PVR interface module 106 may have a recording mode, in which transport packets associated with programming information are selected, based on one or more transport stream PIDs, for recording to a circular buffer in DRAM. PVR interface 106 may transfer the transport packets to a hard disk drive (“HDD”) (not shown) that is not on chip 100. The compressed data is optionally scrambled using a mem-to-mem security block (shown in FIG. 2).
Video elementary stream (“ES”) data contained within the selected PID is searched for the presence and location of selected start codes, such as PES packet headers, sequence start codes, picture start codes, and the first slice start codes within each picture. Sufficient data from the compressed streams following the start codes is also retained to determine the picture type (I, B, or P) and other pertinent information. All of the selected information may be written to memory in a circular buffer to facilitate additional processing by an on-chip MIPS and to record the data to the HDD.
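The start-code search described above can be sketched as a byte scan for the MPEG start-code prefix 0x00 0x00 0x01 followed by a code byte. This is a software model only; the code values are the standard MPEG-2 values, and the function name is hypothetical:

```python
# MPEG start codes are the 3-byte prefix 0x00 0x00 0x01 followed by a code byte.
START_PREFIX = b"\x00\x00\x01"

# Standard MPEG-2 code values (the set of codes selected for indexing is
# configurable on-chip; these are just the ones named in the text).
PICTURE_START = 0x00
SEQUENCE_START = 0xB3
SLICE_CODES = range(0x01, 0xB0)   # slice start codes


def find_start_codes(es: bytes):
    """Return (offset, code) pairs for every start code in an elementary stream."""
    hits = []
    i = es.find(START_PREFIX)
    while i != -1 and i + 3 < len(es):
        hits.append((i, es[i + 3]))
        i = es.find(START_PREFIX, i + 1)
    return hits
```

The offsets produced by such a scan are what an index table entry would point at, allowing playback to jump directly to sequence headers or picture starts.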
In some embodiments of the invention, PES packets can be recorded instead of data transport streams. PVR interface module 106 may have a playback mode. In PVR playback, data transport processor 104 may read linked lists of compressed audio and video from DRAM, optionally descramble them using the mem-to-mem security block, and process them for decompression and display. The PVR playback mode may have capabilities for fast and slow decoding and descrambling, and data flow management in the absence of a physical time base associated with the stream (as would normally be present in broadcast operation).
FIG. 2 shows illustrative semiconductor chip 200. Chip 200 may include some or all of the features of chip 100. Chip 200 may include satellite modem 202, which may correspond to satellite modem 102 (shown in FIG. 1). Satellite modem 202 may include dual tuner circuits 204 and 206. Tuner circuits 204 and 206 may include internal tuners 208 and 210, respectively. Tuner circuits 204 and 206 may include demodulators 212 and 214, respectively.
Satellite modem 202 may feed data transport streams to data transport processor 220, which may correspond to data transport module 104 (shown in FIG. 1). PVR module 226, which may correspond to PVR module 106 (shown in FIG. 1), may provide digital video recorder functionality to data transport processor 220. Data transport processor 220 may feed decodable data streams to data decoding and processing module 228. Data decoding and processing module 228 may have features that are similar to those of data decoding and processing module 108 (shown in FIG. 1). Data decoding and processing module 228 may include advanced video decoder 230, which decodes the streams; video and graphics display module 238, which generates scaled composited images based on the decoded streams; and dual composite NTSC/PAL video encoder (“VEC”) 236, which encodes the images for television display.
Chip 200 may include MIPS processor core 250. MIPS processor core 250 may control processes on chip 200. MIPS processor core 250 may operate at any suitable clock rate, including, for example, at 400 MHz. Core 250 may be a MIPS 4380 and may be fully MIPS32 compatible. Core 250 may include MIPS32e and MIPS16e extended instruction sets, 32KI and 64KD memory management units, a floating point unit, an 8K read-ahead cache and a 128K level 2 cache memory.
Direct memory access (“DMA”) engine 252 may control data exchange between chip 200 memory (dynamic random access memory (“DRAM”), not shown) and other devices.
Chip 200 may include bus bridge 254 for exchanging data with off-chip devices that operate under formats such as PCI 2.3, EBI (a Flash memory busing protocol), NAND FLASH, NOR FLASH, ROM, NVRAM and PCMCIA. Bus bridge 254 may also exchange data with off-chip devices that operate under dual Serial ATA-2 and other such formats.
Chip 200 may include DRAM controller 256. DRAM controller 256 may control the DRAM. DRAM controller 256 may interface with off-chip devices that operate under formats compatible with configurable 48-bit double data rate two synchronous dynamic random access memory (“DDR2”) devices.
Chip 200 may include any suitable USB interfaces, such as dual USB 2.0 interface 258, for interfacing with two-channel USB 2.0 devices, and USB 2.0 interface 260 for interfacing with USB 2.0 devices under a client/host protocol.
Secure processor 262 may provide secure boot key generation, management, and protection.
Chip 200 may include audio decoding and processing module 264. Audio decoding and processing module 264 may have features similar to those of audio output module 110 (shown in FIG. 1). Audio decoding and processing module 264 may receive audio information from data transport processor 220. The audio information may be buffered in memory after processing by data transport processor 220 and before receipt by audio decoding and processing module 264. Audio decoding and processing module 264 may include multi-format audio decoder 266 and pulse-code modulated (“PCM”) audio engine and DAC 268. Audio decoding and processing module 264 may include data and instruction memories and may be configured to parse audio and timing data from data transport processor 220. Audio decoding and processing module 264 may decompress compressed data, provide time stamp management, and process PCM data. Audio decoding and processing module 264 may include an FMM, a HIFIDAC, and audio input/output interfaces. Audio decoding and processing module 264 may capture I2S data and perform mixing and volume control of playback data. Audio decoding and processing module 264 may output data to L/R-, I2S-, SPDIF- and HDMI-formatted devices, RF modulator 237 and a HIFIDAC (not shown). Chip 200 may include an I2S output port and an I2S I/O port.
Chip 200 may include interfaces for communication of any suitable set-top box control signals from off-chip devices. The interfaces may include, e.g.: IR/UHF receiver 270, IR transmitter 272, triple UART interface 274, and general purpose I/O (“GPIO”) interface 276. Each of the interfaces may have any suitable number of channels. For example, IR/UHF receiver 270 may receive signals from a two-channel IR transmitter. UART interface 274 may receive input from a three-channel device.
Chip 200 may include gateway interface 278. Gateway interface 278 may provide communication of programming or control information over an off-chip communication network. Gateway interface 278 may include any suitable interface for communication between chip 200 and the communication network. For example, gateway interface 278 may include one or more of a soft modem, an Si305X interface, an Ethernet interface, a 10/100 interface, a BASE-T 2nd interface, an Enet interface, a MAC interface and any other suitable interfaces.
Chip 200 may provide for satellite antenna control via base station controller (“BSC”) 280. BSC 280 may receive control signals from an off-chip device. Chip 200 may provide for satellite antenna control via dual satellite antenna controllers 282 and 284. Satellite antenna controllers 282 and 284 may receive input from tuner channels 204 and 206. Satellite controllers 282 and 284 in satellite modem 202 may operate under the DiSEqC protocol.
Elements of chip 200 will now be described in more detail.
FIG. 3 shows an illustrative embodiment of tuner 300, which may correspond to one or both of tuners 208 and 210 (shown in FIG. 2). Tuner 300 may be a direct conversion tuner. The tuner may accept L-band inputs in the 250 MHz-2150 MHz range and convert them to in-phase and quadrature baseband. The tuner may take in a differential L-band signal from standard consumer grade LNB devices. In some embodiments, off-chip LNA 310 may be required to boost the signal before it comes on-chip. A programmable gain amplifier (RF PGA) (not shown) under demodulator AGC loop 312 control adjusts the signal to account for wide dynamic range. Signals required for direct conversion may be generated within the chip by integrated PLL 350 and a quadrature LO generator (not shown). Mixers 352 mix the PLL signal with the L-band input. Low pass filters 354, which may be integral to the corresponding mixers, remove the upper image produced by the mixers.
Programmable gain amplifier (“IF AGC”) 356 then adjusts baseband signal levels. Tuner 300 may include channel select filters 358. Channel select filters 358 may optimize noise performance and prevent distortion. Channel select filters 358 may be digitally programmable 5th-order low-pass Butterworth filters. The Butterworth filters may have programmable bandwidth in the 1 to 40 MHz range. Buffered I/Q outputs 360 and 362 are then sent both off-chip as probe points and to internal A/D converters in the tuner's demodulator (e.g., 212 or 214). On-chip DC canceller loop 364 may be included to correct DC offsets inherent in the direct conversion mixers, channel select filters, and output buffers.
Tuner channels 204 and 206 may include demodulators 212 and 214, respectively. Demodulators 212 and 214 may receive output from internal tuners 208 and 210, respectively. Demodulators 212 and 214 may receive output from external tuners (not shown), respectively, via paths 216 and 218.
FIG. 4 shows illustrative demodulator 400, which may correspond to one or both of demodulators 212 and 214 (shown in FIG. 2). Demodulator 400 may accept a modulated data stream from an on-chip tuner (such as 208 or 210). Demodulator 400 may deliver a demodulated and error-corrected output data stream for processing by data transport processor 220 (shown in FIG. 2). Demodulator 400 may support legacy DVB/DTV/DCII QPSK formats and DVB-S2 and 8PSK Turbo QPSK/8PSK formats with headers and pilots.
Demodulator 400 may receive real and quadrature signals 402 and 404, which may be based on signal S (shown in FIG. 2, e.g.), from a tuner such as 300 (shown in FIG. 3). Demodulator 400 may pass signals 402 and 404 through A/D converters 406 and 407. A/D converters 406 and 407 may be dual 8-bit converters. A/D converters 406 and 407 may digitize signals 402 and 404 at a programmable sample rate. The rate may be any suitable rate, including about 135 MHz. In some embodiments, the sample rate may be greater than 135 MHz. The sample rate may be chosen to provide 4× oversampling for rates up to 33 MBaud.
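The relationship between the stated sample rate and the maximum symbol rate can be checked directly:

```python
# A 135 MHz sample rate against the 33 MBaud maximum symbol rate gives
# just over 4 samples per symbol, consistent with the 4x oversampling claim.
sample_rate_hz = 135e6
max_symbol_rate_baud = 33e6

oversampling = sample_rate_hz / max_symbol_rate_baud
print(f"{oversampling:.2f}x")  # about 4.09x
```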
A/D converters 406 and 407 may pass output to carrier frequency recovery and phase tracking loops 408. Loops 408 may be high-speed, all-digital phase/frequency circuits capable of tracking out relatively large amounts of frequency offset and phase noise, such as those contributed by conventional tuners and LNBs. Loops 408 may be configured via software (not shown) to use either a decision directed phase detector or a non-decision directed phase detector optimized for low SNR operation. Loops 408 may be filtered by an integral-plus-proportional filter. Programmable integrator and linear coefficients may be provided to set the bandwidth of loops 408. Upper bits of a loop filter (not shown) output may be used to control a complex derotator (not shown). This may provide phase and frequency resolution. Loops 408 may remove residual phase and frequency offsets in the baseband signal.
Loops 408 may pass output to variable rate demodulator 410. Demodulator 410 may output real and quadrature signals to Nyquist filters 412. Nyquist filters 412 may output real and quadrature signals to phase tracking equalizer 414. For DVB-S2 and 8PSK Turbo operation, equalizer 414 may provide output to header/pilot control block 416. Block 416 may assist with acquisition and tracking of physical layer header locations, as well as extracting carrier phase information from the pilots when they are present. Block 416 may provide output, as appropriate, to LDPC/BCH decoder 418 or Turbo/RS decoder 420. For DVB, DTV and DCII operation, equalizer 414 may provide output directly to decoder 422. Decoders 418, 420 and 422 may provide output to data transport processor 220, or any other suitable elements of chip 200, via output interface 424.
Demodulator 400 may include acquisition/tracking loops and clock generation module 426. Module 426 may include an automatic gain control loop to control amplitudes of inputs 402 and 404.
Demodulator 400 may include DiSEqC 2.X interface 428 for tuning a satellite antenna based on data generated from inputs 402 and 404.
FIG. 5 shows a schematic overview of the flow of signal S in chip 200 (shown in FIG. 2) downstream from satellite modem 202. The signal S may flow as compressed digital data or digitized baseband analog video. From satellite modem 202, signal S may be received by data transport processor 220. Data transport processor 220 may generate decodable streams based on signal S. Data transport processor 220 may store the decodable streams in DRAM 500. (DRAM 500 may be controlled by a memory control module such as 116 (shown in FIG. 1).) Advanced video decoder 230 may retrieve the decodable streams from DRAM 500, decode them and store them back in DRAM 500. Video and graphics display module 238 may then operate on the decoded streams. Video display subsystem 232 can apply scaling and compose frames. 2D graphics display engine 234 can combine graphics or additional video with the signal S video. The resulting stream is then sent to one or more video encoders (“VEC”s), such as 236, for display through suitable output interfaces, such as analog DAC outputs 502 and/or HDMI interface 504.
FIG. 6 shows illustrative data transport processor 600, which may correspond to data transport processor 220 (shown in FIG. 2). Data transport module 600 may be configured to process simultaneously 255 PIDs via 255 PID channels in a number of external streams and playback streams. Data transport processor 600 may support decryption for all 255 PID channels. Data transport processor 600 may include output cluster 601. Output cluster 601 may include remultiplexing (“remux”) modules 603 and 605, PID-based MPEG/DIRECTV output module 607 and record, audio, and video interface engine (“RAVE”) module 602, which may have one or more of the features described above in connection with data transport processor 104 on chip 100 (shown in FIG. 1).
Data transport module 600 may receive serial inputs 604 and parallel inputs 606 from a satellite modem such as 202 (shown in FIG. 2). Inputs 604 and 606 may be synched by sync block 608 to PCR timebase 610. The inputs may be multiplexed by multiplexer 612, parsed by PID parser 614 and stored in input buffer 616. Data transport module 600 may support up to 128 PID channels for message or generic PES processing and storage. (The storage may include 128 or more DRAM message buffers (not shown) that are integral to chip 100 (shown in FIG. 1).) Buffer 616 may receive a timebase signal for time-stamping the parsed data.
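The PID parsing step can be modeled in software: each MPEG-2 transport packet is 188 bytes, beginning with sync byte 0x47, and its 13-bit PID spans the second and third header bytes. This is an illustrative model only; the function names are hypothetical:

```python
SYNC_BYTE = 0x47
TS_PACKET_LEN = 188


def parse_pid(packet: bytes) -> int:
    """Extract the 13-bit PID from an MPEG-2 transport packet header."""
    if len(packet) != TS_PACKET_LEN or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid transport packet")
    # PID is the low 5 bits of byte 1 concatenated with all 8 bits of byte 2.
    return ((packet[1] & 0x1F) << 8) | packet[2]


def pid_filter(packets, wanted_pids):
    """Keep only packets whose PID is in the selected set (software model of a PID parser)."""
    return [p for p in packets if parse_pid(p) in wanted_pids]
```

For example, a packet whose second and third header bytes are 0x41 0x00 carries PID 0x100; a PID parser configured for that PID would accept it and discard packets on other PIDs.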
Input buffer 616 may maintain a separate 32-bit timestamp counter for each PID parser, which can be locked to any chip 200 timebase or to a free running 27-MHz clock. Each packet that is accepted by a PID parser can be optionally stamped using this local timestamp counter. This timestamp can be used for record, playback with pacing, or PCR correction for remux. PCR correction may be necessary while outputting from remux 603 or 605, because packets can remain in the multiplexing buffers for a variable length of time. Timestamp format is programmable: 32-bit straight binary, or modulo 300 for the nine LSB, similar to the MPEG PCR. Timestamp format can be selected independent of the transport packet format. Playback pacing supports both timestamp formats. However, in some embodiments, PCR correction can only be done when the selected timestamp format is the same as the PCR format. In other words, hardware cannot convert the local timestamps to the format of the PCR within the transport packets. As the packet is being output from the data transport, the only place the timestamp value can be output with the packet is at the record channel.
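The modulo-300 timestamp format mentioned above can be modeled as a counter whose nine LSB wrap at 300 (as in the MPEG PCR extension) while the remaining upper 23 bits of the 32-bit word increment on each wrap. This is an illustrative sketch; the exact on-chip counter behavior is an assumption:

```python
def tick_mod300(ts: int) -> int:
    """Advance a 32-bit timestamp whose nine LSB count modulo 300 (MPEG PCR-like).

    The 9-bit extension field counts 0..299; when it wraps, the upper
    23-bit base field increments (also wrapping, at 2**23).
    """
    base = ts >> 9
    ext = ts & 0x1FF
    ext += 1
    if ext == 300:
        ext = 0
        base = (base + 1) & ((1 << 23) - 1)
    return (base << 9) | ext
```

One wrap of the 9-bit field thus carries into the base field, e.g. advancing a timestamp of 299 yields a base of 1 and an extension of 0 (the word 512).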
Record mode can select one of the two timestamp modes. In normal mode, the 32-bit recorded timestamp consists of a 4-bit parity and a 28-bit timestamp value. In special mode, the 32-bit recorded timestamp consists of a 2-bit user-programmable value and a 30-bit timestamp. A preset starting timestamp value can also be synchronized with the first recorded packet. In addition to recording timestamps with the data, the record channel can also attach a timestamp to each SCD entry generated.
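The two 32-bit recorded-timestamp layouts can be modeled as simple bit packing. This is illustrative only; placing the parity and user bits in the upper bits is an assumption, since the text does not specify bit positions:

```python
def pack_normal(parity4: int, ts28: int) -> int:
    """Normal mode: 4-bit parity field plus 28-bit timestamp (field placement assumed)."""
    assert 0 <= parity4 < 16 and 0 <= ts28 < (1 << 28)
    return (parity4 << 28) | ts28


def pack_special(user2: int, ts30: int) -> int:
    """Special mode: 2-bit user-programmable field plus 30-bit timestamp (placement assumed)."""
    assert 0 <= user2 < 4 and 0 <= ts30 < (1 << 30)
    return (user2 << 30) | ts30


def unpack_special(word: int):
    """Recover the 2-bit user field and 30-bit timestamp from a special-mode word."""
    return word >> 30, word & ((1 << 30) - 1)
```

Either way the full record fits in one 32-bit word, which is why a single timestamp field can be prepended to each recorded transport packet.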
During playback, the timestamps recorded with the data can be used to pace the playback data. These timestamps can also be used to perform PCR correction if playback data is to be routed out through remux 603 or 605. Playback can also extract the two user-programmable bits in the timestamp (for special timestamp mode) and present them in registers for chip 200 MIPS processor core 250 (shown in FIG. 2) to read. In some embodiments, playback pacing must use the same timestamp format and mode programming as was used during record. The record functions of time-interval packet counting and PCR out-of-range detection may be performed by MIPS processor core 250 software. The purpose of time-interval packet counting is to later navigate within the recorded stream, performing jumps in playback with respect to time. This function is best implemented using the record-generated SCD, which provides very accurate navigation data such as picture starts. The SCD also stores PCRs found in the stream, together with their corresponding local timestamps. This allows the software to more accurately determine PCR errors and to detect unmarked PCR discontinuities. More robust algorithms can be performed by MIPS processor core 250 to support this function.
Data from input buffer 616 may be multiplexed with security data from security interface 617 (which may be an MPOD) by multiplexer 618. The multiplexed data may then be passed to packet substitution DMA link list 620. Link list 620 may perform packet generation. The packets may then be stored in RS buffer 622. Packets stored in RS buffer 622 may be multiplexed by multiplexer 624 with playback (“PB”) packets (in PES, ES or any other suitable format) from PB buffer 626. The multiplexed packets may then be descrambled by descrambler 628. The multiplexed packets may then be fed to XC buffer 630. Output from XC buffer 630 may be fed to output cluster 601.
Data transport module 600 may include multiplexer 632 for combining PB buffer 626 contents with XC buffer 630 contents.
Data transport module 600 may include 512 4-byte generic filters that may be configured to process MPEG/DVB sections or DIRECTV messages.
Each channel of RAVE module 602 may be configured as a record channel for PVR or as an AV channel to interface to audio and video decoders. RAVE module 602 may support 32 or more SCDs (configured 0-8 per record channel). In some embodiments, each record channel can be configured for any suitable number of start code detectors (“SCDs”), such as one to eight. Each channel may be configured for one or more TPITs (maximum of five in the system).
The RAVE module 602 AV channels may be used to interface to the audio/video decoders via an external memory subsystem. Each record channel can be used to record transport streams for up to 255 or more PID channels. A record channel may be allocated one or more external DRAM buffers. One of the DRAM buffers may be for data. One of the DRAM buffers may be for index table entries. Each channel's index table descriptor buffer may contain entries that point to relevant locations within the data buffer. For example, an entry may point to a start code location, PTS information, or other suitable locations in the buffer. Each record channel can record any suitable number of entry types. In some embodiments, each record channel may support about four types of entries. The four may include a Start Code Detect entry type, a Transport Parser Index Table (TPIT) entry type, a seamless pause entry type and/or a PTS entry type. The start code entries may be used to build start code tables or transport field tables, which can then be used during playback to perform trick modes.
In RAVE module 602, RASP, as defined by NDS, can be supported using TPIT. Any suitable number of record channels may be configured for the TPIT function. In some embodiments, about six record channels may be configured for the TPIT function. A local timestamp may be generated at an input buffer via an internal counter using a clock that is selectable from any of three available locked timebases or a free-running system clock. The clock may be a 27-MHz clock. A local timestamp may be prepended, for example as a 32-bit field, to one or more recorded transport packets. The 32-bit timestamp format may be programmable. In one mode, the timestamp may include a 28-bit local timestamp plus a 4-bit parity, which can be used during playback to transmit the packets at a rate equivalent to that at which they were recorded. In some embodiments, the 4-bit parity may be used for PCR correction in remux modules 603 and 605. In another mode, the upper two bits of the 32-bit timestamp field may be user programmable. In that mode, the remaining bits may be the timestamp.
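The two 32-bit timestamp layouts described above can be sketched with simple bit packing. The field positions follow the description (parity in the top nibble for normal mode; two user bits at the top for special mode), but the parity rule shown here (XOR of the timestamp's nibbles) is purely an assumption for illustration.

```c
#include <stdint.h>

/* Normal mode: [31:28] 4-bit parity, [27:0] 28-bit timestamp. */
static uint32_t pack_normal(uint32_t ts28)
{
    uint32_t ts = ts28 & 0x0FFFFFFFu;
    uint32_t p = 0, t = ts;
    /* Assumed parity: XOR of the seven 4-bit nibbles of the timestamp. */
    for (int i = 0; i < 7; i++) { p ^= t & 0xFu; t >>= 4; }
    return (p << 28) | ts;
}

/* Special mode: [31:30] 2 user-programmable bits, [29:0] 30-bit timestamp. */
static uint32_t pack_special(uint32_t user2, uint32_t ts30)
{
    return ((user2 & 0x3u) << 30) | (ts30 & 0x3FFFFFFFu);
}

static uint32_t special_user_bits(uint32_t word) { return word >> 30; }
static uint32_t special_ts(uint32_t word)        { return word & 0x3FFFFFFFu; }
```

During playback in special mode, `special_user_bits` models extracting the two user-programmable bits that are presented in registers for the MIPS processor core to read.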
In some embodiments, a record channel may support index table generation. Although index table generation involves more than indexing start code table entries, the index table generation feature may be referred to herein as an SCD. An SCD may record the position of a PES packet header stream_id and an elementary stream start code within a recorded transport stream for a given PID. The SCD may operate in accordance with one or more transport modes of operation. One transport mode of operation is MPEG. Another is DIRECTV.
A data structure for data stored in the memory buffer may be a start code index table. The start code index table may be detailed in a Record Index Table Definition section of the buffer. Within each transport mode (MPEG and DIRECTV), any suitable number of index table modes may be implemented.
Four index table modes may be supported. In some embodiments, all of the modes utilize a six-word index entry. Four index entry types are supported: Start Code (SC), Presentation Time Stamp (PTS), Transport Field (TF), and Seamless Pause (SP). The SC index entry may provide offsets to start-code locations within an associated record buffer. The PTS index entry may provide PTS values that were extracted from the recorded stream. The TPIT transport field parser may store transport field index entries. For on-change conditions, an initial entry is made upon detection of a first PID. For example, if the first packet for a PID with the transport scrambling_control_change_en bit set has a scrambling_control of 10, an index table entry is stored for the transport scrambling_control_change condition, with the transport scrambling_control_change bit set and the actual scrambling_control value of 10 stored in the scram_control field.
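The six-word index entry and its four entry types can be sketched as a data structure. The real field packing is not specified here; the type tag living in the low bits of word 0 is a hypothetical convention for illustration only.

```c
#include <stdint.h>

enum index_entry_type {
    ENTRY_SC,   /* Start Code: offset of a start code in the record buffer */
    ENTRY_PTS,  /* Presentation Time Stamp extracted from the stream */
    ENTRY_TF,   /* Transport Field entry from the TPIT parser */
    ENTRY_SP    /* Seamless Pause marker */
};

/* Six 32-bit words per entry, in all index table modes. */
struct index_entry {
    uint32_t word[6];
};

/* Assumed convention: the entry type is encoded in the low two bits of
 * word 0. */
static enum index_entry_type entry_type(const struct index_entry *e)
{
    return (enum index_entry_type)(e->word[0] & 0x3u);
}
```

Software walking the index table would dispatch on `entry_type` to interpret the remaining words of each entry.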
The seamless pause feature may be used with playback. The seamless pause feature may allow live viewing of a program with the capability of pausing the program. The program initially may be viewed without going through the record/playback path. This may eliminate channel change latency that may be incurred when going through the record/playback path. When a user wishes to pause the program, a record channel may be enabled with the appropriate PID channels selected for record. Then REC_PAUSE_EN is asserted. This assertion may prevent the selected PID channel data from being sent to the audio/video decoders. The user may see this as a pause. Once REC_PAUSE_EN is set, the next packet that is recorded may have a seamless pause entry made in the record index table (if the index table is enabled). When the user wishes to resume the program, the stream may come from a playback channel instead of the live channel. The index table entry made for seamless pause may be used to determine where to start the playback.
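The pause/resume sequence above can be sketched as a small control-flow model. The state names, the structure, and the way REC_PAUSE_EN is modeled are illustrative assumptions, not the chip's register interface.

```c
#include <stdbool.h>

enum view_state { LIVE, PAUSED, PLAYBACK };

struct pause_ctl {
    enum view_state state;
    bool rec_pause_en;       /* models the REC_PAUSE_EN bit */
    long pause_index_entry;  /* seamless-pause entry made at pause time */
};

/* User presses pause: assert REC_PAUSE_EN so live data stops flowing to
 * the decoders, and remember the index entry made for the next recorded
 * packet. */
static void user_pause(struct pause_ctl *c, long next_index_entry)
{
    c->rec_pause_en = true;
    c->pause_index_entry = next_index_entry;
    c->state = PAUSED;
}

/* User resumes: switch from the live channel to a playback channel,
 * starting at the seamless-pause index entry. */
static long user_resume(struct pause_ctl *c)
{
    c->state = PLAYBACK;
    return c->pause_index_entry;
}
```

The returned index entry is what the playback channel would use to locate the resume point in the record buffer.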
FIG. 2 shows that data transport module 220, which may correspond to data transport module 600 (shown in FIG. 6), may communicate with multi-channel ports 222 and 224. Port 222 may receive a data transport stream from an off-chip source. Port 224 may provide to an off-chip processor a remultiplexed data transport stream.
Output from data transport processor 220 may be processed by data decoding and processing module 228 (see also FIG. 5). Data decoding and processing module 228 may include high definition AVC/MPEG-2/MPEG-4/VC-1 video decoder 230, video display subsystem 232, advanced 2D graphics display engine 234 and dual composite NTSC/PAL VEC with DACs 236. (See also FIG. 5.) Data decoding and processing module 228 may provide output signals in any suitable format. For example, data decoding and processing module 228 may provide HD/SD, ITU-R-656, TTX, HDMI or any other suitable format. Chip 200 may include any suitable circuits, such as those shown in FIG. 2, for providing signals in suitable formats based on the output of transport processor 220 and/or PVR module 226. Chip 200 may include RF modulator 237 for providing analog Channel 3/4 output.
FIG. 7 schematically shows illustrative features of AVD 230. Advanced video decoder (“AVD”) 230 may be a high-definition AVC/MPEG-2/VC-1/DivX/MPEG-4 P2 video decoder core. AVD 230 retrieves elementary stream video data placed into SDRAM (not shown) by data transport processor 220, decodes the video, and writes the decoded pictures back to SDRAM to be retrieved by a video feeder in video display subsystem 232. The AVD core is capable of decoding one or more encoded elementary streams. The processing of such a stream has two major components: front-end processing (the conversion of the code stream into fundamental components such as motion vectors, transform coefficients and the like) and back-end processing (actual generation and manipulation of pixels). FGT block average logic is optional. FGT block average logic may compute block averages as an assist to the downstream FGT logic. When enabled, FGT block average logic may monitor decoder pixel output and use the results of the monitoring to calculate 8×8 block averages, which are written to main SDRAM memory.
AVD 230 may decode any suitable code streams, such as: H.264/AVC main and high profile to level 4.1; VC-1 advanced profile @ level 3; VC-1 simple and main profile; MPEG-2; MPEG still-picture decode; MPEG-4 Part 2; and DivX 3.11, 4.11, 5.X, 6.X. AVD 230 may support tools added in the AVC Fidelity Range Extensions (“FRExt”) amendment, specifically the 8×8 transform and spatial prediction modes, and the adaptive quantization matrix required for High Profile support. In some embodiments, AVD 230 may include one or more of the following features: error concealment and multiple-stream support for any suitable number of low-resolution streams. For example, AVD 230 may include multiple-stream support for sixteen low-resolution streams.
AVD 230 stores images in a striped format that may optimize two-dimensional transfers. The images are stored in 4:2:0 format, with luminance separate from chrominance. In some embodiments of the invention, picture buffer management is under software control. AVD 230 may include outer-loop RISC processor 702. Processor 702 may pass information about each display frame to an external video feeder (not shown; outside AVD 230), which can pick it up out of memory. The optional FGT block average logic writes 8×8 block averages for a frame, and 4×8 sums for field 0 in interlaced mode. Each 8×8 average is 8 bits and is stored in Y0-Y1-Y2-Y3-Cb-Cr order, in MB raster order. The averages are written starting at the software-programmed base address and are written linearly without any holes. The 4×8 sums are 16 bits each and are also written out in Y0-Y1-Y2-Y3-Cb-Cr order. The sums may use two times as much space as the averages.
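The 8×8 block averaging performed by the FGT block average logic amounts to summing 64 samples and dividing. A minimal software sketch follows; the linear width-stride plane layout and the rounding choice are assumptions for illustration, not the hardware's exact arithmetic.

```c
#include <stddef.h>
#include <stdint.h>

/* Average one 8x8 block of an 8-bit sample plane. (x0, y0) is the
 * top-left corner of the block; stride is the plane's row pitch in
 * bytes. */
static uint8_t block8x8_average(const uint8_t *plane, size_t stride,
                                size_t x0, size_t y0)
{
    unsigned sum = 0;
    for (size_t y = 0; y < 8; y++)
        for (size_t x = 0; x < 8; x++)
            sum += plane[(y0 + y) * stride + (x0 + x)];
    return (uint8_t)((sum + 32) / 64);  /* rounded mean of 64 samples */
}
```

The hardware writes such averages in Y0-Y1-Y2-Y3-Cb-Cr order, in MB raster order, starting at the programmed base address.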
Coded data is presented to AVD 230 as a linked list of packet entries, each entry corresponding to a network abstraction layer (“NAL”) unit. Multiple streams are handled by multiple instances of linked lists. As NAL units accumulate in memory, outer-loop RISC processor 702 examines them and passes them to entropy decoder 704. Decoder 704 reads header information.
If a stream is CABAC-encoded, outer-loop RISC 702 then sets up a CABAC-to-BIN decoder to generate a BIN representation. For CAVLC-encoded streams, this operation may not be necessary. Once outer-loop RISC 702 determines that it has enough data to start decoding, it passes a structure to inner-loop RISC 706, which then starts inner-loop processing using one pass per image slice. Inner-loop RISC 706 may direct a symbol interpreter to parse the data stream, from the BIN buffer for CABAC streams or the code buffer for CAVLC streams. The symbol interpreter converts the variable-length symbols to data values and contains blocks to convert those values to spatial prediction modes, motion vector deltas, and transform coefficients. These elements are then used for further video processing by a module that performs actual pixel reconstruction. FIG. 7 also shows a deblocker 708 and a schematic configuration of a symbol interpreter, spatial predictor, reconstructor and a motion compensation module 710. Both deblocker 708 and the aforementioned schematic configuration may receive reference picture data.
FIG. 8 shows an overview of an illustrative architecture for video and graphics display module 238. In video and graphics display module 238, video display subsystem 232 receives a signal S feed (from AVD 230). Advanced 2D graphics engine 234 may provide graphic data to be combined with programming information from signal S. The graphic data may be registered against the programming information using register bus 802 and memory bus 804. The combined data may then be output by video display subsystem 232 as analog video output 806 or DVI (Digital Visual Interface) video output 808. Video output 808 provides a decompressed decoded external video signal. Analog video output 806 may be provided to a video encoder such as VEC 236 (shown in FIG. 2) for output to a display device.
AVD 230 may pass decoded AVC/MPEG/VC-1 or analog video to video display subsystem 232. Video display subsystem 232 may perform compositing of text and graphics with video. Video display subsystem 232 may take in uncompressed video from AVD 230 or advanced 2D graphics display engine 234. Video display subsystem 232 may process the input videos based on the input and output formats and any appropriate system requirements. The input video may be scaled and converted to the output display format directly, or go through single or multiple capture and playback loops. Each capture and playback loop may involve data processing such as DNR, MAD-IT, or scaling. Video display subsystem 232 may allow a user to create a series of frame buffers that allow an unlimited number of graphics layers to be composited and blended together before being displayed. Once the graphical frame buffers are available, they can be combined with the video using a compositor. The compositor allows up to two video surfaces to be combined with a graphical surface (frame buffers). In some embodiments, the blending order of any surface may be controlled by software.
In some embodiments, graphic surface generation may be separate from the real-time display requirements of the video output. Once the graphics surface is available, it can be switched in for display. In some embodiments, all of the graphics development interacts only with the memory, not with any of the display hardware. Video display subsystem 232 may provide dual video output with independent graphics on each output.
Video display subsystem 232 is based on a video network that may include: a digital noise reduction filter to reduce MPEG artifacts, including block noise and mosquito noise; a digital contour removal function; AVC/MPEG/VC-1 feeders (that may handle the YUV4:2:0 data format); graphics feeders (that may handle the YUV4:2:2 and RGB data formats); video feeders (that may handle YUV4:2:2 data formats); video scalers (including, in some embodiments, 2D scalers using a flexible FIR algorithm); a motion adaptive deinterlacing function (which may include adaptive deinterlacing from 480i or 576i input formats to 480p, 576p, 720p, and 1080i resolutions, and 3:2/2:2 pull-down detection and adaptive 3:2 pull-down progressive frame filtering); capture blocks (which may store YUV4:2:2 data formats); one or more video compositors (for combining video and graphics); and film grain technology (“FGT”) for adding film grain to decoded video.
Advanced 2D graphics display engine 234 may include a 2D memory-to-memory compositor. The compositor may include features for scaling, BLT functions and ROP operations.
FIG. 9 shows video and graphics display module 238 in more detail. Video and graphics display module 238 receives input signal S from AVD 230. The feeder supports a number of frame buffer formats. In addition, a number of frame buffer formats commonly used by software codecs are included and registered with Microsoft as Four-Character Codes (FOURCC). Support is limited to 4:2:0 and 4:2:2 formats only; other formats (such as 4:4:4) are not supported. The AVD 230 feeder is capable of HD resolutions and can support pan-scan operations.
AVD 230 may use a linear image format. Image data may be stored in DRAM in a striped format, i.e., the image is sliced into a series of equal-sized vertical strips that are then tacked together. The height of a stripe is a programmable parameter; it must be at least as large as the ‘tallest’ image that will be stored in the buffer. It is generally made a little larger than that to achieve optimal DRAM bank alignment. Although the stripe width is programmable, the feeder supports only a 64-byte stripe width. A picture in AVD 230 format contains two separate arrays: one for luma (Y) components, and the other for chroma (Cb and Cr) components. Chroma components are stored Cb/Cr interleaved, with the same stripe width and a programmable stripe height. Packed YUV: for a 4:2:2 picture, pixels are paired together as CbYCrY quadruplets. They are organized in raster scanning order. There are a number of permutations within a quadruplet. They are represented in FOURCC as: CbYCrY (UYVY); YCbYCr (YUY2); and YCrYCb (YVY2).
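Given 64-byte strips stacked one after another in DRAM, the byte offset of a sample at coordinates (x, y) can be computed as sketched below. The formula is an assumption illustrating the striped layout described above, not the chip's documented address map.

```c
#include <stddef.h>

#define STRIPE_WIDTH 64  /* bytes; the only width the feeder supports */

/* Offset of byte (x, y) in a striped plane: find which vertical strip
 * the column falls in, skip the whole strips before it, then index
 * row-by-row inside the strip. stripe_height is the programmable
 * stripe height in rows. */
static size_t striped_offset(size_t x, size_t y, size_t stripe_height)
{
    size_t stripe     = x / STRIPE_WIDTH;
    size_t stripe_off = stripe * STRIPE_WIDTH * stripe_height;
    return stripe_off + y * STRIPE_WIDTH + (x % STRIPE_WIDTH);
}
```

Making the stripe height a little taller than the image, as the text notes, only changes `stripe_height` here; it pads each strip for DRAM bank alignment without altering the per-sample arithmetic.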
Video feeder 902, shown in FIG. 9, may support a subset of the frame buffer formats that the AVD 230 feeder supports. In packed format, for a 4:2:2 picture, pixels are paired together as CbYCrY quadruplets. They may be organized in raster scanning order. There may be a number of permutations within a quadruplet. The permutations may be represented in FOURCC as: CbYCrY (UYVY); YCbYCr (YUY2); YCrYCb (YVY2).
Graphics feeder 904, shown in FIG. 9, may support 4:4:4 or ARGB formatted graphics or video. The 4:4:4 data may require that the data be stored in one of the following selections: 32-bit formats (e.g., AYCrCb_8888; YCrCbA_8888; ARGB_8888; and RGBA_8888); 17-bit format (e.g., W_RGB_1_565); 16-bit formats (e.g., RGB_565; WRGB_1555; RGBW_5551; ARGB_4444; RGBA_4444; and AP_88); 8-bit format (e.g., A_8-P_8); or any other suitable format (such as P_4; P_2; P_1; P_0; A_4; A_2; and A_1). A horizontal scaler may be either inside the graphics feeder or just downstream from it. The scaler can handle horizontal upscaling and may have an 8-tap filter for the up-scaling function.
Video scaler 906 may support SD and HD data. In the scaler, sampling position may be maintained internally using two M mod N counters (one horizontal and one vertical). Horizontal and vertical scales may be rounded to the nearest 1/256 pixel. In addition, the sampling position can be initialized by a subpixel amount. Four modes of vertical FIR filtering and/or block averaging can be selected. Two optional horizontal halfband decimation filters can be enabled for cascaded operation in high-quality decimation. Horizontal non-linear scaling allows projection of 4:3 material onto a 16:9 screen.
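An M mod N counter of the kind described above tracks the source sampling position without floating point: each output pixel advances the source position by M/N pixels, with the fraction carried in units of 1/N. The structure and names below are illustrative assumptions.

```c
#include <stdint.h>

struct mmodn {
    uint32_t m, n;    /* scale ratio: source advances M/N per output pixel */
    uint32_t whole;   /* integer part of the source position */
    uint32_t frac;    /* fractional part, in units of 1/N */
};

/* Advance the sampling position by one output pixel, carrying whole
 * source pixels out of the accumulated fraction. */
static void mmodn_step(struct mmodn *c)
{
    c->frac += c->m;
    c->whole += c->frac / c->n;
    c->frac  %= c->n;
}
```

For example, a 3:2 downscale (three source pixels for every two output pixels) uses M = 3, N = 2; initializing `frac` to a nonzero value models the subpixel start offset mentioned above.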
Motion adaptive deinterlacer block 908 may convert an interlaced format into a progressive format. This improves the visual quality for progressive displays.
Compositor 910 arranges final construction of the outgoing video. There may be two possible video surfaces. There may be two possible graphics surfaces. Once the order of the surfaces is determined, they are blended together from the bottom up to form the final result. To facilitate blending, the surfaces are all translated into an AYUV4:4:4:4 format type. This simplifies the blending mathematics. Each compositor input can be manipulated through a matrix to allow manipulation of the individual color components. This can be used for color space conversion as well as contrast, tint, and brightness adjustments.
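Bottom-up blending of AYUV4:4:4:4 surfaces reduces to a per-component alpha blend of each layer over the running result. The 8-bit blend equation below is a common convention assumed for illustration; the compositor's exact arithmetic is not specified here.

```c
#include <stdint.h>

/* Blend one 8-bit component of a layer (top) over the accumulated
 * result (bottom), with an 8-bit alpha where 255 is fully opaque:
 * result = (top*alpha + bottom*(255 - alpha)) / 255, rounded. */
static uint8_t blend8(uint8_t top, uint8_t bottom, uint8_t alpha)
{
    return (uint8_t)((top * alpha + bottom * (255 - alpha) + 127) / 255);
}
```

Applying `blend8` to the A, Y, U and V components of each surface in order, starting from the bottom surface, models the bottom-up composition described above.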
FIG. 10 shows illustrative single-channel analog video encoder (“VEC”) 1000, which may be duplicated for additional channels. (The single channel appears as two because two formats, standard definition video and high definition video, based on a single data source are supported.) The architecture of VEC 1000 may be used in VEC 236 (shown in FIG. 2). Analog VEC 1000 may include Macrovision 7.1 and DCS support. VEC 1000 may be configured to process a high definition video stream and a standard definition video stream (that is, scaled-down content from the high definition video stream). VEC 1000 may be a single module that takes a series of video inputs from multiple sources, inserts flybacks (hblank and vblank), formats the signal into multiple valid output video standards, and additionally handles the insertion of non-video signals into the VBI region. The VEC supports a variety of analog video standards (NTSC, NTSC-J, PAL (all variations, including PAL-M/Nc) and SECAM), as well as a variety of output formats: composite, S-Video, SCART1, SCART2, and component (480i, 480p, 576i, 576p, 720p, 1080i, 1080p24, and 1080p30). The VEC uses a fixed clock architecture.
VEC 1000 may interface with one or more 10-bit video digital-to-analog converters (“DACs”). One or more of the DACs may be based on high-speed CMOS. The DACs may be configured to support SCART1 as well as component, S-Video (Y/C), and composite video (CVBS) outputs.
VEC 1000 may receive signal S via input leads 1002 and 1004, which carry video 1 and video 2 signals, respectively. Input leads 1002 and 1004 may synchronize video 1 and video 2 at input timing blocks 1006 and 1008, respectively. Multiplexers 1010 may combine video 1 and video 2 for video formatting by video formatters 1012. Video formatters 1012 feed formatted video to sample rate converters 1014. Sample rate converters 1014 may output analog video to sub-carrier modulators 1018. Sub-carrier modulators 1018 may output modulated video to output multiplexer 1020. Output multiplexer 1020 may provide analog video signals to device interfaces as shown in FIG. 2.
VEC 1000 may include sample rate converter 1022 for signals that may not require sub-carrier modulation. Sample rate converter 1022 provides output directly to output multiplexer 1020. VEC 1000 may include multiplexer 1024, digital video formatter 1026 and DVI transmitter 1028 for providing a digital output corresponding to the analog output from output multiplexer 1020.
FIG. 11 shows illustrative satellite modem 1100. Satellite modem 1100 may be present on a chip such as chip 200 (shown in FIG. 2) and may correspond to a satellite modem such as satellite modem 202. Satellite modem 1100 may include tuner channels 1104 and 1106, which may correspond to tuners 204 and 206 of satellite modem 202. Satellite tuner channels 1104 and 1106 may include outputs 1150 and 1152, respectively, for passing demodulated signals to a data transport processor such as 220 (shown in FIG. 2).
Satellite modem 1100 may include system oscillator 1120 for generating a primary timing frequency for tuner channels 1104 and 1106 and for any other processes that are executed in the same chip. PLL 1122 may receive the primary timing frequency and boost it to primary timing signal 1124. PLL 1122 may provide primary timing signal 1124 to tuner channels 1104 and 1106. PLL 1122 may provide primary timing signal 1124 to other processes on the chip at tap 1125. PLL 1122 may provide primary timing signal 1124 to PLL cluster 1126, which may generate higher frequencies with serial PLLs. The higher frequencies may be tapped at taps 1128 for use in other processes on the chip. Because primary timing signal 1124 must be distributed to the other processes, it must be transmitted across the chip via conductor 1130. The signal-to-noise ratio of primary timing signal 1124 is greatest near system oscillator 1120 and least far from system oscillator 1120. Tuner channels 1104 and 1106 may be sensitive to the signal-to-noise ratio of primary timing signal 1124. In embodiments of the invention in which system oscillator 1120 is embedded in satellite modem 1100, the noise affecting tuner channels 1104 and 1106 may be reduced. Accordingly, in a preferred embodiment of the invention, system oscillator 1120 is located within micrometers of tuner channels 1104 and 1106 in order to obtain the benefits of the invention. In addition, the system oscillator may preferably be located equidistant from each tuner channel, and most preferably between the two tuner channels 1104 and 1106.
Aspects of the invention have been described in terms of illustrative embodiments thereof. A person having ordinary skill in the art will appreciate that numerous additional embodiments, modifications, and variations may exist that remain within the scope and spirit of the appended claims. For example, one of ordinary skill in the art will appreciate that the steps illustrated in the figures may be performed in other than the recited order and that one or more steps illustrated may be optional. The methods and systems of the above-referenced embodiments may also include other additional elements, steps, computer-executable instructions, or computer-readable data structures. In this regard, other embodiments are disclosed herein as well that can be partially or wholly implemented on a computer-readable medium, for example, by storing computer-executable instructions or modules or by utilizing computer-readable data structures.
Thus, devices and methods for receiving, processing and formatting digital video data have been described. Persons skilled in the art will appreciate that the present invention can be practiced using embodiments of the invention other than those described, which are presented for purposes of illustration rather than of limitation. The present invention is limited only by the claims that follow.