Digital video

From Wikipedia, the free encyclopedia
Digital electronic representation of moving visual images
This article is about digital techniques applied to video. For the standard format for storing digital video, see DV (video format). For other uses, see Digital video (disambiguation).

Sony digital video camera used for recording content

Digital video is an electronic representation of moving visual images (video) in the form of encoded digital data. This is in contrast to analog video, which represents moving visual images in the form of analog signals. Digital video comprises a series of digital images displayed in rapid succession, usually at 24, 25, 30, or 60 frames per second. Digital video has many advantages, such as easy copying, multicasting, sharing and storage.

Digital video was first introduced commercially in 1986 with the Sony D1 format, which recorded an uncompressed standard-definition component video signal in digital form. In addition to uncompressed formats, popular compressed digital video formats today include MPEG-2, H.264 and AV1. Modern interconnect standards used for playback of digital video include HDMI, DisplayPort, Digital Visual Interface (DVI) and serial digital interface (SDI).

Digital video can be copied and reproduced with no degradation in quality. In contrast, when analog sources are copied, they experience generation loss. Digital video can be stored on digital media such as Blu-ray, on computer data storage, or streamed over the Internet to end users who watch content on a personal computer or mobile device screen or a digital smart TV. Today, digital video content such as TV shows and movies also includes a digital audio soundtrack.

History


Cameras

Further information: Digital cinematography, Image sensor, and Video camera

The basis for digital video cameras is metal–oxide–semiconductor (MOS) image sensors.[1] The first practical semiconductor image sensor was the charge-coupled device (CCD), invented in 1969[2] by Willard S. Boyle, who won a Nobel Prize for his work in physics.[3] Following the commercialization of CCD sensors during the late 1970s to early 1980s, the entertainment industry slowly began transitioning from analog video to digital imaging and digital video over the next two decades.[4] The CCD was followed by the CMOS active-pixel sensor (CMOS sensor),[5] developed in the 1990s.[6][7]

Major films[a] shot on digital video overtook those shot on film in 2013. Since 2016, over 90% of major films have been shot on digital video.[8][9] As of 2017[update], 92% of films were shot digitally.[10] Only 24 major films released in 2018 were shot on 35mm.[11] Today, cameras from companies like Sony, Panasonic, JVC and Canon offer a variety of choices for shooting high-definition video. At the high end of the market, cameras aimed specifically at the digital cinema market have emerged. These cameras from Sony, Vision Research, Arri, Blackmagic Design, Panavision, Grass Valley and Red offer resolution and dynamic range that exceed those of traditional video cameras, which are designed for the limited needs of broadcast television.[12]

A Betacam SP camera, originally developed in 1986 by Sony

Coding

Further information: Video coding format § History

In the 1970s, pulse-code modulation (PCM) led to the birth of digital video coding, which demanded high bit rates of 45–140 Mbit/s for standard-definition (SD) content. By the 1980s, the discrete cosine transform (DCT) had become the standard for digital video compression.[13]

The first digital video coding standard was H.120, created by the International Telegraph and Telephone Consultative Committee (CCITT, now ITU-T) in 1984. H.120 was not practical due to weak performance.[14] It was based on differential pulse-code modulation (DPCM), a compression algorithm that was inefficient for video coding. During the late 1980s, a number of companies began experimenting with DCT, a much more efficient form of compression for video coding. The CCITT received 14 proposals for DCT-based video compression formats, in contrast to a single proposal based on vector quantization (VQ) compression. The H.261 standard was developed based on DCT compression,[15] becoming the first practical video coding standard.[14] Since H.261, DCT compression has been adopted by all the major video coding standards that followed.[15]

MPEG-1, developed by the Motion Picture Experts Group (MPEG), followed in 1991; it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262,[14] which became the standard video format for DVD and SD digital television.[14] MPEG-4 followed in 1999, and in 2003 it was succeeded by H.264/MPEG-4 AVC, which has become the most widely used video coding standard.[16]

The current-generation video coding format is HEVC (H.265), introduced in 2013. While AVC uses the integer DCT with 4×4 and 8×8 block sizes, HEVC uses integer DCT and DST transforms with varied block sizes between 4×4 and 32×32.[17] HEVC is heavily patented, with the majority of patents belonging to Samsung Electronics, GE, NTT and JVC Kenwood.[18] It is currently being challenged by the AV1 format, which aims to be freely licensed. As of 2019[update], AVC is by far the most commonly used format for the recording, compression and distribution of video content, used by 91% of video developers, followed by HEVC, which is used by 43% of developers.[19]

Production


Starting in the late 1970s to the early 1980s, video production equipment that was digital in its internal workings was introduced. This included time base correctors (TBC)[b] and digital video effects (DVE) units.[c] They operated by taking a standard analog composite video input and digitizing it internally. This made it easier to either correct or enhance the video signal, as in the case of a TBC, or to manipulate and add effects to the video, in the case of a DVE unit. The digitized and processed video information was then converted back to standard analog video for output.

Also in the late 1970s, manufacturers of professional video broadcast equipment, such as Bosch (through their Fernseh division) and Ampex, developed prototype digital videotape recorders (VTRs) in their research and development labs. Bosch's machine used a modified 1-inch type B videotape transport and recorded an early form of CCIR 601 digital video. Ampex's prototype digital video recorder used a modified 2-inch quadruplex videotape VTR (an Ampex AVR-3) fitted with custom digital video electronics and a special octaplex 8-head headwheel (regular analog 2" quad machines used only 4 heads). Like standard 2" quad, the Ampex prototype digital machine, nicknamed Annie by its developers, still recorded the audio in analog as linear tracks on the tape. None of these machines was ever marketed commercially.

Digital video was first introduced commercially in 1986 with the Sony D1 format, which recorded an uncompressed standard-definition component video signal in digital form. Component video connections required three cables, but most television facilities were wired for composite NTSC or PAL video using one cable. Due to this incompatibility and the cost of the recorder, D1 was used primarily by large television networks and other component-video-capable video studios.

A professional television studio set in Chile

In 1988, Sony and Ampex co-developed and released the D2 digital videocassette format, which, much like D1, recorded video digitally without compression in ITU-601 format. The major difference was that D2 encoded the video in composite form to the NTSC standard, thereby requiring only single-cable composite video connections to and from a D2 VCR. This made it a perfect fit for the majority of television facilities at the time. D2 was a successful format in the television broadcast industry throughout the late '80s and the '90s. D2 was also widely used in that era as the master tape format for mastering laserdiscs.[d]

D1 and D2 would eventually be replaced by cheaper systems using video compression, most notably Sony's Digital Betacam, that were introduced into networks' television studios. Other examples of digital video formats using compression were Ampex's DCT (the first to employ compression when introduced in 1992), the industry-standard DV and MiniDV and its professional variations, Sony's DVCAM and Panasonic's DVCPRO, and Betacam SX, a lower-cost variant of Digital Betacam using MPEG-2 compression.[20]

The logo of Sony, creator of the Betacam

One of the first digital video products to run on personal computers was PACo: The PICS Animation Compiler, from The Company of Science & Art in Providence, RI. It was developed starting in 1990 and first shipped in May 1991. PACo could stream unlimited-length video with synchronized sound from a single file (with the .CAV file extension) on CD-ROM. Creation required a Mac, and playback was possible on Macs, PCs, and Sun SPARCstations.[21]

QuickTime, Apple Computer's multimedia framework, was released in June 1991. Audio Video Interleave from Microsoft followed in 1992. Initial consumer-level content creation tools were crude, requiring an analog video source to be digitized to a computer-readable format. While low-quality at first, consumer digital video increased rapidly in quality, first with the introduction of playback standards such as MPEG-1 and MPEG-2 (adopted for use in television transmission and DVD media), and then with the introduction of the DV tape format, which allowed recordings to be transferred directly to digital video files using a FireWire port on an editing computer. This simplified the process, allowing non-linear editing systems (NLE) to be deployed cheaply and widely on desktop computers with no external playback or recording equipment needed.

The widespread adoption of digital video and accompanying compression formats has reduced the bandwidth needed for a high-definition video signal (with HDV and AVCHD, as well as several professional formats such as XDCAM, all using less bandwidth than a standard-definition analog signal). These savings have increased the number of channels available on cable television and direct broadcast satellite systems, created opportunities for spectrum reallocation of terrestrial television broadcast frequencies, and made tapeless camcorders based on flash memory possible, among other innovations and efficiencies.

Culture


Culturally, digital video has allowed video and film to become widely available and popular, benefiting entertainment, education, and research.[22] Digital video is increasingly common in schools, with students and teachers taking an interest in learning how to use it in relevant ways.[23] Digital video also has healthcare applications, allowing doctors to track infant heart rates and oxygen levels.[24]

In addition, the switch from analog to digital video has affected media in various ways, such as in how businesses use cameras for surveillance. Closed-circuit television (CCTV) switched to digital video recorders (DVRs), raising the issue of how to store recordings for evidence collection. Today, digital video can be compressed in order to save storage space.[25]

Digital television


Digital television (DTV) is the production and transmission of digital video from networks to consumers. This technique uses digital encoding instead of the analog signals used previously.[26] Compared to analog methods, DTV is faster and provides more capabilities and options for data to be transmitted and shared.[27]

Digital television's roots are tied to the availability of inexpensive, high-performance computers. It was not until the 1990s that digital TV became a real possibility.[28] Digital television was previously impractical due to the high bandwidth requirements of uncompressed video:[29] around 200 Mbit/s for a standard-definition television (SDTV) signal,[30][31] and over 1 Gbit/s for high-definition television (HDTV).[29][32]

Overview


Digital video comprises a series of digital images displayed in rapid succession. In the context of video, these images are called frames.[e] The rate at which frames are displayed is known as the frame rate and is measured in frames per second. Every frame is a digital image and so comprises a formation of pixels. The color of a pixel is represented by a fixed number of bits.[33] For example, 8 bits capture 256 levels per channel, and 10 bits capture 1,024 levels per channel.[34] The more bits, the more subtle variations of color can be reproduced. This is called the color depth, or bit depth, of the video.
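The relationship between bit depth and reproducible levels is a simple power of two. A minimal sketch (the function names are illustrative, not from any standard API):

```python
# Number of distinct levels per channel for a given bit depth: 2 ** bits.
def levels_per_channel(bits: int) -> int:
    return 2 ** bits

# Total colors for an RGB pixel: levels per channel raised to the 3 channels.
def total_colors(bits_per_channel: int) -> int:
    return levels_per_channel(bits_per_channel) ** 3

print(levels_per_channel(8))   # 256 levels per channel
print(levels_per_channel(10))  # 1024 levels per channel
print(total_colors(8))         # 16777216 colors ("true color")
```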

Interlacing


In interlaced video, each frame is composed of two halves of an image. The first half contains only the odd-numbered lines of a full frame. The second half contains only the even-numbered lines. These halves are referred to individually as fields. Two consecutive fields compose a full frame. If an interlaced video has a frame rate of 30 frames per second, the field rate is 60 fields per second.
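The split into odd and even line sets, and the reassembly ("weaving") of two fields into a frame, can be sketched as follows; rows are represented abstractly as list items:

```python
# Split a frame into its two fields, then weave them back together
# (a minimal sketch of interlacing, not any particular deinterlacer).
def split_fields(frame):
    odd = frame[0::2]    # odd-numbered lines (1st, 3rd, ... counting from 1)
    even = frame[1::2]   # even-numbered lines (2nd, 4th, ...)
    return odd, even

def weave(odd, even):
    frame = []
    for o, e in zip(odd, even):
        frame.extend([o, e])
    return frame

frame = ["line1", "line2", "line3", "line4"]
odd, even = split_fields(frame)
assert weave(odd, even) == frame  # two consecutive fields rebuild the frame
```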

A broadcast television camera at the Pavek Museum in Minnesota.

Bit rate and BPP


By definition, bit rate is a measure of the rate of information content of the digital video stream. In the case of uncompressed video, bit rate corresponds directly to the quality of the video because bit rate is proportional to every property that affects video quality. Bit rate is an important property when transmitting video because the transmission link must be capable of supporting that bit rate. Bit rate is also important when dealing with the storage of video because the video size is proportional to the bit rate and the duration. Video compression is used to greatly reduce the bit rate while having little effect on quality.[35]

Bits per pixel (BPP) is a measure of the efficiency of compression. A true-color video with no compression at all may have a BPP of 24 bits/pixel. Chroma subsampling can reduce the BPP to 16 or 12 bits/pixel. Applying JPEG compression on every frame can reduce the BPP to 8 or even 1 bit/pixel. Applying video compression algorithms like MPEG-1, MPEG-2 or MPEG-4 allows for fractional BPP values.
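For a compressed stream, the average BPP is simply the bit rate divided by the number of pixels delivered per second. A sketch (the helper name and the 5 Mbit/s example figure are illustrative):

```python
# Average bits per pixel: total bits per second divided by total pixels
# per second (width * height * frames per second).
def bits_per_pixel(bit_rate_bps: float, width: int, height: int, fps: float) -> float:
    return bit_rate_bps / (width * height * fps)

# Uncompressed 8-bit 4:4:4 true color is 24 BPP by construction. A 5 Mbit/s
# 1080p30 compressed stream, by contrast, averages a fractional BPP:
bpp = bits_per_pixel(5_000_000, 1920, 1080, 30)
print(round(bpp, 3))  # 0.08 bits/pixel
```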

Constant bit rate versus variable bit rate


BPP represents the average bits per pixel. There are compression algorithms that keep the BPP almost constant throughout the entire duration of the video; in this case, the video output also has a constant bit rate (CBR). CBR video is suitable for real-time, non-buffered, fixed-bandwidth video streaming (e.g., in videoconferencing). Since not all frames can be compressed to the same degree (quality suffers more in scenes of high complexity), some algorithms continuously adjust the BPP, keeping it high while compressing complex scenes and low for less demanding scenes.[36] This provides the best quality at the smallest average bit rate (and, accordingly, the smallest file size). This method produces a variable bit rate (VBR) because it tracks the variations of the BPP.
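The difference between the two strategies can be illustrated with a toy bit-allocation sketch (the complexity scores and functions are invented for illustration; real rate control is far more sophisticated):

```python
# CBR: every frame gets the same budget regardless of scene complexity.
def cbr_allocation(complexities, total_bits):
    per_frame = total_bits / len(complexities)
    return [per_frame] * len(complexities)

# VBR: bits are shifted toward complex frames, in proportion to complexity.
def vbr_allocation(complexities, total_bits):
    total_c = sum(complexities)
    return [total_bits * c / total_c for c in complexities]

complexities = [1.0, 4.0, 1.0]            # middle frame is a complex scene
print(cbr_allocation(complexities, 600))  # [200.0, 200.0, 200.0]
print(vbr_allocation(complexities, 600))  # [100.0, 400.0, 100.0]
```

Both allocations average 200 bits per frame, but the VBR one spends its budget where quality would otherwise suffer most.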

Technical overview


Standard film stocks typically record at 24 frames per second. For video, there are two frame rate standards: NTSC, at 30/1.001 (about 29.97) frames per second (about 59.94 fields per second), and PAL, at 25 frames per second (50 fields per second). Digital video cameras come in two different image capture formats: interlaced and progressive scan. Interlaced cameras record the image in alternating sets of lines: the odd-numbered lines are scanned, then the even-numbered lines, then the odd-numbered lines again, and so on.
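The NTSC rate is exactly 30000/1001 frames per second; "29.97" is a rounded value. Holding it as an exact fraction avoids drift in timing calculations, and the field rate is simply twice the frame rate:

```python
from fractions import Fraction

# NTSC frame rate, held exactly: 30/1.001 == 30000/1001 frames per second.
ntsc_frame_rate = Fraction(30000, 1001)
ntsc_field_rate = 2 * ntsc_frame_rate  # two interlaced fields per frame

print(float(ntsc_frame_rate))  # ≈ 29.97 frames per second
print(float(ntsc_field_rate))  # ≈ 59.94 fields per second
```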

One set of odd or even lines is referred to as a field, and a consecutive pairing of two fields of opposite parity is called a frame. Progressive scan cameras record all lines in each frame as a single unit. Thus, interlaced video captures scene motion twice as often as progressive video does for the same frame rate. Progressive scan generally produces a slightly sharper image; however, motion may not be as smooth as in interlaced video.

Digital video can be copied with no generation loss, which degrades quality in analog systems. However, a change in parameters like frame size, or a change of the digital format, can decrease the quality of the video due to image scaling and transcoding losses. Digital video can be manipulated and edited on non-linear editing systems.

Digital video has a significantly lower cost than 35 mm film. In comparison to the high cost of film stock, the digital media used for digital video recording, such as flash memory or hard disk drives, are very inexpensive. Digital video also allows footage to be viewed on location without the expensive and time-consuming chemical processing required by film. Network transfer of digital video makes physical delivery of tapes and film reels unnecessary.

A short video sequence in native 16K.
A diagram of 35 mm film as used in Cinemascope cameras.

Digital television (including higher-quality HDTV) was introduced in most developed countries in the early 2000s. Today, digital video is used in modern mobile phones and video conferencing systems. Digital video is used for Internet distribution of media, including streaming video and peer-to-peer movie distribution.

Many types of video compression exist for serving digital video over the Internet and on optical discs. The file sizes of digital video used for professional editing are generally not practical for these purposes, and the video requires further compression with codecs before it can be used for recreational purposes.

As of 2017[update], the highest image resolution demonstrated for digital video generation is 132.7 megapixels (15360 × 8640 pixels). The highest speeds are attained in industrial and scientific high-speed cameras that are capable of filming 1024×1024 video at up to 1 million frames per second for brief periods of recording.

Technical properties


Live digital video consumes bandwidth; recorded digital video consumes data storage. The amount of bandwidth or storage required is determined by the frame size, color depth and frame rate. Each pixel consumes a number of bits determined by the color depth. The data required to represent one frame is determined by multiplying the bits per pixel by the number of pixels in the image. The bandwidth is determined by multiplying the storage requirement for a frame by the frame rate. The overall storage requirement for a program can then be determined by multiplying bandwidth by the duration of the program.
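The chain of multiplications above can be written out directly. A sketch for uncompressed video, assuming 8-bit RGB (24 bits per pixel) with no chroma subsampling:

```python
# Bits needed to represent one uncompressed frame.
def frame_bits(width, height, bits_per_pixel):
    return width * height * bits_per_pixel

# Bandwidth: bits per frame times frames per second.
def bandwidth_bps(width, height, bits_per_pixel, fps):
    return frame_bits(width, height, bits_per_pixel) * fps

# Storage: bandwidth times duration, converted from bits to bytes.
def storage_bytes(width, height, bits_per_pixel, fps, seconds):
    return bandwidth_bps(width, height, bits_per_pixel, fps) * seconds // 8

# One hour of uncompressed 1080p30 true-color video:
print(bandwidth_bps(1920, 1080, 24, 30))      # 1492992000 bits/s (~1.5 Gbit/s)
print(storage_bytes(1920, 1080, 24, 30, 3600))  # 671846400000 bytes (~0.67 TB)
```

The roughly 1.5 Gbit/s figure makes clear why the compression discussed next is essential in practice.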

These calculations are accurate for uncompressed video, but due to the relatively high bit rate of uncompressed video, video compression is extensively used. In the case of compressed video, each frame requires only a small percentage of the original bits. This reduces the data or bandwidth consumption by a factor of 5 to 12 with lossless compression; more commonly, lossy compression is used because it reduces data consumption by factors of 20 to 200.[37][failed verification] Note that not all frames need to be compressed by the same percentage; what matters is the average factor of compression over all the frames taken together.


Storage formats


Encoding

See also: Video coding format and Video codec
  • CCIR 601 used for broadcast stations
  • VC-2 also known as Dirac Pro
  • MPEG-4 good for online distribution of large videos and video recorded to flash memory
  • MPEG-2 used for DVDs, Super-VCDs, and many broadcast television formats
  • MPEG-1 used for video CDs
  • H.261
  • H.263
  • H.264 also known as MPEG-4 Part 10, or as AVC, used for Blu-ray Discs and some broadcast television formats
  • H.265 also known as MPEG-H Part 2, or as HEVC
  • MOV used for the QuickTime framework
  • Theora used for video on Wikipedia

Tapes

Main article: Videotape
  • Betacam SX, MPEG IMX, Digital Betacam, or DigiBeta — professional video formats by Sony, based on original Betamax technology
  • D-VHS — MPEG-2 format data recorded on a tape similar to S-VHS
    An archived B-format video tape used in Danish broadcasting.
  • D1, D2, D3, D5, D7, D9 (also known as Digital-S) — various SMPTE professional digital video standards
  • D8 — DV-format data recorded on Hi8-compatible cassettes; largely a consumer format
  • DCT — first digital videotape format to use data compression
  • DV, MiniDV — used in most digital videocassette consumer camcorders; designed for high quality and easy editing; can also record high-definition data (HDV) in MPEG-2 format
  • DVCAM, DVCPRO — used in professional broadcast operations; similar to DV but generally considered more robust; though DV-compatible, these formats have better audio handling
  • DVCPRO50 and DVCPRO HD — support higher bandwidths than Panasonic's DVCPRO
  • HDCAM and HDCAM SR — introduced by Sony as high-definition alternatives to DigiBeta
  • MicroMV — MPEG-2-format data recorded on a very small, matchbook-sized cassette; obsolete
  • ProHD — name used by JVC for its MPEG-2-based professional camcorders

Discs

The Blu-ray disc, a type of optical disc used for media storage.
See also: Optical disc


Notes

  1. ^ Defined as the top 200 grossing live-action films
  2. ^ For example, the Thomson-CSF 9100 Digital Video Processor, an internally all-digital full-frame TBC introduced in 1980.
  3. ^ For example, the Ampex ADO and the Nippon Electric Corporation (NEC) E-Flex.
  4. ^ Prior to D2, most laserdiscs were mastered using analog 1" Type C videotape.
  5. ^ In fact, the still images correspond to frames only in the case of progressive scan video. In interlaced video, they correspond to fields. See § Interlacing for clarification.

References

  1. ^ Williams, J. B. (2017). The Electronics Revolution: Inventing the Future. Springer. pp. 245–8. ISBN 9783319490885.
  2. ^ James R. Janesick (2001). Scientific charge-coupled devices. SPIE Press. pp. 3–4. ISBN 978-0-8194-3698-6.
  3. ^ "2009 Nobel Prize in Physics awarded to Kao, Boyle, and Smith". Physics Today (10) 14182. 2009. Bibcode:2009PhT..2009j4182. doi:10.1063/pt.5.023739. ISSN 1945-0699.
  4. ^ Stump, David (2014). Digital Cinematography: Fundamentals, Tools, Techniques, and Workflows. CRC Press. pp. 83–5. ISBN 978-1-136-04042-9.
  5. ^ Stump, David (2014). Digital Cinematography: Fundamentals, Tools, Techniques, and Workflows. CRC Press. pp. 19–22. ISBN 978-1-136-04042-9.
  6. ^ Fossum, Eric R.; Hondongwa, D. B. (2014). "A Review of the Pinned Photodiode for CCD and CMOS Image Sensors". IEEE Journal of the Electron Devices Society. 2 (3): 33–43. Bibcode:2014IJEDS...2...33F. doi:10.1109/JEDS.2014.2306412.
  7. ^ Fossum, Eric R. (12 July 1993). "Active pixel sensors: Are CCDs dinosaurs?". In Blouke, Morley M. (ed.). Charge-Coupled Devices and Solid State Optical Sensors III. Vol. 1900. International Society for Optics and Photonics. pp. 2–14. Bibcode:1993SPIE.1900....2F. CiteSeerX 10.1.1.408.6558. doi:10.1117/12.148585. S2CID 10556755.
  8. ^ "The use of digital vs celluloid film on Hollywood movies". Stephen Follows. 2019-02-11. Retrieved 2019-10-23.
  9. ^ "Robert Rodriguez Film Once Upon a Time in Mexico This is a structural review". WriteWork. Retrieved 2013-04-22.
  10. ^ "Maybe the war between digital and film isn't a war at all". The A.V. Club. 23 August 2018. Retrieved 26 November 2019.
  11. ^ Rizov, Vadim (24 April 2019). "24 Films Shot on 35mm Released in 2018". Filmmaker Magazine. Retrieved 2019-09-14.
  12. ^ "The Heart of a Phone Camera: The CMOS Active Pixel Image Sensor". large.stanford.edu. Retrieved 2021-03-26.
  13. ^ Hanzo, Lajos (2007). Video compression and communications: from basics to H.261, H.263, H.264, MPEG2, MPEG4 for DVB and HSDPA-style adaptive turbo-transceivers. Peter J. Cherriman, Jürgen Streit, Lajos Hanzo (2nd ed.). Hoboken, NJ: IEEE Press. ISBN 978-0-470-51992-9. OCLC 181368622.
  14. ^ a b c d "The History of Video File Formats Infographic". RealNetworks. 22 April 2012. Retrieved 5 August 2019.
  15. ^ a b Ghanbari, Mohammed (2003). Standard Codecs: Image Compression to Advanced Video Coding. Institution of Engineering and Technology. pp. 1–2. ISBN 9780852967102.
  16. ^ Christ, Robert D. (2013). The ROV manual: a user guide for remotely operated vehicles. Robert L. Wernli (2nd ed.). Oxford. ISBN 978-0-08-098291-5. OCLC 861797595.
  17. ^ Thomson, Gavin; Shah, Athar (2017). "Introducing HEIF and HEVC" (PDF). Apple Inc. Retrieved 5 August 2019.
  18. ^ "HEVC Patent List" (PDF). MPEG LA. Retrieved 6 July 2019.
  19. ^ "Video Developer Report 2019" (PDF). Bitmovin. 2019. Retrieved 5 November 2019.
  20. ^ Jennings, Roger (1997). Special Edition Using Desktop Video. Que Books, Macmillan Computer Publishing. ISBN 978-0789702654.
  21. ^ "CoSA Lives: The Story of the Company Behind After Effects". Motionworks. Archived from the original on 2011-02-27. Retrieved 2009-11-16.
  22. ^ Garrett, Bradley L. (2018). "Videographic geographies: Using digital video for geographic research". Progress in Human Geography. 35 (4): 521–541. doi:10.1177/0309132510388337. ISSN 0309-1325. S2CID 131426433.
  23. ^ Bruce, David L.; Chiu, Ming Ming (2015). "Composing With New Technology: Teacher Reflections on Learning Digital Video". Journal of Teacher Education. 66 (3): 272–287. doi:10.1177/0022487115574291. ISSN 0022-4871. S2CID 145361658.
  24. ^ Wieler, Matthew E.; Murphy, Thomas G.; Blecherman, Mira; Mehta, Hiral; Bender, G. Jesse (2021-03-01). "Infant heart-rate measurement and oxygen desaturation detection with a digital video camera using imaging photoplethysmography". Journal of Perinatology. 41 (7): 1725–1731. doi:10.1038/s41372-021-00967-1. ISSN 0743-8346. PMID 33649437. S2CID 232070728.
  25. ^ Bruehs, Walter E.; Stout, Dorothy (2020). "Quantifying and Ranking Quality for Acquired Recordings on Digital Video Recorders". Journal of Forensic Sciences. 65 (4): 1155–1168. doi:10.1111/1556-4029.14307. ISSN 0022-1198. PMID 32134510. S2CID 212417006.
  26. ^ Kruger, Lennard G. (2002). Digital television: an overview. Peter F. Guerrero. New York: Novinka Books. ISBN 1-59033-502-3. OCLC 50684535.
  27. ^ Reimers, U. (1998). "Digital video broadcasting". IEEE Communications Magazine. 36 (6): 104–110. Bibcode:1998IComM..36f.104R. doi:10.1109/35.685371.
  28. ^ "The Origins and Future Prospects of Digital Television". Benton Foundation. 2008-12-23.
  29. ^ a b Barbero, M.; Hofmann, H.; Wells, N. D. (14 November 1991). "DCT source coding and current implementations for HDTV". EBU Technical Review (251). European Broadcasting Union: 22–33. Retrieved 4 November 2019.
  30. ^ "NextLevel signs cable deal - Dec. 17, 1997". money.cnn.com. Retrieved 9 August 2018.
  31. ^ "TCI faces big challenges - Aug. 15, 1996". money.cnn.com. Retrieved 9 August 2018.
  32. ^ Barbero, M.; Stroppiana, M. (October 1992). "Data compression for HDTV transmission and distribution". IEE Colloquium on Applications of Video Compression in Broadcasting: 10/1–10/5.
  33. ^ Winkelman, Roy (2018). "TechEase, What is bit depth?". Retrieved 2022-04-18.
  34. ^ Steiner, Shawn (12 December 2018). "B&H, 8-Bit, 10-Bit, What Does It All Mean for Your Videos?".
  35. ^ Acharya, Tinku (2005). JPEG2000 standard for image compression: concepts, algorithms and VLSI architectures. Ping-Sing Tsai. Hoboken, N.J.: Wiley-Interscience. ISBN 0-471-65375-6. OCLC 57585202.
  36. ^ Weise, Marcus (2013). How video works. Diana Weynand (2nd ed.). New York. ISBN 978-1-136-06982-6. OCLC 1295602475.
  37. ^ Vatolin, Dmitriy. "Lossless Video Codecs Comparison 2007". www.compression.ru. Retrieved 2022-03-29.

Retrieved from "https://en.wikipedia.org/w/index.php?title=Digital_video&oldid=1337600072"