
Digital cinematography is the process of capturing (recording) a motion picture using digital image sensors rather than through film stock. As digital technology has improved in recent years, this practice has become dominant. Since the 2000s, most movies across the world have been captured as well as distributed digitally.[1]
Many vendors have brought products to market, including traditional film camera vendors like Arri and Panavision, as well as new vendors like Red, Blackmagic, Silicon Imaging, Vision Research and companies which have traditionally focused on consumer and broadcast video equipment, like Sony, GoPro, and Panasonic.[2]
As of 2023, professional 4K digital cameras were approximately equal to 35mm film in their resolution and dynamic range capacity. Some filmmakers still prefer to use film picture formats to achieve the desired results.[3]
The basis for digital cameras is metal–oxide–semiconductor (MOS) image sensors.[4] The first practical semiconductor image sensor was the charge-coupled device (CCD),[5] based on MOS capacitor technology.[4] Following the commercialization of CCD sensors during the late 1970s to early 1980s, the entertainment industry slowly began transitioning to digital imaging and digital video over the next two decades.[6] The CCD was followed by the CMOS active-pixel sensor (CMOS sensor),[7] developed in the 1990s.[8][9]
In the late 1980s, Sony began marketing the concept of "electronic cinematography," utilizing its analog Sony HDVS professional video cameras. The effort met with very little success; however, it led to one of the earliest feature movies shot on high-definition video, Julia and Julia (1987).[10]
Rainbow (1996) was the world's first film to utilize extensive digital post-production techniques.[11] Shot entirely with Sony's first Solid State Electronic Cinematography cameras, it featured over 35 minutes of digital image processing and visual effects, and all post-production, sound effects, editing and scoring were completed digitally. The digital high-definition image was transferred to a 35mm negative via an electron beam recorder for theatrical release.
The first digitally shot and post-produced feature was Windhorse, filmed in Tibet and Nepal in 1996 on the Sony DVW-700WS Digital Betacam and the prosumer Sony DCR-VX1000. The offline editing (Avid) and the online post and color work (Roland House / da Vinci) were also all digital. The film, transferred to 35mm negative for theatrical release, won Best U.S. Feature at the Santa Barbara Film Festival in 1998.
In 1997, with the introduction of HDCAM recorders and 1920 × 1080 pixel digital professional video cameras based on CCD technology, the idea, now re-branded as "digital cinematography," began to gain traction in the market.[citation needed] Shot and released in 1998, The Last Broadcast is believed by some to be the first feature-length video shot and edited entirely on consumer-level digital equipment.[12]
In May 1999, George Lucas challenged the supremacy of the movie-making medium of film for the first time by including footage filmed with high-definition digital cameras in Star Wars: Episode I – The Phantom Menace. The digital footage blended seamlessly with the footage shot on film, and later that year Lucas announced he would film the sequels entirely on high-definition digital video. Also in 1999, digital projectors were installed in four theaters for the showing of The Phantom Menace.
In May 2000, Vidocq, directed by Pitof, began principal photography, shot entirely on the Sony HDW-F900 camera; the film was released in September the following year. According to Guinness World Records, Vidocq is the first full-length feature filmed in high-definition digital video.[13]
In June 2000, Star Wars: Episode II – Attack of the Clones began principal photography, shot entirely on the Sony HDW-F900 camera, as Lucas had previously stated. The film was released in May 2002. In May 2001, Once Upon a Time in Mexico was also shot in 24-frame-per-second high-definition digital video, a format partially developed by George Lucas, using a Sony HDW-F900 camera,[14] following Robert Rodriguez's introduction to the camera at Lucas' Skywalker Ranch facility while editing the sound for Spy Kids. A lesser-known movie, Russian Ark (2002), was also shot with the same camera and was the first tapeless digital movie, recorded on HDD instead of tape.[15][16]
In 2009, Slumdog Millionaire became the first movie shot mainly in digital to be awarded the Academy Award for Best Cinematography.[17] Avatar (2009), the highest-grossing movie in the history of cinema, was not only shot on digital cameras but also earned its box-office revenues mainly from digital rather than film projection.
Major movies[n 1] shot on digital video overtook those shot on film in 2013. Since 2016, over 90% of major films have been shot on digital video.[18] As of 2017, 92% of films were shot on digital.[19] Only 24 major films released in 2018 were shot on 35mm.[20] Since the 2000s, most movies across the world have been captured as well as distributed digitally.[21][22][23]
Today, cameras from companies like Sony, Panasonic, JVC and Canon offer a variety of choices for shooting high-definition video. At the high end of the market, there has been an emergence of cameras aimed specifically at the digital cinema market. These cameras from Sony, Vision Research, Arri, Blackmagic Design, Panavision, Grass Valley and Red offer resolution and dynamic range that exceed those of traditional video cameras, which are designed for the limited needs of broadcast television.
Digital cinematography captures motion pictures digitally in a process analogous to digital photography. While there is a clear technical distinction that separates the images captured in digital cinematography from video, the term "digital cinematography" is usually applied only in cases where digital acquisition is substituted for film acquisition, such as when shooting a feature film. The term is seldom applied when digital acquisition is substituted for video acquisition, as with live broadcast television programs.

Professional cameras include the Sony CineAlta (F) Series, Blackmagic Cinema Camera, Red One, Arriflex D-20, D-21 and Alexa, Panavision Genesis, Silicon Imaging SI-2K, Thomson Viper, Vision Research Phantom, the IMAX 3D camera based on two Vision Research Phantom cores, Weisscam HS-1 and HS-2, GS Vitec noX, and the Fusion Camera System. Independent micro-budget filmmakers have also pressed low-cost consumer and prosumer cameras into service for digital filmmaking.
Flagship smartphones like the Apple iPhone have been used to shoot movies like Unsane (shot on the iPhone 7 Plus) and Tangerine (shot on three iPhone 5S phones), and in January 2018, Unsane's director and Oscar winner Steven Soderbergh expressed an interest in filming other productions solely with iPhones going forward.[24]
Digital cinematography cameras capture digital images using image sensors, either charge-coupled device (CCD) sensors or CMOS active-pixel sensors, usually in one of two arrangements.
Single-chip cameras designed specifically for the digital cinematography market often use a single sensor (much like digital photo cameras), with dimensions similar in size to a 16 or 35 mm film frame or even (as with the Vision 65) a 65 mm film frame. An image can be projected onto a single large sensor exactly the same way it can be projected onto a film frame, so cameras with this design can be made with PL, PV and similar mounts, in order to use the wide range of existing high-end cinematography lenses available. Their large sensors also let these cameras achieve the same shallow depth of field as 35 or 65 mm motion picture film cameras, which many cinematographers consider an essential visual tool.[25]
Codecs
Professional raw video recording codecs include Blackmagic Raw, Red Raw, Arri Raw and Canon Raw.[26][27][28][29]
Unlike other video formats, which are specified in terms of vertical resolution (for example, 1080p, which is 1920×1080 pixels), digital cinema formats are usually specified in terms of horizontal resolution. As a shorthand, these resolutions are often given in "nK" notation, where n is the multiplier of 1024 such that the horizontal resolution of a corresponding full-aperture, digitized film frame is exactly 1024×n pixels. Here the "K" has a customary meaning corresponding to the binary prefix "kibi" (Ki).
For instance, a 2K image is 2048 pixels wide, and a 4K image is 4096 pixels wide. Vertical resolutions vary with aspect ratios, though: a 2K image with an HDTV (16:9) aspect ratio is 2048×1152 pixels, a 2K image with an SDTV or Academy ratio (4:3) is 2048×1536 pixels, and one with a Panavision ratio (2.39:1) is 2048×856 pixels. Because the "nK" notation does not correspond to a single horizontal resolution for every format, a 2K image that excludes, for example, the typical 35mm film soundtrack space is only 1828 pixels wide, with vertical resolutions rescaling accordingly. This has led to a plethora of motion-picture-related video resolutions, which is quite confusing and often redundant with respect to the relatively few available projection standards.
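A minimal sketch of this width-first arithmetic (the function name is illustrative), reproducing the dimensions quoted above:

```python
from fractions import Fraction

def nk_frame(n_k: int, aspect: Fraction) -> tuple[int, int]:
    """Return (width, height) for an n-K image at the given aspect ratio."""
    width = n_k * 1024               # "K" here is the binary 1024, not 1000
    height = int(width / aspect)     # truncated, matching the 2048x856 example
    return width, height

print(nk_frame(2, Fraction(16, 9)))   # (2048, 1152): 2K at HDTV 16:9
print(nk_frame(2, Fraction(4, 3)))    # (2048, 1536): 2K at Academy 4:3
print(nk_frame(2, Fraction("2.39")))  # (2048, 856):  2K at Panavision 2.39:1
```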
All formats designed for digital cinematography are progressive scan, and capture usually occurs at the same 24-frames-per-second rate established as the standard for 35mm film. Some films, such as The Hobbit: An Unexpected Journey, have a high frame rate of 48 fps, although in some theatres it was also released in a 24 fps version, which many fans of traditional film prefer.
The DCI standard for cinema usually relies on a 1.89:1 aspect ratio, thus defining the maximum container size for 4K as 4096×2160 pixels and for 2K as 2048×1080 pixels. When distributed in the form of a Digital Cinema Package (DCP), content is letterboxed or pillarboxed as appropriate to fit within one of these container formats.
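A sketch of that fitting step, assuming only the container sizes from the DCI specification (real DCPs round to standardized image sizes, such as 1998×1080 for 1.85:1 "Flat"):

```python
# DCI maximum container sizes (width, height); both are 1.89:1.
DCI_CONTAINERS = {"2K": (2048, 1080), "4K": (4096, 2160)}

def fit_in_container(aspect: float, container: str = "2K") -> tuple[int, int]:
    """Largest image of the given aspect ratio that fits the container."""
    cw, ch = DCI_CONTAINERS[container]
    if aspect >= cw / ch:
        # Wider than 1.89:1: use the full width, bars top and bottom (letterbox).
        return cw, round(cw / aspect)
    # Narrower than 1.89:1: use the full height, bars left and right (pillarbox).
    return round(ch * aspect), ch

print(fit_in_container(2.39))  # (2048, 857): "Scope" content is letterboxed
print(fit_in_container(1.85))  # (1998, 1080): "Flat" content is pillarboxed
```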

In the early years of digital cinematography, 2K was the most common format for digitally acquired major motion pictures; however, as new camera systems gained acceptance, 4K became more prominent. The Arri Alexa captured a 2.8K image. During 2009, at least two major Hollywood films, Knowing and District 9, were shot in 4K on the Red One camera, followed by The Social Network in 2010. As of 2017, 4K cameras are commonplace, with most high-end films being shot at 4K resolution.
Broadly, two workflow paradigms are used for data acquisition and storage in digital cinematography.
With video-tape-based workflow, video is recorded to tape on set. This video is then ingested into a computer running non-linear editing software, using a deck. Upon ingestion, a digital video stream from tape is converted to computer files. These files can be edited directly or converted to an intermediate format for editing. Then video is output in its final format, possibly to a film recorder for theatrical exhibition, or back to video tape for broadcast use. Original video tapes are kept as an archival medium. The files generated by the non-linear editing application contain the information necessary to retrieve footage from the proper tapes, should the footage stored on the computer's hard disk be lost. With the increasing convenience of file-based workflows, tape-based workflows have become marginal in recent years.
Digital cinematography has mostly shifted towards "tapeless" or "file-based" workflows. This trend has accelerated with the increased capacity and reduced cost of non-linear storage solutions such as hard disk drives, optical discs, and solid-state memory. With tapeless workflows, digital video is recorded as digital files onto random-access media like optical discs, hard disk drives or flash-memory-based digital "magazines". These files can be easily copied to another storage device, typically to a large RAID (array of computer disks) connected to an editing system. Once the data is copied from the on-set media to the storage array, the media are erased and returned to the set for more shooting.
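Because the on-set magazines are wiped after offload, the copy is normally verified before anything is erased. A hypothetical sketch of such a verified offload (the paths and function names are illustrative, not any vendor's tool):

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Checksum a file in 1 MiB chunks to keep memory use flat."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def offload(card: Path, raid: Path) -> None:
    """Copy every clip from the card to the RAID, verifying each copy."""
    for clip in card.rglob("*"):
        if not clip.is_file():
            continue
        dest = raid / clip.relative_to(card)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(clip, dest)
        if sha256_of(clip) != sha256_of(dest):
            raise IOError(f"Verification failed for {clip}; do not erase the card")

# Hypothetical mount points for a camera magazine and the editing RAID.
offload(Path("/Volumes/A001"), Path("/mnt/raid/day_01"))
```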
Such RAID arrays, both "managed" (for example, SANs and NASs) and "unmanaged" (for example, JBODs on a single computer workstation), are necessary due to the throughput required for real-time (320 MB/s for 2K at 24 fps) or near-real-time playback in post-production, which exceeds the throughput available from a single, albeit fast, hard disk drive. Such requirements are often termed "on-line" storage. Post-production not requiring real-time playback performance (typically for lettering, subtitling, versioning and similar visual effects) can be migrated to slightly slower RAID stores.
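One plausible derivation of that 320 MB/s figure, assuming uncompressed RGB at 16 bits per channel (other bit depths scale linearly):

```python
width, height = 2048, 1080            # 2K DCI container
channels, bytes_per_channel = 3, 2    # RGB at 16 bits per channel
fps = 24

bytes_per_frame = width * height * channels * bytes_per_channel
mb_per_second = bytes_per_frame * fps / 1e6
print(f"{mb_per_second:.0f} MB/s")    # -> 319 MB/s, i.e. roughly 320 MB/s
```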
Short-term archiving, if done at all, is accomplished by moving the digital files into "slower" RAID arrays (still of either managed or unmanaged type, but with lower performance), where playback capability is poor to non-existent (unless via proxy images), but minimal editing and metadata harvesting are still feasible. Such intermediate requirements fall into the "mid-line" storage category.
Long-term archiving is accomplished by backing up the digital files from the RAID, using standard practices and equipment for data backup from the IT industry, often to data tapes (like LTO).
Most digital cinematography systems further reduce the data rate by subsampling color information. Because the human visual system is much more sensitive to luminance than to color, lower-resolution color information can be combined with higher-resolution luma (brightness) information to create an image that looks very similar to one in which both color and luma information are sampled at full resolution. This scheme may cause pixelation or color bleeding under some circumstances. High-quality digital cinematography systems are capable of recording full-resolution color data (4:4:4) or raw sensor data.
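A minimal sketch of the idea using NumPy: the 4:2:0 pattern below averages each 2×2 chroma block, while 4:4:4 would keep all three planes at full size.

```python
import numpy as np

def subsample_420(y, cb, cr):
    """Keep luma at full resolution; average chroma over 2x2 blocks."""
    def pool(c):
        h, w = c.shape
        return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, pool(cb), pool(cr)

y = np.random.rand(1080, 2048)        # full-resolution luma plane
cb = np.random.rand(1080, 2048)       # chroma planes, initially full size
cr = np.random.rand(1080, 2048)
y2, cb2, cr2 = subsample_420(y, cb, cr)
print(cb2.shape)                      # (540, 1024): a quarter of the samples
```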
Most compression systems used for acquisition in the digital cinematography world compress footage one frame at a time, as if a video stream is a series of still images. This is called intra-frame compression. Inter-frame compression systems can further compress data by examining and eliminating redundancy between frames. This leads to higher compression ratios, but displaying a single frame will usually require the playback system to decompress a number of frames from before and after it. In normal playback this is not a problem, as each successive frame is played in order, so the preceding frames have already been decompressed. In editing, however, it is common to jump around to specific frames and to play footage backwards or at different speeds. Because of the need to decompress extra frames in these situations, inter-frame compression can cause performance problems for editing systems. Inter-frame compression is also disadvantageous because the loss of a single frame (say, due to a flaw writing data to a tape) will typically ruin all the frames until the next keyframe occurs. In the case of the HDV format, for instance, this may result in as many as 6 frames being lost with 720p recording, or 15 with 1080i.[30] An inter-frame compressed video stream consists of groups of pictures (GOPs), each of which has only one full frame, and a handful of other frames referring to this frame. If the full frame, called the I-frame, is lost due to transmission or media error, none of the P-frames or B-frames (the referenced images) can be displayed. In this case, the whole GOP is lost.
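A toy model of that failure mode (deliberately simplified: here any lost frame breaks the remainder of its GOP, and a lost I-frame drops the whole group):

```python
def decodable_frames(gop_size: int, total: int, lost: set[int]) -> list[int]:
    """Indices of frames still decodable after the given frames are lost."""
    ok = []
    for start in range(0, total, gop_size):
        if start in lost:            # I-frame gone: the entire GOP is lost
            continue
        for f in range(start, min(start + gop_size, total)):
            if f in lost:            # P/B-frame gone: the rest of the GOP breaks
                break
            ok.append(f)
    return ok

# HDV-style 15-frame GOPs: losing frame 15 (an I-frame) drops frames 15-29.
print(decodable_frames(gop_size=15, total=45, lost={15}))
```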
Discrete cosine transform (DCT) coding is the most common data compression process used in digital film recording and editing, including the JPEG image compression standard and various video coding standards such as DV, DigiBeta, HDCAM, Apple ProRes, Avid DNxHD, MPEG, Advanced Video Coding (AVC) and AVCHD. An alternative to DCT coding is JPEG 2000 discrete wavelet transform (DWT) coding, used in the Redcode and DCI XYZ video codecs as well as digital cinema distribution.[31][32]
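A hedged illustration of the DCT idea using SciPy (real codecs add quantisation tables and entropy coding on top of the transform):

```python
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.rand(8, 8) * 255        # one 8x8 block of pixel values
coeffs = dctn(block, norm="ortho")        # forward 2-D DCT-II
coeffs[np.abs(coeffs) < 10] = 0           # crude pruning of small coefficients
restored = idctn(coeffs, norm="ortho")    # inverse transform
print(np.abs(block - restored).max())     # small reconstruction error
```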
For theaters with digital projectors, digital films may be distributed digitally, either shipped to theaters on hard drives or sent via the Internet or satellite networks. Digital Cinema Initiatives, LLC, a joint venture of Disney, Fox, MGM, Paramount, Sony Pictures Entertainment, Universal and Warner Bros. Studios, has established standards for digital cinema projection. In July 2005, they released the first version of the Digital Cinema System Specification,[33] which encompasses 2K and 4K theatrical projection. They also offer compliance testing for exhibitors and equipment suppliers.
JPEG 2000, a discrete wavelet transform (DWT) based image compression standard developed by the Joint Photographic Experts Group (JPEG) between 1997 and 2000,[34] was selected as the video coding standard for digital cinema in 2004.[35]
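For contrast with the DCT sketch above, here is one level of the two-dimensional Haar transform, the simplest DWT. JPEG 2000 itself uses the more elaborate 5/3 and 9/7 wavelets plus embedded coding, so this only sketches the underlying idea:

```python
import numpy as np

def haar2d(img: np.ndarray) -> np.ndarray:
    """One level of the 2-D Haar DWT: four quarter-size sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2      # vertical averages
    d = (img[0::2, :] - img[1::2, :]) / 2      # vertical details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2         # low-low: coarse image
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return np.block([[ll, lh], [hl, hh]])

img = np.random.rand(64, 64)
print(haar2d(img).shape)  # (64, 64): same size, energy packed into LL
```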
Theater owners initially balked at installing digital projection systems because of the high cost and concern over increased technical complexity. However, new funding models, in which distributors pay a "digital print" fee to theater owners, have helped to alleviate these concerns. Digital projection also offers increased flexibility in showing trailers and pre-show advertisements, and allows theater owners to more easily move films between screens or change how many screens a film is playing on. In addition, the higher quality of digital projection provides a better experience, helping to attract consumers who can now access high-definition content at home. These factors have made digital projection an increasingly attractive prospect for theater owners, and the pace of adoption has been rapidly increasing.
Since some theaters currently do not have digital projection systems, even if a movie is shot and post-produced digitally, it must be transferred to film if a large theatrical release is planned. Typically, a film recorder will be used to print digital image data to film, to create a 35 mm internegative. After that, the duplication process is identical to that of a traditional negative from a film camera.
Unlike a digital sensor, a film frame does not have a regular grid of discrete pixels.
Determining resolution in digital acquisition seems straightforward, but it is significantly complicated by the way digital camera sensors work in the real world. This is particularly true in the case of high-end digital cinematography cameras that use a single large Bayer-pattern CMOS sensor. A Bayer-pattern sensor does not sample full RGB data at every point; instead, each pixel is biased toward red, green or blue, and a full-color image is assembled from this checkerboard of color by processing the image through a demosaicing algorithm. Generally, with a Bayer-pattern sensor, actual resolution will fall somewhere between the "native" value and half this figure, with different demosaicing algorithms producing different results. Additionally, most digital cameras (both Bayer and three-chip designs) employ optical low-pass filters to avoid aliasing; suboptimal antialiasing filtering can further reduce system resolution.
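A simplified sketch of that sampling and reconstruction step, using NumPy and SciPy. Real demosaicing algorithms are far more sophisticated than this bilinear version, which only shows why effective resolution lands below the photosite count:

```python
import numpy as np
from scipy.signal import convolve2d

def mosaic_rggb(rgb: np.ndarray) -> np.ndarray:
    """Sample one colour per photosite in the RGGB Bayer pattern."""
    h, w, _ = rgb.shape
    m = np.zeros((h, w))
    m[0::2, 0::2] = rgb[0::2, 0::2, 0]      # red photosites
    m[0::2, 1::2] = rgb[0::2, 1::2, 1]      # green photosites (even rows)
    m[1::2, 0::2] = rgb[1::2, 0::2, 1]      # green photosites (odd rows)
    m[1::2, 1::2] = rgb[1::2, 1::2, 2]      # blue photosites
    return m

def demosaic_bilinear(m: np.ndarray) -> np.ndarray:
    """Rebuild full RGB by averaging each plane's sampled neighbours."""
    h, w = m.shape
    out = np.zeros((h, w, 3))
    masks = [np.zeros((h, w), bool) for _ in range(3)]
    masks[0][0::2, 0::2] = True             # where red was sampled
    masks[1][0::2, 1::2] = True             # where green was sampled
    masks[1][1::2, 0::2] = True
    masks[2][1::2, 1::2] = True             # where blue was sampled
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    for c, mask in enumerate(masks):
        plane = np.where(mask, m, 0.0)
        num = convolve2d(plane, kernel, mode="same", boundary="symm")
        den = convolve2d(mask.astype(float), kernel, mode="same", boundary="symm")
        out[..., c] = num / den             # normalised neighbour average
    return out

rgb = np.random.rand(64, 64, 3)
reconstructed = demosaic_bilinear(mosaic_rggb(rgb))
print(np.abs(reconstructed - rgb).mean())   # nonzero: fine detail is lost
```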
Film has a characteristic grain structure, and different film stocks have different grain. Digitally acquired footage lacks this grain structure; instead, it exhibits electronic noise.
The process of using a digital intermediate workflow, where movies are color graded digitally instead of via traditional photochemical finishing techniques, has become common.
In order to utilize a digital intermediate workflow with film, the camera negative must first be processed and then scanned to a digital format. Some filmmakers have years of experience achieving their artistic vision using the techniques available in a traditional photochemical workflow, and prefer that finishing and editing process.
Digitally shot movies can be printed, transferred or archived on film. Large-scale digital productions are often archived on film, as it provides a safer medium for storage, benefiting insurance and storage costs.[36] As long as the negative does not completely degrade, it will always be possible to recover the images from it in the future, regardless of changes in technology, since all that is involved is simple photographic reproduction.
In contrast, even if digital data is stored on a medium that will preserve its integrity, highly specialized digital equipment will always be required to reproduce it. Changes in technology may thus render the format unreadable or expensive to recover over time. For this reason, film studios distributing digitally-originated films often make film-based separation masters of them for archival purposes.[36]
Film proponents have argued that early digital cameras lacked the reliability of film, particularly when filming sequences at high speed or in chaotic environments, due to digital cameras' technical glitches. Cinematographer Wally Pfister noted that for his shoot on the film Inception, "Out of six times that we shot on the digital format, we only had one useable piece and it did not end up in the film. Out of the six times we shot with the Photo-Sonics camera and 35mm running through it, every single shot was in the movie."[37] Michael Bay stated that when filming Transformers: Dark of the Moon, 35mm cameras had to be used when filming in slow motion and in sequences where the digital cameras were subject to strobing or electrical damage from dust.[38] Since 2015, digital has almost totally replaced film for high-speed sequences of up to 1000 frames per second.
Some film directors such as Christopher Nolan,[39] Paul Thomas Anderson[40] and Quentin Tarantino have publicly criticized digital cinema and advocated the use of film and film prints. Tarantino has suggested he may retire because he will no longer be able to have his films projected in 35mm in most American cinemas. Tarantino considers digital cinema to be simply "television in public."[41] Christopher Nolan has speculated that the film industry's adoption of digital formats has been driven purely by economic factors, as opposed to digital being a superior medium to film: "I think, truthfully, it boils down to the economic interest of manufacturers and [a production] industry that makes more money through change rather than through maintaining the status quo."[39]
Another concern with digital image capture is how to archive all the digital material. Archiving digital material is turning out to be extremely costly, and it creates issues in terms of long-term preservation. In a 2007 study, the Academy of Motion Picture Arts and Sciences found that the cost of storing 4K digital masters is "enormously higher – 1100% higher – than the cost of storing film masters." Furthermore, digital archiving faces challenges due to the insufficient longevity of today's digital storage: no current medium, be it magnetic hard drives or digital tape, can reliably store a film for a hundred years, something that properly stored and handled film can do.[42] This also used to be the case with optical discs, but in 2012 Millenniata, Inc., a digital storage company based in Utah, released M-DISC, an optical storage solution designed to last up to 1,000 years, offering a possible path to viable long-term digital storage.[43][44]