Virtual cinematography is the set of cinematographic techniques performed in a computer graphics environment. It includes a wide variety of subjects, such as photographing real objects, often with a stereo or multi-camera setup, for the purpose of recreating them as three-dimensional objects, and algorithms for the automated creation of real and simulated camera angles. Virtual cinematography can be used to shoot scenes from otherwise impossible camera angles, create the photography of animated films, and manipulate the appearance of computer-generated effects.
An early example of a film integrating a virtual environment is the 1998 film What Dreams May Come, starring Robin Williams. The film's visual effects team used actual building blueprints to create scale wireframe models, which were then used to generate the virtual world.[1] The film went on to garner numerous nominations and awards, including the Academy Award for Best Visual Effects and the Art Directors Guild Award for Excellence in Production Design.[2] The term "virtual cinematography" emerged in 1999, when special effects artist John Gaeta and his team wanted to name the new cinematic technologies they had created.[3]
The Matrix trilogy (The Matrix, The Matrix Reloaded, and The Matrix Revolutions) used early virtual cinematography techniques to develop virtual "filming" of realistic computer-generated imagery. The result of the work by John Gaeta and his crew at ESC Entertainment was the creation of photo-realistic CGI versions of the performers, sets, and actions. Their work was based on Paul Debevec et al.'s findings on the acquisition and subsequent simulation of the reflectance field of the human face, acquired using the simplest of light stages in 2000.[4] Famous scenes that would have been impossible or exceedingly time-consuming to produce within the context of traditional cinematography include the burly brawl in The Matrix Reloaded (2003), in which Neo fights up to 100 Agent Smiths, and the beginning of the final showdown in The Matrix Revolutions (2003), in which Agent Smith's cheekbone is punched in by Neo,[5] leaving the digital look-alike unharmed.
For The Matrix trilogy, the filmmakers relied heavily on virtual cinematography to attract audiences. Bill Pope, the director of photography, used the tool in a much more subtle manner. Nonetheless, these scenes still reached a high level of realism and made it difficult for the audience to notice that it was watching a shot created entirely by visual effects artists using 3-D computer graphics tools.[6]
In Spider-Man 2 (2004), the filmmakers manipulated the cameras to make the audience feel as if they were swinging together with Spider-Man through New York City. Using motion-capture camera radar, the camera operator moved simultaneously with the displayed animation,[7] letting the audience experience Spider-Man's perspective and heightening the sense of reality. In Avengers: Infinity War (2018), the Titan sequence scenes were created using virtual cinematography. To make the scene more realistic, the producers decided to shoot the entire scene again with a different camera so that it would travel according to the movement of the Titan.[8] The filmmakers also produced a synthetic lens flare, making the flare very similar to the originally produced footage. When the classic animated film The Lion King was remade in 2019, the producers used virtual cinematography to create a realistic animation. In the final battle scene between Scar and Simba, the camera operator again moved the camera according to the movements of the characters.[9] The goal of this technology is to immerse the audience further in the scene.
In post-production, advanced technologies are used to modify, re-direct, and enhance scenes captured on set. Stereo or multi-camera setups photograph real objects in such a way that they can be recreated as 3-D objects. Motion capture equipment such as tracking dots and helmet cameras can be used on set to facilitate the retroactive collection of data in post-production.[10]
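Before any 3-D solve can happen, each tracking dot must first be located in every frame. The following Python sketch illustrates that first step using OpenCV's stock blob detector; the file name and threshold values are hypothetical placeholders, not part of any production pipeline described above.

```python
import cv2

# Locate bright, roughly circular tracking dots in one grayscale frame.
# The frame path is a hypothetical placeholder.
frame = cv2.imread("helmet_cam_frame.png", cv2.IMREAD_GRAYSCALE)

params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255         # assume markers are brighter than the skin
params.filterByArea = True
params.minArea = 20.0          # reject single-pixel noise
params.filterByCircularity = True
params.minCircularity = 0.7    # tracking dots are roughly circular
detector = cv2.SimpleBlobDetector_create(params)

# Each keypoint's per-frame (x, y) position is the raw 2-D track that a
# post-production solver can later lift into 3-D motion data.
for kp in detector.detect(frame):
    print(f"marker at ({kp.pt[0]:.1f}, {kp.pt[1]:.1f}), size {kp.size:.1f}")
```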
Machine vision technology called photogrammetry uses 3-D scanners to capture 3-D geometry. For example, the Arius 3D scanner used for the Matrix sequels was able to acquire details such as fine wrinkles and skin pores as small as 100 μm.[4]
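At the heart of photogrammetry is triangulation: recovering a 3-D point from its 2-D projections in two or more calibrated views. The sketch below shows the textbook direct linear transform (DLT) formulation in Python; the projection matrices and pixel coordinates are assumed inputs, and nothing here reflects the internals of any particular scanner.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3-D point from its projections in two calibrated views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) pixel coordinates of the same point in each view.

    Each view contributes two linear constraints on the homogeneous
    3-D point X (the direct linear transform).
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector of A with
    # the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Applied to every matched feature across every camera pair, this yields
# the point clouds from which surface geometry is rebuilt.
```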
Filmmakers have also experimented with multi-camera rigs to capture motion data without any on-set motion capture equipment. For example, a markerless, multi-camera photogrammetric capture technique based on optical flow was used to create the digital look-alikes for the Matrix films.[4]
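Dense optical flow estimates a per-pixel motion field between consecutive frames, which is what makes markerless capture possible. As an illustration of the general idea, not of the proprietary system built for the Matrix films, the sketch below uses OpenCV's Farneback implementation; the frame paths are placeholders.

```python
import cv2

# Two consecutive grayscale frames from one camera of a multi-camera rig.
# The frame paths are hypothetical placeholders.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense Farneback optical flow: a per-pixel (dx, dy) displacement field.
flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# flow[y, x] estimates how far pixel (x, y) moved between the frames;
# tracking these displacements over time recovers motion without markers.
mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
```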
More recently, Martin Scorsese's crime film The Irishman used an entirely new facial capture system developed by Industrial Light & Magic (ILM), built around a special rig consisting of two digital cameras positioned on either side of the main camera to capture motion data in real time alongside the main performances. In post-production, this data was used to digitally render computer-generated versions of the actors.[11][12]
Virtual camera rigs give cinematographers the ability to manipulate a virtual camera within a 3-D world and photograph computer-generated 3-D models. Once the virtual content has been assembled into a scene within a 3-D engine, the images can be creatively composed, relit, and re-photographed from other angles as if the action were happening for the first time. The virtual "filming" of this realistic CGI also allows for physically impossible camera movements, such as the bullet-time scenes in The Matrix.[4]
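In rendering terms, a virtual camera is simply a transform applied to the scene before it is drawn, which is why it can follow paths no physical rig could. A minimal sketch, assuming a conventional right-handed look-at convention and an abstract renderer:

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 view matrix for a virtual camera placed at `eye`
    and aimed at `target` (standard right-handed convention)."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)                          # forward axis
    s = np.cross(f, up)
    s /= np.linalg.norm(s)                          # right axis
    u = np.cross(s, f)                              # true up axis
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye               # world -> camera shift
    return view

# Re-photograph the same scene from a full orbit around the subject:
for t in np.linspace(0.0, 2.0 * np.pi, 96, endpoint=False):
    V = look_at(eye=(4 * np.cos(t), 1.5, 4 * np.sin(t)), target=(0, 0, 0))
    # ...hand V to the 3-D engine to render this frame.
```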
Virtual cinematography can also be used to build complete virtual worlds from scratch. More advanced motion controllers and tablet interfaces have made such visualization techniques possible within the budget constraints of smaller film productions.[13]
The widespread adoption of visual effects spawned a desire to produce these effects directly on set in ways that did not detract from the actors' performances.[14] Effects artists began to implement virtual cinematographic techniques on set, making computer-generated elements of a given shot visible to the actors and cinematographers responsible for capturing it.[13]
Techniques such as real-time rendering, which allows an effect to be created before a scene is filmed rather than inserted digitally afterward, use previously unrelated technologies, including video game engines, projectors, and advanced cameras, to fuse conventional cinematography with its virtual counterpart.[15][16][17]
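The control flow behind such on-set effects can be summarized in a few lines. The sketch below is purely schematic, with every function a hypothetical stand-in (none of these names come from any real engine or stage system): a tracker reports the physical camera's pose, a game engine redraws the virtual set to match, and the result is displayed where the actors and the lens can see it.

```python
import time

def read_camera_pose():
    """Stub: poll the tracked position/orientation of the physical camera."""
    return (0.0, 1.7, 4.0, 0.0)  # x, y, z, yaw -- placeholder values

def render_background(pose):
    """Stub: the game engine redraws the virtual set from this viewpoint."""
    return f"background rendered for pose {pose}"

def display_on_stage(frame):
    """Stub: push the rendered frame to the on-set display or projector."""
    print(frame)

for _ in range(3):                    # stands in for "while filming"
    pose = read_camera_pose()         # where is the real camera right now?
    frame = render_background(pose)   # keep the virtual set in lockstep
    display_on_stage(frame)           # actors and lens both see the result
    time.sleep(1 / 24)                # hold the target frame rate
```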
The first real-time motion picture effect was developed by Industrial Light & Magic (ILM) in collaboration with Epic Games, utilizing the Unreal Engine to display the classic Star Wars "light-speed" effect for the 2018 film Solo: A Star Wars Story.[15][18] The technology used for the film, dubbed "Stagecraft" by its creators, was subsequently used by ILM for various Star Wars projects, as well as for its parent company Disney's 2019 photorealistic animated remake of The Lion King.[19][20]
Because real-time effects are created before a scene is filmed rather than scanned and recreated afterward with virtual cinematographic techniques, they require minimal extra work in post-production. Shots featuring on-set virtual cinematography do not require advanced post-production methods; the effects can be achieved using traditional CGI animation.