The history of computer animation began as early as the 1940s and 1950s, when people began to experiment with computer graphics – most notably John Whitney. It was only by the early 1960s, when digital computers had become widely established, that new avenues for innovative computer graphics blossomed. Initially, uses were mainly for scientific, engineering and other research purposes, but artistic experimentation began to make its appearance by the mid-1960s – most notably by Dr. Thomas Calvert. By the mid-1970s, many such efforts were beginning to enter into public media. Much computer graphics at this time involved 2-D imagery, though increasingly, as computer power improved, efforts to achieve 3-D realism became the emphasis. By the late 1980s, photo-realistic 3-D was beginning to appear in cinema films, and by the mid-1990s had developed to the point where 3-D animation could be used for entire feature film production.
John Whitney Sr. (1917–1995) was an American animator, composer and inventor, widely considered to be one of the fathers of computer animation.[1] In the 1940s and 1950s, he and his brother James created a series of experimental films made with a custom-built device based on old anti-aircraft analog computers (Kerrison Predictors) connected by servomechanisms to control the motion of lights and lit objects – the first example of motion control photography. One of Whitney's best known works from this early period was the animated title sequence from Alfred Hitchcock's 1958 film Vertigo,[2] on which he collaborated with graphic designer Saul Bass. In 1960, Whitney established his company Motion Graphics Inc., which largely focused on producing titles for film and television, while continuing further experimental works. In 1968, his pioneering motion control model photography was used on Stanley Kubrick's film 2001: A Space Odyssey, and also for the slit-scan photography technique used in the film's "Star Gate" finale.
One of the first programmable digital computers was SEAC (the Standards Eastern Automatic Computer), which entered service in 1950 at the National Bureau of Standards (NBS) in Maryland, USA.[3][4] In 1957, computer pioneer Russell Kirsch and his team unveiled a drum scanner for SEAC, to "trace variations of intensity over the surfaces of photographs", and in so doing made the first digital image by scanning a photograph. The image, picturing Kirsch's three-month-old son, consisted of just 176×176 pixels. They used the computer to extract line drawings, count objects, recognize types of characters and display digital images on an oscilloscope screen. This breakthrough can be seen as the forerunner of all subsequent computer imaging, and recognising the importance of this first digital photograph, Life magazine in 2003 credited this image as one of the "100 Photographs That Changed the World".[5][6]
In 1960, a 49-second vector animation of a car traveling down a planned highway was created at the Swedish Royal Institute of Technology on the BESK computer. The consulting firm Nordisk ADB, a software provider for the Royal Swedish Road and Water Construction Agency, realized that it had all the coordinates needed to draw the perspective from the driver's seat of a motorway from Stockholm towards Nacka. A 35 mm camera with an extended magazine was mounted on a specially made stand in front of a specially designed digital oscilloscope with a resolution of about 1 megapixel. The camera was automatically controlled by the computer, which sent a signal to the camera each time a new image was fed to the oscilloscope, taking an image every twenty meters of the virtual path. The result was a fictional journey on the virtual highway at a speed of 110 km/h (70 mph). The short animation was broadcast on November 9, 1961, at primetime in the national television newscast Aktuellt.[7][8]
Bell Labs in Murray Hill, New Jersey, was a leading research contributor in computer graphics, computer animation and electronic music from its beginnings in the early 1960s. Initially, researchers were interested in what the computer could be made to do, but the results of the visual work produced by the computer during this period established people like Edward Zajac, Michael Noll and Ken Knowlton as pioneering computer artists.
Edward Zajac produced one of the first computer-generated films at Bell Labs in 1963, titled A Two Gyro Gravity Gradient Attitude Control System, which demonstrated that a satellite could be stabilized to always have a side facing the Earth as it orbited.[9]
Ken Knowlton developed the Beflix (Bell Flicks) animation system in 1963, which was used to produce dozens of artistic films by artists Stan VanDerBeek, Knowlton and Lillian Schwartz.[10] Instead of raw programming, Beflix worked using simple "graphic primitives" – draw a line, copy a region, fill an area, zoom an area, and the like.
In 1965, Michael Noll created computer-generated stereographic 3-D movies, including a ballet of stick figures moving on a stage.[11] Some movies also showed four-dimensional hyper-objects projected to three dimensions.[12] Around 1967, Noll used the 4-D animation technique to produce computer-animated title sequences for the commercial film short Incredible Machine (produced by Bell Labs) and the TV special The Unexplained (produced by Walt DeFaria).[13] Many projects in other fields were also undertaken at this time.
In the 1960s, William Fetter was a graphic designer for Boeing at Wichita, and was credited with coining the phrase "Computer Graphics" to describe what he was doing at Boeing at the time (though Fetter himself credited this to colleague Verne Hudson).[14][15] Fetter's work included the 1964 development of ergonomic descriptions of the human body that are both accurate and adaptable to different environments, and this resulted in the first 3-D animated wire-frame figures.[16][17] Such human figures became one of the most iconic images of the early history of computer graphics, and were often referred to as the "Boeing Man". Fetter died in 2002.
Ivan Sutherland is considered by many to be the creator of interactive computer graphics, and an internet pioneer. He worked at the Lincoln Laboratory at MIT (Massachusetts Institute of Technology) in 1962, where he developed a program called Sketchpad I, which allowed the user to interact directly with the image on the screen. This was the first graphical user interface, and is considered one of the most influential computer programs an individual has ever written.[18]
The University of Utah was a major center for computer animation in this period. The computer science faculty was founded by David Evans in 1965, and many of the basic techniques of 3-D computer graphics were developed there in the early 1970s with ARPA (Advanced Research Projects Agency) funding. Research results included Gouraud, Phong, and Blinn shading, texture mapping, hidden-surface algorithms, curved surface subdivision, real-time line-drawing and raster image display hardware, and early virtual reality work.[19] In the words of Robert Rivlin in his 1986 book The Algorithmic Image: Graphic Visions of the Computer Age, "almost every influential person in the modern computer-graphics community either passed through the University of Utah or came into contact with it in some way".[20]

In the mid-1960s, one of the most difficult problems in computer graphics was the "hidden-line" problem – how to render a 3D model while properly removing the lines that should not be visible to the observer.[21] One of the first successful approaches to this was published at the 1967 Fall Joint Computer Conference by Chris Wylie, David Evans, and Gordon Romney, and demonstrated shaded 3D objects such as cubes and tetrahedra.[22] An improved version of this algorithm was demonstrated in 1968, including shaded renderings of 3D text, spheres, and buildings.[23]
A shaded 3D computer animation of a colored Soma cube exploding into pieces was created at the University of Utah as part of Gordon Romney's 1969 PhD dissertation, along with shaded renderings of 3D text, 3D graphs, trucks, ships, and buildings.[24] This dissertation also coined the term "rendering" in reference to computer drawings of 3D objects. Another 3D shading algorithm was implemented by John Warnock for his 1969 dissertation.[25]
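The hidden-surface problem these early algorithms tackled was eventually solved in most systems by a depth (z) buffer, a later approach than the scan-line methods described above. A minimal sketch of the idea, with purely illustrative buffers and values:

```python
def paint(depth_buffer, color_buffer, x, y, z, color):
    """Write a fragment only if it is nearer than what is already stored -
    the depth-buffer answer to the hidden-surface problem (a later, widely
    adopted technique, not the 1967-69 scan-line algorithms themselves)."""
    if z < depth_buffer[y][x]:       # smaller z = closer to the viewer
        depth_buffer[y][x] = z
        color_buffer[y][x] = color

W = H = 4
depth = [[float("inf")] * W for _ in range(H)]  # start infinitely far away
color = [[None] * W for _ in range(H)]

paint(depth, color, 1, 1, 5.0, "far cube face")
paint(depth, color, 1, 1, 2.0, "near cube face")  # overwrites: nearer
paint(depth, color, 1, 1, 9.0, "hidden face")     # ignored: farther
print(color[1][1])  # -> near cube face
```

Because each fragment is tested independently, surfaces can be drawn in any order and the nearest one always wins.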

A truly real-time shading algorithm was developed by Gary Watkins for his 1970 PhD dissertation, and was the basis of the Gouraud shading technique, developed the following year.[26][27] Robert Mahl's 1970 dissertation at the University of Utah described smooth shading of quadric surfaces.[28]
Further innovations in shaded 3D graphics at the University of Utah included a more realistic shading technique by Bui Tuong Phong for his dissertation in 1973, and texture mapping by Edwin Catmull for his 1974 dissertation.[29][30]
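The shading model Phong introduced combines ambient, diffuse and specular terms at each surface point. A minimal sketch of that calculation (the vectors and coefficient values here are illustrative, not taken from Phong's dissertation):

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def phong_intensity(normal, to_light, to_viewer,
                    ka=0.1, kd=0.7, ks=0.2, shininess=10):
    """Phong illumination for one surface point.

    normal, to_light, to_viewer are 3-vectors; ka/kd/ks are ambient,
    diffuse and specular coefficients (illustrative values).
    """
    n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
    diffuse = sum(a * b for a, b in zip(n, l))
    if diffuse <= 0.0:
        return ka                    # light behind the surface: ambient only
    # Reflect the light direction about the normal: r = 2(n.l)n - l
    r = tuple(2 * diffuse * nc - lc for nc, lc in zip(n, l))
    specular = max(0.0, sum(a * b for a, b in zip(r, v))) ** shininess
    return ka + kd * diffuse + ks * specular

# Light and viewer directly along the normal gives maximum brightness:
print(round(phong_intensity((0, 0, 1), (0, 0, 1), (0, 0, 1)), 2))  # -> 1.0
```

Gouraud shading, by contrast, evaluates such an intensity only at polygon vertices and interpolates across the face, which is cheaper but misses sharp highlights.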
Around 1972, a virtual reality headset known as the "Sorcerer's Apprentice" became operational at the University of Utah, which used head tracking and a device similar to MIT's Lincoln Wand to track the user's hand in 3D space.[31] This headset, like Ivan Sutherland's "Sword of Damocles", was capable of simple, unshaded wireframe 3D graphics; however, the Sorcerer's Apprentice added the capability to create and manipulate 3D objects in real time through the hand tracking device, termed the "wand". Commands to be performed by the 3D wand could be chosen by pointing the wand at a physical wall chart.[32]
An important innovation in computer animation at the University of Utah was the creation of the program "KEYFRAME", which allowed a user to pose and keyframe a rigged humanoid 3D character, create walk cycles and other movements, and lip-sync the character, all using a mouse-based graphical interface, and then render a shaded animation of the rigged character performing the walk cycle, hand movement, or other animation. This program, as well as one for creating a 3D animation of a football match, was created by Barry Wessler for his 1973 PhD dissertation.[33] The capabilities of the "KEYFRAME" program were demonstrated in a short film, Not Just Reality, which featured walk cycles, lip syncing, facial expressions, and further movement of a shaded humanoid 3D character.[34]
In 1968, Ivan Sutherland teamed up with David Evans to found the company Evans & Sutherland – both were professors in the Computer Science Department at the University of Utah, and the company was formed to produce new hardware designed to run the systems being developed in the university. Many of these algorithms later resulted in significant hardware implementations, including the Geometry Engine, the head-mounted display, the frame buffer, and flight simulators.[35] Most of the employees were active or former students, and included Jim Clark, who started Silicon Graphics in 1981, Ed Catmull, who co-founded Pixar in 1986, and John Warnock, who founded Adobe Systems in 1982.
In 1968, a group of Soviet physicists and mathematicians headed by N. Konstantinov created a mathematical model for the motion of a cat. On a BESM-4 computer they devised a program to solve the ordinary differential equations for this model. The computer printed hundreds of frames on paper using alphabet symbols, which were later filmed in sequence, thus creating the first computer animation of a character: a walking cat.[36][37]
Charles Csuri, an artist at The Ohio State University (OSU), started experimenting with the application of computer graphics to art in 1963. His efforts resulted in a prominent CGI research laboratory that received funding from the National Science Foundation and other government and private agencies. The work at OSU revolved around animation languages, complex modeling environments, user-centric interfaces, human and creature motion descriptions, and other areas of interest to the discipline.[38][39][40]
In July 1968, the arts journal Studio International published a special issue titled Cybernetic Serendipity – The Computer and the Arts, which catalogued a comprehensive collection of items and examples of work being done in the field of computer art in organisations all over the world, and shown in exhibitions in London, UK; San Francisco, CA; and Washington, DC.[41][42] This marked a milestone in the development of the medium, and was considered by many to be of widespread influence and inspiration. Apart from all the examples mentioned above, two other particularly well known iconic images from this period include Chaos to Order[43] by Charles Csuri (often referred to as the Hummingbird), created at Ohio State University in 1967,[44] and Running Cola is Africa[45] by Masao Komura and Koji Fujino, created at the Computer Technique Group, Japan, also in 1967.[46]
The first machine to achieve widespread public attention in the media was Scanimate, an analog computer animation system designed and built by Lee Harrison of the Computer Image Corporation in Denver. From around 1969 onward, Scanimate systems were used to produce much of the video-based animation seen on television in commercials, show titles, and other graphics. It could create animations in real time, a great advantage over digital systems at the time.[47] American animation studio Hanna-Barbera experimented with using Scanimate to create an early form of digital cutout style. A clip of artists using the machine to manipulate scanned images of Scooby-Doo characters, scaling and warping the artwork to simulate animation, is available at the Internet Archive.[48]
The National Film Board of Canada, already a world center for animation art, also began experimenting with computer techniques in 1969.[49] The best known of the early pioneers in this was artist Peter Foldes, who completed Metadata in 1971. This film comprised drawings animated by gradually changing from one image to the next, a technique known as "interpolating" (also known as "inbetweening" or "morphing"), which had also featured in a number of earlier art examples during the 1960s.[50] In 1974, Foldes completed Hunger / La Faim, one of the first films to show solid filled (raster scanned) rendering, which was awarded the Jury Prize in the short film category at the 1974 Cannes Film Festival, as well as an Academy Award nomination. Foldes and the National Film Board of Canada employed pioneering keyframe computer technology developed at the National Research Council of Canada (NRC) by scientist Nestor Burtnyk in 1969. Burtnyk and his collaborator Marceli Wein received an Academy Award in 1997 in recognition of their role in the field.[51] The NRC team also contributed high-profile animation sequences to the celebrated BBC documentary series The Ascent of Man (1973).[52]
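At its simplest, the "interpolating" technique amounts to blending the corresponding points of two key drawings. A minimal sketch of that idea (an illustration of linear inbetweening in general, not Burtnyk and Wein's actual algorithm):

```python
def inbetween(key_a, key_b, t):
    """Linearly interpolate corresponding 2-D points of two key drawings.

    key_a and key_b are lists of (x, y) point pairs describing the same
    outline in two poses; t runs from 0.0 (first key) to 1.0 (second key).
    """
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(key_a, key_b)]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
diamond = [(0.5, -0.5), (1.5, 0.5), (0.5, 1.5), (-0.5, 0.5)]

# The drawing halfway between the two keys:
print(inbetween(square, diamond, 0.5))
# -> [(0.25, -0.25), (1.25, 0.25), (0.75, 1.25), (-0.25, 0.75)]
```

Generating one such in-between drawing per frame for t stepping from 0 to 1 produces the smooth metamorphosis seen in films like Metadata and Hunger.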
The Atlas Computer Laboratory near Oxford was for many years a major facility for computer animation in Britain.[53] The first entertainment cartoon made there was The Flexipede, by Tony Pritchett, which was first shown publicly at the Cybernetic Serendipity exhibition in 1968.[54] Artist Colin Emmett and animator Alan Kitching first developed solid filled colour rendering in 1972, notably for the title animation for the BBC's The Burke Special TV program.
In 1973, Kitching went on to develop software called "Antics", which allowed users to create animation without needing any programming.[55][56] The package was broadly based on conventional "cel" (celluloid) techniques, but with a wide range of tools including camera and graphics effects, interpolation ("inbetweening"/"morphing"), use of skeleton figures and grid overlays. Any number of drawings or cels could be animated at once by "choreographing" them in limitless ways using various types of "movements". At the time, only black-and-white plotter output was available, but Antics was able to produce full-color output by using the Technicolor three-strip process – hence the name Antics, coined as an acronym for ANimated Technicolor-Image Computer System.[57] Antics was used for many animation works, including the first complete documentary movie, Finite Elements, made for the Atlas Lab itself in 1975.[58]
The first feature film to use digital image processing was the 1973 film Westworld, a science-fiction film written and directed by novelist Michael Crichton, in which humanoid robots live amongst the humans.[59] John Whitney Jr. and Gary Demos at Information International, Inc. digitally processed motion picture photography to appear pixelized in order to portray the Gunslinger android's point of view. The cinegraphic block portraiture was accomplished by using the Technicolor three-strip process to color-separate each frame of the source images, then scanning them and converting them into rectangular blocks according to their tone values, and finally outputting the result back to film. The process was covered in the American Cinematographer article "Behind the scenes of Westworld".[60]
Sam Matsa, whose background in graphics started with the APT project at MIT with Doug Ross and Andy van Dam, petitioned the Association for Computing Machinery (ACM) in 1967 to form SIGGRAPH (Special Interest Committee on Computer Graphics), the forerunner of ACM SIGGRAPH.[61] In 1974, the first SIGGRAPH conference on computer graphics opened. This annual conference soon became the dominant venue for presenting innovations in the field.[62][63]
The first use of 3-D wireframe imagery in mainstream cinema was in the sequel to Westworld, Futureworld (1976), directed by Richard T. Heffron. This featured a computer-generated hand and face created by University of Utah graduate students Edwin Catmull and Fred Parke, which had initially appeared in their 1972 experimental short A Computer Animated Hand.[64] The same film also featured snippets from the 1974 experimental short Faces and Body Parts. The Academy Award-winning 1975 short animated film Great, about the life of the Victorian engineer Isambard Kingdom Brunel, contains a brief sequence of a rotating wireframe model of Brunel's final project, the iron steam ship SS Great Eastern. The third film to use this technology was Star Wars (1977), written and directed by George Lucas, with wireframe imagery in the scenes with the Death Star plans, the targeting computers in the X-wing fighters, and the Millennium Falcon spacecraft.
The Walt Disney film The Black Hole (1979, directed by Gary Nelson) used wireframe rendering to depict the titular black hole, using equipment from Disney's engineers. In the same year, the science-fiction horror film Alien, directed by Ridley Scott, also used wire-frame model graphics, in this case to render the navigation monitors in the spaceship. The footage was produced by Colin Emmett at the Atlas Computer Laboratory.[65]
Although Lawrence Livermore Labs in California is mainly known as a centre for high-level research in science, it also produced significant advances in computer animation throughout this period. Notable was Nelson Max, who joined the lab in 1971, and whose 1976 film Turning a sphere inside out is regarded as one of the classic early films in the medium (International Film Bureau, Chicago, 1976).[66] He also produced a series of "realistic-looking" molecular model animations that served to demonstrate the future role of CGI (computer-generated imagery) in scientific visualization. His research interests focused on realism in nature images, molecular graphics, computer animation, and 3D scientific visualization. He later served as computer graphics director for the Fujitsu pavilions at Expo 85 and 90 in Japan.[67][68]
In 1974, Alex Schure, a wealthy New York entrepreneur, established the Computer Graphics Laboratory (CGL) at the New York Institute of Technology (NYIT). He put together the most sophisticated studio of the time, with state-of-the-art computers, film and graphic equipment, and hired top technology experts and artists to run it – Ed Catmull, Malcolm Blanchard, Fred Parke and others, all from Utah, plus others from around the country including Ralph Guggenheim, Alvy Ray Smith and Ed Emshwiller. During the late 1970s, the staff made numerous innovative contributions to image rendering techniques, and produced much influential software, including the animation program Tween, the paint program Paint, and the animation program SoftCel. Several videos from NYIT became quite famous: Sunstone, by Ed Emshwiller, Inside a Quark, by Ned Greene, and The Works. The latter, written by Lance Williams, was begun in 1978, and was intended to be the first full-length CGI film, but it was never completed, though a trailer for it was shown at SIGGRAPH 1982. In these years, many people regarded the NYIT CGI Lab as the top computer animation research and development group in the world.[69][70]
The quality of NYIT's work attracted the attention of George Lucas, who was interested in developing a CGI visual effects facility at his company Lucasfilm. In 1979, he recruited the top talent from NYIT, including Catmull, Smith and Guggenheim, to start his division, which later spun off as Pixar, founded in 1986 with funding from Apple Inc. co-founder Steve Jobs.
The framebuffer or framestore is a graphics screen configured with a memory buffer that contains data for a complete screen image. Typically, it is a rectangular array (raster) of pixels, and the number of pixels in the width and the height is its "resolution". Color values stored in the pixels can range from 1-bit (monochrome) to 24-bit (true color, 8 bits each for RGB – red, green, and blue), or 32-bit, with an extra 8 bits used as a transparency mask (alpha channel). Before the framebuffer, graphics displays were all vector-based, tracing straight lines from one co-ordinate to another. In 1948, the Manchester Baby computer used a Williams tube, where the 1-bit display was also the memory. An early (perhaps the first known) example of a framebuffer was designed in 1969 by A. Michael Noll at Bell Labs.[71] This early system had just 2 bits, giving it 4 levels of gray scale. A later design had color, using more bits.[72][73] Laurie Spiegel implemented a simple paint program at Bell Labs to allow users to "paint" directly on the framebuffer.
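The layout described above can be sketched in a few lines: a framebuffer is just a flat array of packed pixel values, here using the 32-bit RGBA format (the resolution and colors are illustrative, not any particular historical machine):

```python
def pack_rgba(r, g, b, a=255):
    """Pack 8-bit red, green, blue and alpha channels into one
    32-bit pixel value, as in a 32-bit framebuffer."""
    return (a << 24) | (r << 16) | (g << 8) | b

def unpack_rgba(pixel):
    """Recover the four 8-bit channels from a packed 32-bit pixel."""
    return ((pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF,
            pixel & 0xFF, (pixel >> 24) & 0xFF)

# A 640x480 framebuffer as a flat list of packed pixels, cleared to opaque black:
WIDTH, HEIGHT = 640, 480
framebuffer = [pack_rgba(0, 0, 0)] * (WIDTH * HEIGHT)

def set_pixel(x, y, color):
    # Row-major addressing: pixel (x, y) lives at index y * WIDTH + x.
    framebuffer[y * WIDTH + x] = color

set_pixel(10, 20, pack_rgba(255, 128, 0))         # an orange pixel
print(unpack_rgba(framebuffer[20 * WIDTH + 10]))  # -> (255, 128, 0, 255)
```

A 1-bit or 2-bit framebuffer like Noll's works the same way, only with fewer bits per entry and correspondingly fewer gray levels.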
The development of MOS (metal–oxide–semiconductor) memory integrated-circuit chips, particularly high-density DRAM (dynamic random-access memory) chips with at least 1 kb of memory, made it practical to create a digital memory system with framebuffers capable of holding a standard-definition (SD) video image.[74][75] This led to the development of the SuperPaint system by Richard Shoup at Xerox PARC during 1972–1973.[74] It used a framebuffer displaying 640×480 pixels (standard NTSC video resolution) with eight-bit depth (256 colors). The SuperPaint software contained all the essential elements of later paint packages, allowing the user to paint and modify pixels using a palette of tools and effects, and thereby making it the first complete computer hardware and software solution for painting and editing images. Shoup also experimented with modifying the output signal using color tables, to allow the system to produce a wider variety of colors than the limited 8-bit range it contained. This scheme would later become commonplace in computer framebuffers. The SuperPaint framebuffer could also be used to capture input images from video.[76][77]
The first commercial framebuffer was produced in 1974 by Evans & Sutherland. It cost about $15,000, with a resolution of 512 by 512 pixels in 8-bit grayscale, and sold well to graphics researchers without the resources to build their own framebuffer.[78] A little later, NYIT created the first full-color 24-bit RGB framebuffer by using three of the Evans & Sutherland framebuffers linked together as one device by a minicomputer. Many of the "firsts" that happened at NYIT were based on the development of this first raster graphics system.[69]
In 1975, the UK company Quantel, founded in 1973 by Peter Michael,[79] produced the first commercial full-color broadcast framebuffer, the Quantel DFS 3000. It was first used in TV coverage of the 1976 Montreal Olympics to generate a picture-in-picture inset of the flaming Olympic torch while the rest of the picture featured the runner entering the stadium. Framebuffer technology provided the cornerstone for the future development of digital television products.[80]
By the late 1970s, it became possible for personal computers (such as the Apple II) to contain low-color framebuffers. However, it was not until the 1980s that a real revolution in the field was seen, when framebuffers capable of holding a standard video image were incorporated into standalone workstations. By the 1990s, framebuffers had become the standard for all personal computers.
At this time, a major step forward toward the goal of increased realism in 3-D animation came with the development of "fractals". The term was coined in 1975 by mathematician Benoit Mandelbrot, who used it to extend the theoretical concept of fractional dimensions to geometric patterns in nature, and published it in the English translation of his book Fractals: Form, Chance and Dimension in 1977.[81][82]
In 1979–80, the first film using fractals to generate its graphics was made by Loren Carpenter of Boeing. Titled Vol Libre, it showed a flight over a fractal landscape, and was presented at SIGGRAPH 1980.[83] Carpenter was subsequently hired by Pixar to create the fractal planet in the Genesis Effect sequence of Star Trek II: The Wrath of Khan in June 1982.[84]
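Fractal terrain of this kind is typically built by recursive subdivision: each edge is split at its midpoint, which is then displaced by a random amount that shrinks at every level. A one-dimensional sketch of that midpoint-displacement idea (parameters and seed are illustrative; Carpenter subdivided triangles in 2-D, not a single profile):

```python
import random

def midpoint_displace(profile, roughness=0.5, depth=6, seed=42):
    """Recursively subdivide a 1-D height profile, displacing each new
    midpoint by a random offset that halves at every level - the
    midpoint-displacement idea behind fractal landscapes."""
    rng = random.Random(seed)        # seeded, so the terrain is repeatable
    scale = roughness
    for _ in range(depth):
        refined = []
        for a, b in zip(profile, profile[1:]):
            mid = (a + b) / 2 + rng.uniform(-scale, scale)
            refined.extend([a, mid])
        refined.append(profile[-1])
        profile = refined
        scale /= 2                   # finer detail at each subdivision level
    return profile

# Two segments subdivided six times give 129 height samples:
ridge = midpoint_displace([0.0, 1.0, 0.0])
print(len(ridge))  # -> 129
```

Because the displacement shrinks by half at each level, the result shows the same jagged character at every magnification, which is exactly the statistical self-similarity Mandelbrot described.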
Bob Holzman of NASA's Jet Propulsion Laboratory in California established JPL's Computer Graphics Lab in 1977 as a group with technology expertise in visualizing data being returned from NASA missions. On the advice of Ivan Sutherland, Holzman hired a graduate student from Utah named Jim Blinn.[85][86] Blinn had worked with imaging techniques at Utah, and developed them into a system for NASA's visualization tasks. He produced a series of widely seen "fly-by" simulations, including the Voyager, Pioneer and Galileo spacecraft fly-bys of Jupiter, Saturn and their moons. He also worked with Carl Sagan, creating animations for his Cosmos: A Personal Voyage TV series. Blinn developed many influential new modelling techniques, and wrote papers on them for the IEEE (Institute of Electrical and Electronics Engineers), in its journal Computer Graphics and Applications. Some of these included environment mapping, improved highlight modelling, "blobby" modelling, simulation of wrinkled surfaces, and simulation of clouds and dusty surfaces.
Later in the 1980s, Blinn developed CGI animations for an Annenberg/CPB TV series, The Mechanical Universe, which consisted of over 500 scenes for 52 half-hour programs describing physics and mathematics concepts for college students. This he followed with production of another series devoted to mathematical concepts, called Project Mathematics!.[87]
Motion control photography is a technique that uses a computer to record (or specify) the exact motion of a film camera during a shot, so that the motion can be precisely duplicated again, or alternatively recreated on another computer, and combined with the movement of other sources, such as CGI elements. Early forms of motion control go back to John Whitney's 1968 work on 2001: A Space Odyssey, and the effects on the 1977 film Star Wars Episode IV: A New Hope, by George Lucas's newly created company Industrial Light & Magic (ILM) in California. ILM created a digitally controlled camera known as the Dykstraflex, which performed complex and repeatable motions around stationary spaceship models, enabling separately filmed elements (spaceships, backgrounds, etc.) to be coordinated more accurately with one another. However, neither of these was actually computer-based – the Dykstraflex was essentially a custom-built, hard-wired collection of knobs and switches.[88] The first commercial computer-based motion control and CGI system was developed in 1981 in the UK by Moving Picture Company designer Bill Mather.[89]
3D computer graphics software began appearing for home computers in the late 1970s. The earliest known example is 3D Art Graphics, a set of 3D computer graphics effects written by Kazumasa Mitazawa and released in June 1978 for the Apple II.[90][91]
Silicon Graphics, Inc. (SGI) was a manufacturer of high-performance computer hardware and software, founded in 1981 by Jim Clark. His idea, called the Geometry Engine, was to create a series of components in a VLSI processor that would accomplish the main operations required in image synthesis – the matrix transforms, clipping, and the scaling operations that provided the transformation to view space. Clark attempted to shop his design around to computer companies, and finding no takers, he and colleagues at Stanford University, California, started their own company, Silicon Graphics.[92]
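The matrix transforms the Geometry Engine pipelined in hardware are 4×4 matrix multiplications applied to points in homogeneous coordinates. A minimal software sketch of that core operation (illustrative only, not the Geometry Engine's actual microcode):

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix by a homogeneous 4-vector - the basic
    geometry operation used for model, view and projection transforms."""
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def translation(tx, ty, tz):
    # A homogeneous translation matrix: moves positions, leaves directions alone.
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

point = (1, 2, 3, 1)  # w = 1 marks a position, not a direction
print(mat_vec(translation(10, 0, 0), point))  # -> (11, 2, 3, 1)
```

Doing this multiply, plus clipping and scaling, in dedicated VLSI rather than on a general-purpose CPU is what gave Clark's design its speed advantage.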
SGI's first product (1984) was the IRIS (Integrated Raster Imaging System). It used the 8 MHz M68000 processor with up to 2 MB of memory, a custom 1024×1024 frame buffer, and the Geometry Engine to give the workstation its impressive image generation power. Its initial market was 3D graphics display terminals, but SGI's products, strategies and market positions evolved significantly over time, and for many years SGI was a favoured choice for CGI companies in film, TV, and other fields.[93]
In 1981, Quantel released the "Paintbox", the first broadcast-quality turnkey system designed for the creation and composition of television video and graphics. Its design emphasized the studio workflow efficiency required for live news production. Essentially, it was a framebuffer packaged with innovative user software, and it rapidly found applications in news, weather, station promos, commercials, and the like. Although it was essentially a design tool for still images, it was also sometimes used for frame-by-frame animations. Following its initial launch, it revolutionised the production of television graphics, and some Paintboxes are still in use today due to their image quality and versatility.[94]
This was followed in 1982 by the Quantel Mirage, or DVM8000/1 "Digital Video Manipulator", a digital real-time video effects processor. This was based on Quantel's own hardware, plus a Hewlett-Packard computer for custom program effects. It was capable of warping a live video stream by texture mapping it onto an arbitrary three-dimensional shape, around which the viewer could freely rotate or zoom in real time. It could also interpolate, or morph, between two different shapes. It was considered the first real-time 3D video effects processor, and the progenitor of subsequent DVE (digital video effect) machines. In 1985, Quantel went on to produce "Harry", the first all-digital non-linear editing and effects compositing system.[95]
In 1982, Japan's Osaka University developed the LINKS-1 Computer Graphics System, a supercomputer that used up to 257 Zilog Z8001 microprocessors, used for rendering realistic 3D computer graphics. According to the Information Processing Society of Japan: "The core of 3D image rendering is calculating the luminance of each pixel making up a rendered surface from the given viewpoint, light source, and object position. The LINKS-1 system was developed to realize an image rendering methodology in which each pixel could be parallel processed independently using ray tracing. By developing a new software methodology specifically for high-speed image rendering, LINKS-1 was able to rapidly render highly realistic images." It was "used to create the world's first 3D planetarium-like video of the entire heavens that was made completely with computer graphics. The video was presented at the Fujitsu pavilion at the 1985 International Exposition in Tsukuba."[96] The LINKS-1 was the world's most powerful computer as of 1984.[97]
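The reason each pixel can be processed independently, as the LINKS-1 exploited, is that ray tracing starts from the pixel: a ray is cast from the viewpoint through that pixel and intersected with the scene. A minimal sketch of the core intersection test for a sphere (illustrative geometry, not the LINKS-1 software):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance along a ray to its first hit on a sphere,
    or None if it misses - the per-pixel test at the heart of ray tracing.

    Solves |origin + t*direction - center|^2 = radius^2 for t, a quadratic.
    """
    ox, oy, oz = (o - c for o, c in zip(origin, center))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                        # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)   # nearer of the two roots
    return t if t > 0 else None

# A ray from the origin straight down the z-axis hits a unit sphere
# centred 5 units away at distance 4:
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```

Since no pixel's ray depends on any other pixel's result, the whole image can be farmed out across processors, which is exactly how the LINKS-1 achieved its speed.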
In the 1980s, the University of Montreal was at the forefront of computer animation, with three successful short 3-D animated films featuring 3-D characters.
In 1983, Philippe Bergeron, Nadia Magnenat Thalmann, and Daniel Thalmann directed Dream Flight, considered the first 3-D generated film to tell a story. The film was completely programmed using the MIRA graphical language,[98] an extension of the Pascal programming language based on abstract graphical data types.[99] The film won several awards and was shown at the SIGGRAPH '83 Film Show.
In 1985, Pierre Lachapelle, Philippe Bergeron, Pierre Robidoux and Daniel Langlois directed Tony de Peltrie, which features the first animated human character to express emotion through facial expressions and body movements, touching the feelings of the audience.[100][101] Tony de Peltrie premiered as the closing film of SIGGRAPH '85.
In 1987, the Engineering Institute of Canada celebrated its 100th anniversary. A major event, sponsored by Bell Canada and Northern Telecom (now Nortel), was planned for the Place des Arts in Montreal. For this event, Nadia Magnenat Thalmann and Daniel Thalmann simulated Marilyn Monroe and Humphrey Bogart meeting in a café in the old town section of Montreal. The short movie, called Rendez-vous in Montreal,[102] was shown in numerous festivals and on TV channels all over the world.
The Sun Microsystems company was founded in 1982 by Andy Bechtolsheim with other fellow graduate students at Stanford University. Bechtolsheim originally designed the SUN computer as a personal CAD workstation for the Stanford University Network (hence the acronym "SUN"). It was designed around the Motorola 68000 processor with the Unix operating system and virtual memory, and, like SGI, had an embedded frame buffer.[103] Later developments included computer servers and workstations built on its own RISC-based processor architecture and a suite of software products such as the Solaris operating system and the Java platform. By the '90s, Sun workstations were popular for rendering in 3-D CGI filmmaking—for example, Disney-Pixar's 1995 movie Toy Story used a render farm of 117 Sun workstations.[104] Sun was a proponent of open systems in general and Unix in particular, and a major contributor to open source software.[105]
The NFB's French-language animation studio founded its Centre d'animatique in 1980, at a cost of $1 million CAD, with a team of six computer graphics specialists. The unit was initially tasked with creating stereoscopic CGI sequences for the NFB's 3-D IMAX film Transitions for Expo 86. Staff at the Centre d'animatique included Daniel Langlois, who left in 1986 to form Softimage.[106][107]
Also in 1982, the first complete turnkey system designed specifically for creating broadcast-standard animation was produced by the Japanese company Nippon Univac Kaisha ("NUK", later merged with Burroughs), and incorporated the Antics 2-D computer animation software developed by Alan Kitching from his earlier versions. The configuration was based on the VAX 11/780 computer, linked to a Bosch 1-inch VTR via NUK's own framebuffer. This framebuffer also showed realtime instant replays of animated vector sequences ("line test"), though finished full-color recording would take many seconds per frame.[108][109][110] The full system was successfully sold to broadcasters and animation production companies across Japan. Later in the '80s, Kitching developed versions of Antics for SGI and Apple Mac platforms, and these achieved a wider global distribution.[111]
The first cinema feature movie to make extensive use of solid 3-D CGI was Walt Disney's Tron, directed by Steven Lisberger, in 1982. The film is celebrated as a milestone in the industry, though less than twenty minutes of this animation were actually used—mainly the scenes that show digital "terrain", or include vehicles such as Light Cycles, tanks and ships. To create the CGI scenes, Disney turned to the four leading computer graphics firms of the day: Information International Inc. and Robert Abel and Associates (both in California), and MAGI and Digital Effects (both in New York). Each worked on a separate aspect of the movie, without any particular collaboration.[112] Tron was a box office success, grossing $33 million on a budget of $17 million.[113]
In 1984, Tron was followed by The Last Starfighter, a Universal Pictures / Lorimar production directed by Nick Castle, one of cinema's earliest films to use extensive CGI to depict its many starships, environments and battle scenes. This was a great step forward compared with other films of the day, such as Return of the Jedi, which still used conventional physical models.[114] The computer graphics for the film were designed by artist Ron Cobb, and rendered by Digital Productions on a Cray X-MP supercomputer. A total of 27 minutes of finished CGI footage was produced—considered an enormous quantity at the time. The company estimated that using computer animation required only half the time, and one-half to one-third the cost, of traditional visual effects.[115] The movie was a financial success, earning over $28 million on an estimated budget of $15 million.[116]
The terms inbetweening and morphing are often used interchangeably, and signify the creation of a sequence of images in which one image transforms gradually and smoothly into another by small steps. Graphically, an early example would be Charles Philipon's famous 1831 caricature of French King Louis Philippe turning into a pear (metamorphosis).[117] "Inbetweening" (AKA "tweening") is a term specifically coined for traditional animation technique, an early example being in E.G. Lutz's 1920 book Animated Cartoons.[118] In computer animation, inbetweening was used from the beginning (e.g., John Whitney in the '50s, Charles Csuri and Masao Komura in the '60s).[41] These pioneering examples were vector-based, comprising only outline drawings (as was also usual in conventional animation technique), and would often be described mathematically as "interpolation". Inbetweening with solid-filled colors appeared in the early '70s (e.g., Alan Kitching's Antics at Atlas Lab, 1973,[57] and Peter Foldes' La Faim at NFBC, 1974[50]), but these were still entirely vector-based.
The term "morphing" did not become current until the late '80s, when it specifically applied to computer inbetweening with photographic images—for example, to make one face transform smoothly into another. The technique uses grids (or "meshes") overlaid on the images, to delineate the shape of key features (eyes, nose, mouth, etc.). Morphing then inbetweens one mesh to the next, and uses the resulting mesh to distort the image and simultaneously dissolve one to another, thereby preserving a coherent internal structure throughout. Thus, several different digital techniques come together in morphing.[119] Computer distortion of photographic images was first done by NASA, in the mid-1960s, to align Landsat and Skylab satellite images with each other. Texture mapping, which applies a photographic image to a 3D surface in another image, was first defined by Jim Blinn and Martin Newell in 1976. A 1980 paper by Ed Catmull and Alvy Ray Smith on geometric transformations introduced a mesh-warping algorithm.[120] The earliest full demonstration of morphing was at the 1982 SIGGRAPH conference, where Tom Brigham of NYIT presented a short film sequence in which a woman transformed, or "morphed", into a lynx.
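At its core, inbetweening mesh control points is linear interpolation. A minimal Python sketch of one morph step (the function name and the simple linear scheme are illustrative, not taken from any particular morphing package):

```python
def morph_step(src_points, dst_points, t):
    """Linearly interpolate corresponding mesh control points.

    src_points, dst_points: lists of (x, y) tuples of equal length,
    marking key features (eyes, nose, mouth) on source and destination
    images. t is the morph parameter: 0.0 gives the source mesh,
    1.0 the destination mesh. Returns the intermediate mesh and the
    cross-dissolve weight used to blend the two warped images.
    """
    mesh = [((1 - t) * sx + t * dx, (1 - t) * sy + t * dy)
            for (sx, sy), (dx, dy) in zip(src_points, dst_points)]
    return mesh, t  # blend: (1 - t) * warped_src + t * warped_dst

# A single control point moving from (0, 0) to (10, 20), halfway through:
mesh, blend = morph_step([(0, 0)], [(10, 20)], 0.5)
# mesh == [(5.0, 10.0)], blend == 0.5
```

A full morpher would then warp each image toward the intermediate mesh and cross-dissolve the two results using the returned weight, which is what preserves a coherent internal structure throughout the transition.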
The first cinema movie to use morphing was Ron Howard's 1988 fantasy film Willow, in which the main character, Willow, uses a magic wand to transform a creature from one animal to another and, finally, into a sorceress.
With 3-D CGI, the inbetweening of photo-realistic computer models can also produce results similar to morphing, though technically, it is an entirely different process (but is nevertheless often also referred to as "morphing"). An early example is Nelson Max's 1977 film Turning a sphere inside out.[67] The first cinema feature film to use this technique was the 1986 Star Trek IV: The Voyage Home, directed by Leonard Nimoy, with visual effects by George Lucas's company Industrial Light & Magic (ILM). The movie includes a dream sequence where the crew travel back in time, and images of their faces transform into one another. To create it, ILM employed a new 3D scanning technology developed by Cyberware to digitize the cast members' heads, and used the resulting data for the computer models. Because each head model had the same number of key points, transforming one character into another was a relatively simple inbetweening.[121]
In 1989, James Cameron's underwater action movie The Abyss was released. This was one of the first cinema movies to include photo-realistic CGI integrated seamlessly into live-action scenes. A five-minute sequence featuring an animated tentacle or "pseudopod" was created by ILM, who designed a program to produce surface waves of differing sizes and kinetic properties for the pseudopod, including reflection, refraction and a morphing sequence. Although short, this successful blend of CGI and live-action is widely considered a milestone in setting the direction for future development in the field.[122]
The Great Mouse Detective (1986) was the first Disney film to extensively use computer animation, a fact that Disney used to promote the film during marketing. CGI was used during a two-minute climactic scene set inside Big Ben, inspired by a similar climax scene in Hayao Miyazaki's The Castle of Cagliostro (1979). The Great Mouse Detective, in turn, paved the way for the Disney Renaissance.[123][124]
The late 1980s saw another milestone in computer animation, this time in 2-D: the development of Disney's "Computer Animation Production System", known as "CAPS/ink & paint". This was a custom collection of software, scanners and networked workstations developed by The Walt Disney Company in collaboration with Pixar. Its purpose was to computerize the ink-and-paint and post-production processes of traditionally animated films, to allow more efficient and sophisticated post-production by making the practice of hand-painting cels obsolete. The animators' drawings and background paintings are scanned into the computer, and animation drawings are inked and painted by digital artists. The drawings and backgrounds are then combined, using software that allows for camera movements, multiplane effects, and other techniques—including compositing with 3-D image material. The system's first feature film use was in The Little Mermaid (1989), for the "farewell rainbow" scene near the end, but the first full-scale use was for The Rescuers Down Under (1990), which therefore became the first traditionally animated film to be entirely produced on computer—or indeed, the first 100% digital feature film of any kind ever produced.[125][126]
The 1980s saw the appearance of many notable new commercial software products:
The decade saw some of the first computer-animated television series. For example, Quarxs, created by media artist Maurice Benayoun and comic book artist François Schuiten, was an early example of a CGI series based on a real screenplay and not animated solely for demonstrative purposes.[135] VeggieTales, an American Christian media franchise, is also one of the first computer-animated series. Phil Vischer came up with the idea for VeggieTales while testing animation software as a medium for children's videos in the early 1990s.
The 1990s began with much of CGI technology now sufficiently developed to allow a major expansion into film and TV production. 1991 is widely considered the "breakout year", with two major box-office successes, both making heavy use of CGI.
The first of these was James Cameron's movie Terminator 2: Judgment Day,[136] which first brought CGI to widespread public attention. The technique was used to animate the two "Terminator" robots. The "T-1000" robot was given a "mimetic poly-alloy" (liquid metal) structure, which enabled this shapeshifting character to morph into almost anything it touched. Most of the key Terminator effects were provided by Industrial Light & Magic, and this film was the most ambitious CGI project since the 1982 film Tron.[137]
The other was Disney's Beauty and the Beast,[138] the second traditional 2-D animated film to be entirely made using CAPS. The system also allowed easier combination of hand-drawn art with 3-D CGI material, notably in the "waltz sequence", where Belle and Beast dance through a computer-generated ballroom as the camera "dollies" around them in simulated 3-D space.[139] Notably, Beauty and the Beast was the first animated film ever to be nominated for a Best Picture Academy Award.[140]
Another significant step came in 1993, with Steven Spielberg's Jurassic Park,[141] where 3-D CGI dinosaurs were integrated with life-sized animatronic counterparts. The CGI animals were created by ILM, and in a test scene to make a direct comparison of both techniques, Spielberg chose the CGI. Also watching was George Lucas, who remarked "a major gap had been crossed, and things were never going to be the same."[142][143][144]
Flocking is the behavior exhibited when a group of birds (or other animals) move together in a flock. A mathematical model of flocking behavior was first simulated on a computer in 1986 by Craig Reynolds, and soon found its use in animation, beginning with Stanley and Stella in: Breaking the Ice. Jurassic Park notably featured flocking, and brought it to widespread attention by mentioning it in the actual script[citation needed]. Other early uses were the flocking bats in Tim Burton's Batman Returns (1992), and the wildebeest stampede in Disney's The Lion King (1994).[145]
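Reynolds' model steers each "boid" by three local rules: separation (avoid crowding neighbours), alignment (match neighbours' heading), and cohesion (move toward the neighbours' centre). A toy 2-D sketch of one update step, with illustrative weights and global (rather than local-neighbourhood) averaging, neither of which matches Reynolds' original parameters:

```python
def flock_step(positions, velocities, dt=1.0,
               w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """One flocking update for a list of 2-D boids.

    positions, velocities: lists of (x, y) tuples. Each boid steers
    toward the others' centre of mass (cohesion), toward their mean
    velocity (alignment), and away from the centre (separation).
    Returns the new positions and velocities.
    """
    n = len(positions)
    new_pos, new_vel = [], []
    for i in range(n):
        px, py = positions[i]
        vx, vy = velocities[i]
        # centre of mass and mean velocity of all other boids
        cx = sum(p[0] for j, p in enumerate(positions) if j != i) / (n - 1)
        cy = sum(p[1] for j, p in enumerate(positions) if j != i) / (n - 1)
        ax = sum(v[0] for j, v in enumerate(velocities) if j != i) / (n - 1)
        ay = sum(v[1] for j, v in enumerate(velocities) if j != i) / (n - 1)
        vx += w_coh * (cx - px) + w_ali * (ax - vx) + w_sep * (px - cx)
        vy += w_coh * (cy - py) + w_ali * (ay - vy) + w_sep * (py - cy)
        new_vel.append((vx, vy))
        new_pos.append((px + vx * dt, py + vy * dt))
    return new_pos, new_vel
```

Iterating this step produces the characteristic emergent group motion: no boid follows a leader or a script, yet the flock coheres, which is what made the technique so attractive for animating bats, wildebeest and crowds.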
With improving hardware, lower costs, and an ever-increasing range of software tools, CGI techniques were soon rapidly taken up in both film and television production.
In 1993, J. Michael Straczynski's Babylon 5 became the first major television series to use CGI as the primary method for its visual effects (rather than using hand-built models), followed later the same year by Rockne S. O'Bannon's SeaQuest DSV.
Also the same year, the French company Studio Fantome produced the first full-length completely computer-animated TV series, Insektors (26×13'),[146][147] though it had also produced an even earlier all-3-D short series, Geometric Fables (50×5'), in 1991.[148] A little later, in 1994, the Canadian TV CGI series ReBoot (48×23') was aired, produced by Mainframe Entertainment and Alliance Atlantis Communications, two companies that also created Beast Wars: Transformers, released two years after ReBoot.[149]
In 1995 came the first fully computer-animated feature film, Disney-Pixar's Toy Story, which was a huge commercial success.[150] The film was directed by John Lasseter, a co-founder of Pixar and former Disney animator, who had started at Pixar with short films such as Luxo Jr. (1986), Red's Dream (1987), and Tin Toy (1988), the last of which was also the first computer-generated animated short film to win an Academy Award. After long negotiations between Disney and Pixar, a partnership deal was agreed in 1991 with the aim of producing a full feature movie, and Toy Story was the result.[151]
The following years saw a greatly increased uptake of digital animation techniques, with many new studios going into production, and existing companies making a transition from traditional techniques to CGI. Between 1995 and 2005 in the US, the average effects budget for a wide-release feature film leapt from $5 million to $40 million. According to Hutch Parker, President of Production at 20th Century Fox, as of 2005[update], "50 percent of feature films have significant effects. They're a character in the movie." However, films using CGI have, on average, grossed over 20% more than their counterparts using traditional effects, and by the early 2000s, computer-generated imagery had become the dominant form of special effects.[152]
Warner Bros' 1999 The Iron Giant was the first traditionally animated feature in which a major character, the title character, was fully CGI.[153]
Motion-capture, or "mo-cap", records the movement of external objects or people, and has applications for medicine, sports, robotics, and the military, as well as for animation in film, TV and games. The earliest example would be in 1878, with the pioneering photographic work of Eadweard Muybridge on human and animal locomotion, which is still a source for animators today.[154] Before computer graphics, capturing movements to use in animation was done using rotoscoping, in which the motion of an actor was filmed, and the film then used as a frame-by-frame guide for the motion of a hand-drawn animated character. The first example of this was Max Fleischer's Out of the Inkwell series in 1915, and a more recent notable example is Ralph Bakshi's 1978 2-D animated movie The Lord of the Rings.
Computer-based motion-capture started as a photogrammetric analysis tool in biomechanics research in the 1970s and 1980s.[155] A performer wears markers near each joint to identify the motion by the positions or angles between the markers. Many different types of markers can be used—lights, reflective markers, LEDs, infra-red, inertial, mechanical, or wireless RF—and may be worn as a form of suit, or attached directly to a performer's body. Some systems include details of face and fingers to capture subtle expressions, a practice often referred to as "performance-capture". The computer records the data from the markers, and uses it to animate digital character models in 2-D or 3-D computer animation; in some cases this can include camera movement as well. In the 1990s, these techniques became widely used for visual effects.
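The basic photogrammetric step behind marker-based capture—turning marker positions into joint angles—can be sketched as follows. The function name and the three-marker arrangement are illustrative, not taken from any particular capture system:

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at marker b, formed by markers a-b-c.

    a, b, c: (x, y, z) marker positions, e.g. shoulder, elbow, wrist.
    The angle is computed from the dot product of the two limb
    vectors b->a and b->c: positions in, joint angles out.
    """
    u = tuple(ai - bi for ai, bi in zip(a, b))
    v = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# A right angle at the "elbow":
# joint_angle((1, 0, 0), (0, 0, 0), (0, 1, 0)) -> 90.0
```

A capture system records such angles (or the raw positions) for every joint on every frame, and the animation software replays them on the corresponding joints of the digital character model.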
Video games also began to use motion-capture to animate in-game characters. As early as 1988, an early form of motion-capture was used to animate the 2-D main character of the Martech video game Vixen, which was performed by model Corinne Russell.[156] Motion-capture was later notably used to animate the 3-D character models in the Sega Model 2 arcade game Virtua Fighter 2 in 1994.[157] In 1995, examples included the Atari Jaguar CD-based game Highlander: The Last of the MacLeods,[158][159] and the arcade fighting game Soul Edge, which was the first video game to use passive optical motion-capture technology.[160]
Another breakthrough came in 1997, when motion-capture was used to create hundreds of digital characters for the film Titanic. The technique was used extensively in 1999 to create Jar-Jar Binks and other digital characters in Star Wars: Episode I – The Phantom Menace.
Match moving (also known as motion tracking or camera tracking), although related to motion capture, is a completely different technique. Instead of using special cameras and sensors to record the motion of subjects, match moving works with pre-existing live-action footage, and uses computer software alone to track specific points in the scene through multiple frames, thereby allowing the insertion of CGI elements into the shot with correct position, scale, orientation, and motion relative to the existing material. The terms are used loosely to describe several different methods of extracting subject or camera motion information from a motion picture. The technique can be 2D or 3D, and can also include matching for camera movements. The earliest commercial software examples were 3D-Equalizer from Science.D.Visions[161] and rastrack from Hammerhead Productions,[162] both starting in the mid-'90s.
The first step is identifying suitable features that the software tracking algorithm can lock onto and follow. Typically, features are chosen because they are bright or dark spots, edges or corners, or a facial feature—depending on the particular tracking algorithm being used. When a feature is tracked, it becomes a series of 2-D coordinates that represent the position of the feature across the series of frames. Such tracks can be used immediately for 2-D motion tracking, or be used to calculate 3-D information. In 3-D tracking, a process known as "calibration" derives the motion of the camera from the inverse-projection of the 2-D paths, and from this a "reconstruction" process is used to recreate the photographed subject from the tracked data, as well as any camera movement. This then allows an identical virtual camera to be moved in a 3-D animation program, so that new animated elements can be composited back into the original live-action shot in perfectly matched perspective.[163]
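The 2-D tracking step described above can be sketched as a simple sum-of-squared-differences (SSD) template match: a small patch around the feature in one frame is compared against nearby candidate positions in the next. Real trackers add sub-pixel refinement and smarter search; this toy version (frame layout and parameter names are illustrative) shows only the core idea:

```python
def track_feature(prev, curr, x, y, win=1, search=2):
    """Track a feature from frame `prev` at (x, y) into frame `curr`.

    Frames are 2-D lists of grey levels, indexed [row][column].
    A (2*win+1)^2 template around (x, y) is compared, by sum of
    squared differences, against every candidate position within
    +/-`search` pixels; the best match is the feature's new 2-D
    coordinate in the next frame.
    """
    def patch(img, cx, cy):
        return [img[cy + dy][cx + dx]
                for dy in range(-win, win + 1)
                for dx in range(-win, win + 1)]
    template = patch(prev, x, y)
    best, best_ssd = (x, y), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = patch(curr, x + dx, y + dy)
            ssd = sum((t - c) ** 2 for t, c in zip(template, cand))
            if ssd < best_ssd:
                best, best_ssd = (x + dx, y + dy), ssd
    return best

# A bright spot at column 3, row 3 moves one pixel right between frames:
prev = [[0] * 7 for _ in range(7)]
curr = [[0] * 7 for _ in range(7)]
prev[3][3] = 255
curr[3][4] = 255
# track_feature(prev, curr, 3, 3) -> (4, 3)
```

Repeating this frame after frame yields exactly the series of 2-D coordinates the article describes, which calibration and reconstruction then turn into 3-D camera and subject motion.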
In the 1990s, the technology progressed to the point that it became possible to include virtual stunt doubles. Camera tracking software was refined to allow increasingly complex visual effects developments that were previously impossible. Computer-generated extras also came to be used extensively in crowd scenes with advanced flocking and crowd simulation software. Being mainly software-based, match moving has become increasingly affordable as computers become cheaper and more powerful. It has become an essential visual effects tool and is even used to provide effects in live television broadcasts.[164]
In television, a virtual studio, or virtual set, is a studio that allows the real-time combination of people or other real objects with computer-generated environments and objects in a seamless manner. It requires that the 3-D CGI environment is automatically locked to follow any movements of the live camera and lens precisely. The essence of such a system is that it uses some form of camera tracking to create a live stream of data describing the exact camera movement, plus some realtime CGI rendering software that uses the camera tracking data to generate a synthetic image of the virtual set exactly linked to the camera motion. Both streams are then combined with a video mixer, typically using chroma key. Such virtual sets became common in TV programs in the 1990s, with the first practical system of this kind being the Synthevision virtual studio developed by the Japanese broadcasting corporation NHK (Nippon Hoso Kyokai) in 1991, and first used in their science special, Nano-space.[165][166] Virtual studio techniques are also used in filmmaking, but this medium does not have the same requirement to operate entirely in realtime. Motion control or camera tracking can be used separately to generate the CGI elements later, which are then combined with the live action as a post-production process. However, by the 2000s, computer power had improved sufficiently to allow many virtual film sets to be generated in realtime, as in TV, making it unnecessary to composite anything in post-production.
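The chroma-key step that merges the two streams makes a per-pixel decision: foreground pixels close to the key colour are replaced by the rendered set. A minimal sketch (broadcast keyers do this per-field in dedicated hardware, with soft edges and spill suppression; the threshold and colour distance used here are illustrative):

```python
def chroma_key(fg, bg, key=(0, 255, 0), tol=100):
    """Composite foreground over background by chroma key.

    fg, bg: same-size 2-D lists of (r, g, b) pixels. Pixels of `fg`
    whose Euclidean RGB distance to the `key` colour (green by
    default) is below `tol` are replaced by the corresponding `bg`
    pixel, as a video mixer does when inserting a virtual set.
    """
    def dist2(p, q):
        return sum((pc - qc) ** 2 for pc, qc in zip(p, q))
    return [[bgp if dist2(fgp, key) < tol * tol else fgp
             for fgp, bgp in zip(frow, brow)]
            for frow, brow in zip(fg, bg)]

# One row of two pixels: pure green is keyed out, red is kept.
out = chroma_key([[(0, 255, 0), (200, 0, 0)]],
                 [[(9, 9, 9), (9, 9, 9)]])
# out == [[(9, 9, 9), (200, 0, 0)]]
```

In a virtual studio, `bg` is the synthetic set image rendered live from the camera-tracking data, so the keyed composite stays locked to the real camera's motion.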
Machinima uses realtime 3-D computer graphics rendering engines to create a cinematic production. Most often, video game engines are used for this. The Academy of Machinima Arts & Sciences (AMAS), a non-profit organization formed in 2002 and dedicated to promoting machinima, defines machinima as "animated filmmaking within a real-time virtual 3-D environment". AMAS recognizes exemplary productions through awards given at its annual Machinima Film Festival.[167][168] The practice of using graphics engines from video games arose from the animated software introductions of the '80s "demoscene", Disney Interactive Studios' 1992 video game Stunt Island, and '90s recordings of gameplay in first-person shooter video games, such as id Software's Doom and Quake. Machinima-based artists are sometimes called machinimists or machinimators.
There were many developments, mergers and deals in the 3-D software industry in the '90s and later.
In 2000, a team led by Paul Debevec managed to adequately capture (and simulate) the reflectance field of the human face using the simplest of light stages,[183] which was the last missing piece of the puzzle needed to make digital look-alikes of known actors.
The first mainstream cinema film fully made with motion-capture was the 2001 Japanese-American Final Fantasy: The Spirits Within, directed by Hironobu Sakaguchi, which was also the first to use photorealistic CGI characters.[184] The film was not a box-office success.[185] Some commentators have suggested this may be partly because the lead CGI characters had facial features that fell into the "uncanny valley".[186] In 2002, Peter Jackson's The Lord of the Rings: The Two Towers was the first feature film to use a realtime motion-capture system, which allowed the actions of actor Andy Serkis to be fed directly into the 3-D CGI model of Gollum as it was being performed.[187]
Motion capture is seen by many as replacing the skills of the animator, and lacking the animator's ability to create exaggerated movements that are impossible to perform live. The end credits of Pixar's film Ratatouille (2007) carry a stamp certifying it as "100% Pure Animation — No Motion Capture!" However, proponents point out that the technique usually includes a good deal of adjustment work by animators as well. Nevertheless, in 2010, the US Film Academy (AMPAS) announced that motion-capture films will no longer be considered eligible for "Best Animated Feature Film" Oscars, stating "Motion capture by itself is not an animation technique."[188][189]
The early 2000s saw the advent of fully virtual cinematography, with its audience debut generally considered to be in the 2003 films The Matrix Reloaded and The Matrix Revolutions, whose digital look-alikes were so convincing that it is often impossible to know whether an image shows a human photographed with a camera or a digital look-alike shot with a simulation of a camera. The scenes built and imaged within virtual cinematography are the "Burly Brawl" and the final showdown between Neo and Agent Smith. With conventional cinematographic methods, the Burly Brawl would have been prohibitively time-consuming to make, requiring years of compositing for a scene of a few minutes. Nor could a human actor have been used for the final showdown in The Matrix Revolutions: Agent Smith's cheekbone is punched in by Neo, leaving the digital look-alike naturally unhurt.
At SIGGRAPH 2013, Activision and USC presented a real-time digital face look-alike of "Ira", using the USC Light Stage X by Ghosh et al. for both reflectance field and motion capture.[190][191] The result, both precomputed and real-time rendered with a state-of-the-art graphics processing unit as Digital Ira,[190] looks fairly realistic. Techniques previously confined to high-end virtual cinematography systems are rapidly moving into video games and leisure applications.
New developments in computer animation technologies are reported each year in the United States at SIGGRAPH, the largest annual conference on computer graphics and interactive techniques, and also at Eurographics, and at other conferences around the world.[192]
In front of the oscilloscope, a 35 mm camera with an extended magazine was mounted on a custom-made stand. The camera was controlled automatically by the computer, which sent a signal to the camera whenever a new image had been fed to the oscilloscope. The engineers at the Nordic ADB company realized that they already had all the coordinates needed to draw a perspective view from a driver's seat, and as a demonstration chose the then newly planned motorway toward Nacka, outside Stockholm. With the camera in front of the oscilloscope, they could snap a picture every twenty metres of the virtual road. The result was a fictitious trip along the virtual highway at a speed of 110 km/h. The film was transferred to 16 mm format and made in 100 copies. The only known surviving copy is held in the collections of the Technical Museum. The film-roll box claims it is the first computer-drawn film in the world, though there is little other evidence that this is actually true. The film aired on November 9, 1961 at primetime in the national television newscast Aktuellt.
In 1964, William Fetter, a Boeing technical illustrator, created the first digital model of a human body to evaluate engineering designs for ergonomic quality. Exploring reach and visual field issues, he plotted a series of individual models of "The Boeing Man," which later came to be known simply as "Boeman," and produced early computer animation sequences.
William Fetter (1928–2002), a Boeing art director, was the first person to draw a human figure using a computer. This figure is known as the "Boeing Man." In 1960, Fetter coined the term "computer graphics" in a description of his work on cockpit design for the Boeing Company.