CROSS-REFERENCE TO RELATED APPLICATIONS The present disclosure may be related to the following commonly assigned applications/patents:
This application claims priority from co-pending U.S. Provisional Patent Application No. 60/623,414 filed Oct. 28, 2004 by Alvarez et al. and entitled “Client/Server-Based Animation Software.”
This application also claims priority from co-pending U.S. Provisional Patent Application No. 60/623,415 filed Oct. 28, 2004 by Alvarez et al. and entitled “Control Having Interchangeable Coordinate Control Systems.”
This application is also related to co-pending U.S. patent application Ser. No. ______, filed on a date even herewith by Alvarez et al. and entitled “Camera and Animation Controller, Systems and Methods.”
The respective disclosures of these applications/patents are incorporated herein by reference in their entirety for all purposes.
COPYRIGHT STATEMENT A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
FIELD OF THE INVENTION The present invention relates to the field of animation and filmmaking in general and, in particular, to software, systems and methods for creating and/or editing animations and/or films, including any type of film-based and/or digital still and/or video image production.
BACKGROUND OF THE INVENTION Animated films have long been favorites of both children and adults. More recently, advances in computer animation have facilitated the process of making animated films (and of storyboarding and/or adding computer effects to live-action films). Generally, animation software has been run on a PC equipped with a display, keyboard, mouse, animation software, and rendering software. Each PC is a standalone unit that contains the animation data to be worked on and runs the animation software that applies to that data the contributions and movements imparted by the programmer or artist at the PC.
In some cases, especially in large organizations, networked computers have been used in the animation process. Referring to FIG. 1, a typical network might comprise a central server system S with version tracking software 100, which stores the animation data files in bulk storage 101. When a user wishes to perform an animation task, she accesses the server, checks out the relevant data files, and alters the animation data files with animation software 111 and actuators 115 (here represented as a keyboard and mouse) resident in the PC. In order to check her work, the artist replays the altered animation data locally through rendering software 112 resident on the PC, viewing the animation data movements at the PC's local display 114. Finally, at the end of a working session, the user often will check in the altered data files as a new version added to bulk storage 101 and tracked by version tracking software 100.
Other systems perform animation and rendering at a server (and/or a server farm). These systems generally require very powerful servers, because the server(s) have to render (i.e., generate an image or sequence of images for) animations produced by many client workstations. Moreover, users may suffer long delays while waiting for a scene to be rendered by the server(s).
Such systems present significant limitations. For instance, traditional server-based animation systems fail to take maximum advantage of available computing resources. While most artists have relatively high-performance workstations, systems that render animations at a server often fail to take full advantage of the processing power available on the workstation. Conversely, systems that rely on the workstation to perform the animation and rendering fail to take advantage of a principal strength of server-class computers: high-performance file input/output. While rendering generally is a very processor-intensive task, animating generally is less processor-intensive but involves accessing a large amount of data, a task at which servers generally excel.
Moreover, client-based animation systems make the management of data (including version control, security of intellectual property, etc.) quite cumbersome. Merely by way of example, if two or more programmers or artists are working on otherwise substantially identical portions of an animation, incompatible variations can be introduced. These incompatible variations are a direct result of the local temporary storage of the modified data. When it is remembered that a final animation project typically embodies many man-years of effort, the presence of incompatible variations can present severe complications.
Additionally, modern animation includes the use of expensive tools and processes to generate models of the three-dimensional shapes describing objects or characters used in the animation. Local, unsupervised, and inconsistent modification of the models in checked-out animation data can occur. Further, if a model is modified during the animation process, previously recorded animation data must, in the usual case, be completely reworked.
Furthermore, both the animation software and the work product of the animators are subject to a high risk of piracy. By storing the suite of animation software at the local PC 110 and/or allowing a user to obtain all relevant files related to an animation, the producer of a movie exposes these assets to unauthorized copying.
Hence, existing systems, which generally concentrate the animation and rendering tasks together on either a server or a client, suffer significant drawbacks.
Definition of Terms
Certain terms, as used in this disclosure, have the following defined meanings:
Model. A three-dimensional shape, usually described in terms of coordinates and mathematical data, describing the shape of any character or object. Examples of characters include actors, animals, or other beings whose animation can tell or portray the story. In the usual case, the model is provided in a neutral pose (known in the art as a “da Vinci pose”), in which the model is shown standing with limbs spread apart and head looking forward. It is understood in the art that, in many situations, the generation of the model can be extraordinarily expensive. In some cases, the model is generated, scanned, or otherwise digitized, with the spatial coordinates of numerous points on its surface recorded, so that a virtual representation of the model can be reconstructed from the data. Furthermore, the model may include connectivity data, such that the collection of points defining the model can be treated as the vertices of polygonal approximations of the surface shape of the model. The model may also make use of various mathematical smoothing and/or interpolation algorithms. Such models can include collections of spatial points ranging from hundreds of points to hundreds of thousands or more points.
Render. To make a model viewable as an image, such as by applying textures to a model and/or imaging the model using a real or virtual camera or by photographing a real object.
Rig. In general, the term “rig” is used to refer to a deformation engine that specifies how movement of the model should translate into animation of a character based on the model. This is the software and data used to deform or transform the “neutral pose” of the model into a specific “active pose” variation of the model. Taking the example of the human figure, the rig would impart to the model the skeletal joint movement, including shoulder, elbow, hand, finger, neck, head, hip, knee, and foot movement and the like. By having animation software manipulate a rig incorporated into a model, animated movement of the model is achieved.
Texture. In the usual modern case, one or more textures are mapped onto the surface of a model to provide a digital image portrayed by the model as manipulated by the rig.
Virtual Character. The model as deformed by the rig and presented by the texture in animation.
Virtual Set. The vicinity or fiducial reference point and coordinate system with respect to which the location of any element may be specified.
Prop. An object on the virtual set usually comprising a model without a rig.
Scene. A virtual set, one or more props, and one or more virtual characters.
Action. Animation associated with a scene. It should be noted that, upon editing of the final animation story, portions of an action may be distributed without regard to time, for example at the beginning, middle, and end of the animation story.
Editing. The process by which portions of actions are assembled to construct a story, narrative, or other product.
Actuator. A device such as a mouse or keyboard on a personal computer enabling input to the animation software. This term includes our novel adaptation of a “game controller” for imparting animation to characters.
BRIEF SUMMARY OF THE INVENTION Various embodiments of the invention provide novel software, systems and methods for animation and/or filmmaking. In a set of embodiments, for example, a client-server system provides the ability to control various aspects of a live-action and/or an animated scene, including cameras and/or light sources (either real and/or virtual), animated characters, and other objects. This can include, merely by way of example, moving cameras, lights and/or the like, as well as rendering animated objects (e.g., based on movements of the objects themselves and/or based on movements of cameras, lights, etc.).
One set of embodiments, for example, provides systems that can be used in the filmmaking process and/or systems for producing animated works. An exemplary system, in accordance with some embodiments, includes an animation client computer, which may comprise a first processor, a display device, at least one input device, and/or animation client software. The system may further include an animation server computer comprising a second processor and animation server software.
In certain embodiments, the animation client software may comprise instructions executable by the first processor to accept a set of input data from the at least one input device. The set of input data may indicate a desired position for an animated object, which might comprise a set of one or more polygons and/or a set of one or more textures to be applied to the set of one or more polygons. The animation client software might further comprise instructions executable by the first processor to transmit the set of input data for reception by the animation server computer.
The animation server software can comprise instructions executable by the second processor to receive the set of input data from the animation client computer and/or to process the input data to determine the desired position of the animated object. The animation server software may also comprise additional instructions executable by the second processor to calculate a set of joint rotations defining the desired position of the animated object and/or to transmit the set of joint rotations for reception by the animation client computer.
The animation client software, then, may comprise further instructions executable by the first processor to receive the set of joint rotations defining the position of the animated object and/or to calculate (perhaps based on the set of joint rotations) a set of positions for the set of one or more polygons. There may also be additional instructions executable by the first processor to apply to the set of one or more polygons at least one of the textures from the set of one or more textures to render the animated object in the desired position. The rendered animated object then may be displayed by the animation client, and/or the set of joint rotations may be stored at a data store associated with the animation server computer.
In a particular embodiment, the animation client computer may be a plurality of animation client computers including a first animation client computer and a second animation client computer. The first animation client computer might comprise the input device(s), while the second animation client computer might comprise the display device(s). The animation server computer then, might receive the set of input data from the first animation client computer and/or transmit the set of joint rotations for reception by the second animation client computer, which might be configured to receive the set of joint rotations, calculate a set of positions for the set of one or more polygons based on the set of joint rotations, apply to the set of one or more polygons at least one of the textures from the set of one or more textures to render the animated object in the desired position, and/or display on the display device the rendered animated object.
In another set of embodiments, the animation client software may comprise instructions executable by the first processor to accept a set of input data (which might indicate a desired position for an object) from the at least one input device and/or instructions executable by the first processor to transmit the set of input data for reception by the animation server computer. In some embodiments, the animation server software comprises instructions executable by the second processor to receive the set of input data from the animation client computer and/or to transmit for reception by the animation client computer a set of position data, perhaps based on the set of input data received from the animation client computer. The animation client software might further comprise instructions executable by the first processor to receive the set of position data from the animation server computer and/or to place the object in the desired position, based at least in part on the set of position data.
In various embodiments, the object can be a virtual object (including without limitation a virtual camera, a virtual light source, etc.) and/or a physical object (including without limitation a device, such as a camera, a light source, etc., in communication with the animation client computer, and/or any other appropriate object). Merely by way of example, the object may be an animated character, which might comprise a set of polygons and at least one texture, such that placing the object in the desired position comprises rendering the animated character in the desired position.
In one aspect, the set of position data might comprise data (such as joint rotations, joint angles, etc.) defining a position of the object and/or defining a deformation of a rig describing the object. In another aspect, the set of position data might comprise a position and/or orientation of a real or virtual camera; the position of the object in the scene may be affected by the position and/or orientation of the real or virtual camera, such that the placement of the object depends on the position and/or orientation of the real or virtual camera.
In certain embodiments, the animation server has an associated data store configured to hold a set of one or more object definition files for the animated object, the set of one or more object definition files collectively specifying a set of polygons and textures that define the object (e.g., the object definition files may comprise one or more textures associated with the object). Hence, the animation client software may comprise instructions executable by the first processor to download from the animation server computer at least a portion of the set of one or more object definition files necessary to render the object. In some cases, however, the downloaded portion of the set of one or more object definition files may be insufficient to independently recreate the animated object without additional data, which might be resident on the animation server computer. Similarly, in some configurations, the animation client computer might be unable to upload to the animation server computer any modifications of the at least a portion of the set of one or more object definition files.
In other configurations, the animation client software comprises further instructions executable by the first processor to modify the object definition files to produce a set of modified object definition files. Optionally, the animation server software comprises instructions executable by the second processor to receive the set of modified object definition files and/or to track changes to the set of object definition files. In some cases, the animation server computer may be configured to identify a user of the animation client computer and/or to determine whether to accept the set of modified object definition files, perhaps based on an identity of the user of the animation client computer. In other cases, the animation server computer may be configured to distribute the set of modified object definition files to a set of animation client computers comprising at least a second animation client computer.
In some embodiments, the data store is configured to hold a plurality of sets of one or more object definition files for a plurality of animated objects. Optionally, the animation server software might comprise further instructions executable by the second processor to determine whether to provide to the animation client computer one or more of the sets of the object definition files, based on, for example, a set of payment or billing information and/or an identity of a user of the animation client computer.
In other embodiments, the animation server software further comprises instructions executable by the second processor to identify a user of the animation client computer and/or to determine, (e.g., based on an identification of the user and/or a set of payment or billing information) whether to allow the animation client computer to interact with the animation server software.
In further embodiments, the animation server software comprises instructions executable by the second processor to store the set of position data at a data store (which might be associated with the animation server computer). In an aspect, the animation server software comprises instructions to store a plurality of sets of position data (each of which may be, but need not be, based on a separate set of input data) and/or to track a series of changes to a position of the object, based on the plurality of sets of position data.
In a particular set of embodiments, the animation client computer is a first animation client computer, and the system comprises a second animation client computer in communication with the animation server computer. The second animation client computer may comprise a third processor, a second display device, a second input device, and/or second animation client software.
The second animation client software may comprise instructions executable by the third processor to accept a second set of input data (which may indicate a desired position for a second object) from the second input device and/or to transmit the second set of input data for reception by the animation server computer. The animation server software may comprise instructions executable by the second processor to receive the second set of input data from the second animation client computer and/or to transmit (e.g., for reception by the second animation client computer) a second set of position data, which may be based on the second set of input data received from the second animation client computer. The second animation client software may further comprise instructions executable by the third processor to receive the second set of position data from the animation server computer and/or to place the second object in the desired position, perhaps based on the second set of position data.
The first object and the second object may be the same object. Accordingly, in some cases, the animation server software might comprise instructions to transmit the second set of position data for reception by the first animation client computer, and the animation client software on the first animation client computer might further comprise instructions to place the object in a position defined by the second set of position data, such that the first display displays the object in a position desired by a user of the second animation client computer. In other cases (e.g., if the first object and the second object are not the same object), the second set of position data might have no impact on a rendering of the first object on the first client computer, and/or the first set of position data might have no impact on a rendering of the second object on the second client computer.
A variety of input devices may be used. Exemplary devices include a joystick, a game controller, a mouse, a keyboard, a steering wheel, an inertial control system, an optical control system, a full or partial body motion capture unit, an optical, mechanical or electromagnetic system configured to capture the position or motion of an actor, puppet or prop, and/or the like.
In another set of embodiments, a system for producing animated works comprises a first animation client computer comprising a first processor, a first display device, at least one first input device, and first animation client software. The system further comprises an animation server computer in communication with the animation client computer and comprising a second processor and animation server software.
The first animation client software comprises instructions executable by the first processor to accept a first set of input data from the at least one input device; the first set of input data indicates a desired position for a first object. The first animation client software also comprises instructions to transmit the first set of input data for reception by the animation server computer. The animation server software comprises instructions executable by the second processor to receive the first set of input data from the first animation client computer, to calculate a first set of position data (perhaps based on the first set of input data received from the first animation client computer) and to render the first object, based at least in part on the first set of position data. The first animation client software further comprises instructions to display the first object in the desired position.
The system may further comprise a second animation client computer comprising a third processor, a second display device, at least one second input device, and second animation client software. The second animation client software can comprise instructions executable by the third processor to accept a second set of input data from the second input device (the second set of input data indicating a desired position for a second object) and/or to transmit the second set of input data for reception by the animation server computer. The animation server software may further comprise instructions to receive the second set of input data from the second animation client computer and/or instructions to transmit for reception by the second animation client computer a second set of position data, based on the second set of input data received from the second animation client computer.
In some cases, the second animation client software comprises instructions to receive the second set of position data from the animation server computer. The second animation client software may also comprise instructions to place the second object in the desired position for that object, based at least in part on the second set of position data.
Another set of embodiments provides animation client computers and/or animation server computers, which may be similar to those described above.
A further set of embodiments provides animation software, including software that can be used to operate the systems described above. An exemplary animation software package may be embodied on at least one computer readable medium and may comprise an animation client component and an animation server component. The animation client component might comprise instructions executable by a first computer to accept a set of input data from at least one input device at the first computer and/or to transmit the set of input data for reception by a second computer. The input data may indicate a desired position for an object.
The animation server component may comprise instructions executable by a second computer to receive the set of input data from the first computer and/or to transmit for reception by the first computer a set of position data, based on the set of input data received from the first computer. The animation client component, then, may comprise further instructions executable by the first computer to receive the set of position data from the second computer and/or to place the animated object in the desired position, based at least in part on the set of position data.
Still another set of embodiments provides methods, including without limitation methods that can be implemented by the systems and/or software described above. An exemplary method of creating an animated work comprises accepting at an animation client computer a set of input data (which might indicate a desired position for an object) from at least one input device, and/or transmitting the set of input data for reception by an animation server computer. In some cases, the method further comprises receiving at the animation server computer the set of input data from the animation client computer and/or transmitting for reception by the animation client computer a set of position data, based on the set of input data received from the animation client computer. The set of position data from the animation server computer may be received at the client computer. The method can further include placing the object in the desired position, based at least in part on the set of position data.
BRIEF DESCRIPTION OF THE DRAWINGS A further understanding of the nature and advantages of the present invention may be realized by reference to the remaining portions of the specification and the drawings wherein like reference numerals are used throughout the several drawings to refer to similar components. In some instances, a sublabel is associated with a reference numeral to denote one of multiple similar components. When reference is made to a reference numeral without specification to an existing sublabel, it is intended to refer to all such multiple similar components.
FIG. 1 is a block diagram of the prior art animation software design illustrating artist and/or programmer PCs connected to a server system for checking out animation data files, processing the animation data files, and returning the animation data files to bulk storage of the animation data at the server, the exemplary server here being shown with version tracking software;
FIG. 2 is a block diagram of an animation system in accordance with one set of embodiments;
FIG. 3 is a block diagram of an animation system in accordance with another set of embodiments;
FIG. 4A is a representation of a model that can be animated by various embodiments of the invention;
FIG. 4B is a schematic representation of a rig suitable for deforming the model of FIG. 4A, the rig here having manipulation at the neck, shoulders, elbow, hand, hips, knees, and ankles;
FIG. 4C is a schematic representation of texture for placement over the model of FIG. 4A to impart a texture to a portion of the exterior of the model in the form of a man's suit;
FIG. 5 is a representation of a scene;
FIG. 6 is a generalized schematic drawing illustrating various components of a client/server animation system, in accordance with embodiments of the invention;
FIG. 7 is a flow diagram illustrating a method of creating an animated work, in accordance with various embodiments of the invention; and
FIG. 8 is a generalized schematic drawing of a computer architecture that can be used in various embodiments of the invention.
DETAILED DESCRIPTION Various embodiments of the invention provide novel software, systems and methods for animation and/or filmmaking (the term “filmmaking” is used broadly herein to connote creating and/or producing any type of film-based and/or digital still and/or video image production, including without limitation feature-length films, short films, television programs, etc.). In a set of embodiments, for example, a client-server system provides the ability to control various aspects of a live-action and/or an animated scene, including cameras and/or light sources (either real and/or virtual), animated characters, and other objects. This can include, merely by way of example, moving cameras, lights and/or the like, as well as rendering animated objects (e.g., based on movements of the objects themselves and/or based on movements of cameras, lights, etc.).
Merely by way of example, in a set of embodiments, an animation client computer accepts input (e.g., via one or more input devices) and provides that input to an animation server computer. (In some cases, the animation client may provide raw input from the input device.) The input indicates a desired movement and/or position of an animated character, relative to other objects in a virtual scene. The animation server computer, after receiving the input, calculates a set of data (including, merely by way of example, data describing a deformation of a model, such as joint rotations and/or joint angles) that describes the desired movement and/or position of the character. (The use of joint rotations in animation is described below.) After calculating the set of joint rotations, the animation server computer transmits the set of joint rotations to the animation client computer. The animation client computer then renders the animated character in the desired position, based on the set of joint rotations, as well as a set of polygons and one or more textures defining the animated character. (As used herein, the term “polygons” broadly refers not only to the traditional polygons used to form a model of an object, but also to any other structures that commonly are used to form a model of an object, including, merely by way of example, NURBS surfaces, subdivision surfaces, level sets, volumetric representations, and point sets, among others.)
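Merely by way of illustration, the following Python sketch (with hypothetical names and toy data; no particular transport, message format, or solver is implied by this disclosure) suggests the division of labor just described, with the network round trip elided:

    import math

    def server_process(input_data):
        # Server side: translate input data into joint rotations. A real
        # system would consult large rig and animation databases here;
        # this toy "solve" merely aims one shoulder joint at a 2-D target.
        x, y = input_data["target"]
        return {"shoulder_deg": math.degrees(math.atan2(y, x))}

    def client_session():
        # Client side: gather input, send it (network elided), then render
        # locally from the returned joint rotations plus locally stored
        # polygons and textures.
        input_data = {"target": (0.5, 0.5)}      # e.g., from a controller
        rotations = server_process(input_data)   # round trip to the server
        print("render pose using", rotations)    # stand-in for rendering

    client_session()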
In this way, the animation client computer can store some of the files necessary to render the character, and can in fact render the character if provided the proper joint angles. This is beneficial, in many situations, because it relieves the animation server of the relatively processor-intensive task of rendering the animation. This arrangement, however, also allows the server to perform the joint calculations, which, while generally not as processor-intensive as the rendering process, often impose relatively high file input/output (“I/O”) requirements, due to the extensive size of the databases used to hold data for performing the calculation of joint angles.
This exemplary system, then, provides a distribution of work that takes advantage of the strength of the animation client (that is, the ability to provide a plurality of animation client computers for performing the processor-intensive rendering tasks for various animation projects), while also taking advantage of the strength of typical server computers (that is, the ability to accommodate relatively high file I/O requirements). By contrast, in systems where a central server provides rendering services, extremely powerful (and therefore expensive) servers (and in many cases, server farms) are required to provide the rendering services. Ironically, such systems often also feature relatively powerful workstations as animation clients, but the processing power of the workstations is not harnessed for the rendering.
This exemplary system provides additional advantages, especially when compared with systems on which the animation (i.e., joint rotation calculation) and rendering processes occur on the animation client. Merely by way of example, the exemplary system described above facilitates the maintenance of data. For instance, since the joint rotations for a particular animation are calculated at the animation server, they can easily be stored there as well, and a variety of version-tracking and change-management protocols may be employed. By contrast, when individual clients (as opposed to an animation server) calculate joint rotations (and/or other position data), such data must either be stored at the several client machines or uploaded to the server after calculation, and management of that data therefore becomes much more burdensome.
Moreover, because, in the exemplary system described above, the physics engine that calculates the joint rotations remains on the server, the system can be configured to prevent the animation client from accessing sufficient data to independently perform the animation process, preventing unauthorized copying of animations and thereby providing greater security for that intellectual property.
Because various embodiments of the invention can be used to create animated works, it is helpful to provide a brief overview of the animation process. Referring first to FIG. 4A, a model 10 in the form of a human figure is disclosed. Model 10 includes face 11, neck 12, and arms 14 with elbows 15 and wrists 16 leading to hands 17. The model further includes hips 18, knees 19, and ankles 20. In a virtual character, the “model” (or “virtual model”) is a geometric description of the shape of the character in one specific pose (commonly called the “model pose,” “neutral pose,” or “reference pose”). The neutral pose used in the model is commonly a variation on the so-called “da Vinci pose,” in which the model is shown standing with eyes and head looking forward, arms outstretched, and legs straight with feet approximately shoulder width apart.
The model can be duplicated in any number of ways. In one common prior art process, a clay model or human model is scanned or digitized, recording the spatial coordinates of numerous points on the surface of the physical model so that a virtual representation of the model may be reconstructed from the data. It is to be understood that such models can be the product of great effort, taking man-years to construct.
The model also includes connectivity data (also called an “edge list”). This data is recorded at the time of scanning or inferred from the locations of the points, so that the collection of points can be treated as the vertices of a polygonal approximation of the surface shape of the original physical model. It is common, but not required, in the prior art for various mathematical smoothing and interpolation algorithms to be performed on the virtual model, so as to provide for a smoother surface representation than is achieved with a pure polygonal representation. One skilled in the art will appreciate that virtual models commonly include collections of spatial coordinates ranging from hundreds of points to hundreds of thousands or more points.
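As a minimal illustration only (with invented toy coordinates; an actual model would contain far more points), such point and connectivity data might be represented as follows in Python:

    # Sketch: a model as spatial points plus connectivity (an edge list).
    vertices = [
        (0.0, 0.0, 0.0),    # point 0 (e.g., near the hip)
        (0.0, 1.0, 0.0),    # point 1 (e.g., near the shoulder)
        (1.0, 1.0, 0.0),    # point 2 (e.g., near the hand)
    ]
    edges = [(0, 1), (1, 2)]     # which points are connected
    triangles = [(0, 1, 2)]      # polygonal approximation of the surface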
Referring to FIG. 4B, a rig 30 is illustrated which is compatible with model 10 shown in FIG. 4A. Rig 30 includes head 31, neck 32, eyes 33, shoulders 34, elbows 35, and wrists 36. Further, hips 38, knees 39, and ankles 40 are also disclosed. Simply stated, rig 30 is mathematically disposed on model 10 so that animation software can move the rig 30 at neck 32, shoulders 34, elbows 35, and wrists 36. Further, movement of hips 38, knees 39, and ankles 40 can also occur through manipulation of the rig 30.
The rig 30 enables the model 10 to move with realistic changes of shape. The rig 30 thus turns the model 10 into a virtual character, commonly required to move and bend, such as at the knees or elbows, in order to convey a virtual performance. The software and data used to deform (or transform) the “neutral pose” model data into a specific “active pose” variation of the model is commonly called a “rig” or “IK rig” (where “IK” is a shortened form of “Inverse Kinematics”).
“Inverse Kinematics” (as in “IK Rig”) is a body of mathematics that enables the computation of joint angles (or joint rotations) from joint locations and skeletal relationships. “Forward Kinematics” is the term of art for computing joint locations based on a collection of joint angles and skeletal relationships.
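The following is a minimal worked example in Python (a planar two-bone arm with assumed unit bone lengths; real rigs operate in three dimensions on full skeletons) contrasting the two computations:

    import math

    L1, L2 = 1.0, 1.0          # assumed upper-arm and forearm lengths

    def forward(shoulder, elbow):
        # Forward kinematics: joint angles (radians) -> joint locations.
        ex, ey = L1 * math.cos(shoulder), L1 * math.sin(shoulder)
        hx = ex + L2 * math.cos(shoulder + elbow)
        hy = ey + L2 * math.sin(shoulder + elbow)
        return (ex, ey), (hx, hy)             # elbow and hand positions

    def inverse(hx, hy):
        # Inverse kinematics: desired hand location -> joint angles.
        c = (hx * hx + hy * hy - L1 * L1 - L2 * L2) / (2 * L1 * L2)
        elbow = math.acos(max(-1.0, min(1.0, c)))
        shoulder = math.atan2(hy, hx) - math.atan2(
            L2 * math.sin(elbow), L1 + L2 * math.cos(elbow))
        return shoulder, elbow

    print(forward(*inverse(1.2, 0.8)))    # hand comes back near (1.2, 0.8)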
To a programmer skilled in the art, a rig is a piece of software which has as its inputs a collection of joint rotations, joint angles, and/or joint locations (“the right elbow is bent 30 degrees” or “the tip of the left index finger is positioned 2 cm above the center of the light switch”), the skeletal relationships between the joints (“the head bone is connected to the neck bone”), and a neutral pose representation of the virtual model, and has as its output a collection of spatial coordinates and connectivity data describing the shape that the virtual actor's body takes when posed as described by the input data.
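A drastically reduced sketch of that input/output contract might look as follows (Python; the per-vertex deformation is a stub rotation about one axis, and all names are illustrative assumptions):

    import math

    def rotate_z(point, degrees):
        # Rotate a point (x, y, z) about the z-axis.
        r = math.radians(degrees)
        x, y, z = point
        return (x * math.cos(r) - y * math.sin(r),
                x * math.sin(r) + y * math.cos(r), z)

    def rig(joint_rotations, bindings, neutral_vertices):
        # Inputs: pose parameters, skeletal bindings, and the neutral pose.
        # Output: posed spatial coordinates (connectivity is unchanged).
        return [rotate_z(v, joint_rotations[bindings[i]])
                for i, v in enumerate(neutral_vertices)]

    neutral = [(0.0, 1.0, 0.0), (0.0, 2.0, 0.0)]    # two toy vertices
    print(rig({"elbow": 30.0}, ["elbow", "elbow"], neutral))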
To an artist skilled in the art, a rig is a visual representation of the skeleton of the virtual actor, with graphical or other controls which allow the artist to manipulate the virtual actor. In the case of an “Inverse Kinematics Rig,” the artist might place the mouse pointer on the left index finger of the virtual actor and drag the left index finger across the screen so as to cause the virtual actor's arm to extend in a pointing motion. In the case of a “Forward Kinematics Rig,” the artist might click on the elbow of the virtual character and bend or straighten the rotation of the elbow joint by dragging the mouse across the screen or by typing a numeric angle on the keyboard.
Referring to FIG. 4C, texture 50 is illustrated here in the form of only a man's suit, having a coat 51 and pants 52. In actual fact, texture 50 would include many other surfaces. For example, a human face, socks, shoes, and hands (possibly with gloves) would all be part of the illustrated texture 50. It is common for a virtual actor to be drawn (or “rendered”) to the screen with, for example, blue eyes and a red jacket. The virtual model described earlier contains purely spatial data. Additional data and/or software, commonly called “Textures,” “Maps,” “Shaders,” or “Shading,” is employed to control the colors used to render the various parts of the model.
In the earliest forms of the prior art, the color (or “texture”) information was encoded directly into the model, in the form of a “Vertex Color.” A Vertex Color is commonly an RGB triplet (Red 0-255, Green 0-255, Blue 0-255) assigned to a specific vertex in the virtual model. By assigning different RGB triplets to different vertices, the model may be colored in such a way as to convey blue eyes and a red dress.
While vertex colors are still used in the creation of virtual characters, in the current state of the art it is more common to make use of “texture coordinates” and “texture maps.” “Texture coordinates” are additional data that is recorded with the vertex location data and connectivity data in order to allow an image (or “texture”) to be “mapped” onto the surface.
In order to provide realistic coloring to the eyes of the virtual character (perhaps including veins in the whites of the eyes and/or color striations in the iris), a digital image of an eye will be acquired (possibly via a digital camera or possibly by an artist who paints such a picture using computer software). The digital image of the eye is a “texture map” (or “texture”). The vertices in the model that comprise the surface of the eye will be tagged with additional data (“texture coordinates”) that is analogous to latitude and longitude coordinates on a globe. The texture is “mapped” onto the surface by use of the texture coordinates.
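Merely to fix ideas, the following Python sketch illustrates both schemes with invented toy values (the nearest-pixel lookup is a simplification of real texture filtering):

    # Vertex colors: an RGB triplet stored per vertex.
    vertex_colors = {0: (70, 130, 255),     # a blue iris vertex
                     1: (200, 30, 40)}      # a red jacket vertex

    # Texture coordinates: (u, v) pairs, analogous to longitude/latitude,
    # stored per vertex so an image can be mapped onto the surface.
    uv = {0: (0.50, 0.50), 1: (0.25, 0.75)}

    def sample(texture, u, v):
        # Map (u, v) in [0, 1] x [0, 1] to a pixel of the texture image.
        h, w = len(texture), len(texture[0])
        return texture[int(v * (h - 1))][int(u * (w - 1))]

    eye_texture = [[(255, 255, 255), (90, 60, 30)],   # 2x2 stand-in image
                   [(80, 120, 200), (0, 0, 0)]]
    print(sample(eye_texture, *uv[0]))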
Referring back to FIG. 4B, when the virtual character is instructed to look to the left (for example, by rotating the neck controls 32 or eye controls 33 in the rig), the virtual model is deformed in a manner which rotates all of the vertices making up the head 11 to the left. The head texture is then rendered in the desired location on the screen based upon the vertex locations and the texture coordinates of those vertices.
Referring to FIG. 5, an animated scene comprises some or all of the following: a virtual set (set 60 being shown), one or more props (floor 69, wall 70, chair 61, and table 62 being shown), and one or more virtual characters (model 10, manipulated by rig 30 and having texture 50, being shown [here designated only by the numeral 10]).
Each of the virtual sets, props, and characters has a fiducial reference point with respect to which the location of the element may be specified. Here the fiducial reference point is shown at 65. The virtual set 60, props (see chair 61 and table 62), and character 10 are assembled together by specifying their spatial locations using a shared coordinate system from fiducial 65. The choice of coordinate system is arbitrary, but a common practice is to locate the virtual set at the origin of the coordinate system.
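A minimal sketch of such assembly follows (Python, with invented coordinates; placement is reduced to translation only, the set's fiducial serving as the origin):

    scene = {
        "set":   (0.0, 0.0, 0.0),    # virtual set 60 at the origin
        "chair": (2.0, 0.0, 1.5),    # prop locations from fiducial 65
        "table": (3.0, 0.0, 1.0),
        "actor": (2.5, 0.0, 2.0),    # character 10
    }

    def to_world(element, local_point):
        # Express a point given in an element's local frame in the shared
        # scene coordinate system (rotation omitted for brevity).
        ox, oy, oz = scene[element]
        x, y, z = local_point
        return (ox + x, oy + y, oz + z)

    print(to_world("chair", (0.0, 0.45, 0.0)))   # seat height in world space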
Often, the background (or “virtual set”) is essentially a virtual character with either no rig (in the case of a purely static virtual set) or what is commonly a very simple rig (where the joint angles might control the opening angles of a door 67, or the joint locations might control the opening height of a window 68). It is common to embellish the scene used in an action with a variety of props. As with the background, props are again essentially virtual characters, which are used to represent inanimate objects.
For the purposes of creating a motion picture sequence, animation data is associated with the elements of a scene to create an action. A sequence of images may be constructed by providing the rig of character 10 with a sequence of input data (“animation data”), such as 24 sets of joint angles per second, so as to produce a 24-frame-per-second movie. In the prior art, the animation data provided to the rigs are commonly compressed through the use of various interpolation techniques.
For example, it is common in the prior art to compress the animation data into “key frames.” A key frame is typically associated with a specific point in the timeline of the sequence of images (“t=2.4 seconds”) and specifies joint angles or joint locations for some or all of the joints in the rig. Any joints (or more generally input parameters) whose values are not specified in this key frame interpolate their values at t=2.4 seconds from other preceding and following key frames that do specify values for those joints or input parameters. The various families of mathematical formulas used to interpolate between key frame values (such as “Bezier curves” and “b-Splines”) are well known to practitioners of the art.
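As a minimal sketch (Python), the following performs linear interpolation between two key frames; as noted above, production systems commonly use Bezier curves or b-Splines instead:

    keys = [(2.4, 30.0), (3.0, 90.0)]    # (time in seconds, elbow angle)

    def angle_at(t):
        # Interpolate the elbow angle between the two key frames.
        (t0, a0), (t1, a1) = keys
        if t <= t0:
            return a0
        if t >= t1:
            return a1
        return a0 + (a1 - a0) * (t - t0) / (t1 - t0)

    print(angle_at(2.7))                 # 60.0, halfway between the keys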
Several methods are commonly used by artists to specify the input data provided to the rig. Merely by way of example, in the “animation” method, the artist indicates a specific point in the timeline (“t=2.4 seconds”), adjusts one or more joint angles or locations (for example using the keyboard or by manipulating on-screen controls using a mouse), and “sets a key frame” on those joint angles or locations. The artist then moves to a different point on the timeline (“t=3.0 seconds”) and again adjusts joint angles or locations before “setting a key frame.”
Once the artist has set two or more key frames, the artist can move an indicator on the timeline or press a “play” button to watch the animation that she has created. By repeatedly adding, modifying, or moving key frames while repeatedly watching the playback of the animation, the artist can create the desired performance.
In the “motion capture” method, the artist performs the motion in some manner while the computer records the motion. Common input devices for use with motion capture include a full-body suit equipped with sensing devices to record the physical joint angles of a human actor, and so-called “Waldo” devices which allow a skilled puppeteer to control a large number of switches and knobs with their hands (Waldo devices are most commonly used for recording facial animations). It is common to perform multiple captures of the same motion, during which sequence of captures the actor repeatedly reenacts the same motions until data is collected which is satisfactory both artistically and technically.
In the “procedural” method, custom software is developed which generates animation data from high-level input data. Procedural animation is commonly used when animating non-human actors such as flocks of birds or falling rocks. In FIG. 5, bird 63 illustrates this technique.
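A sketch of the idea follows (Python; the circling-flock formula is invented purely for illustration and is not taken from this disclosure):

    import math

    def flock_positions(t, count=5, radius=3.0):
        # High-level inputs (count, radius, time) drive every bird; no
        # artist keys individual joints.
        return [(radius * math.cos(t + i), 2.0 + 0.1 * i,
                 radius * math.sin(t + i)) for i in range(count)]

    print(flock_positions(0.0)[0])       # bird 0 at time zero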
In the “hybrid” method, the motion capture and/or procedural method is used to specify the initial data. For example, the initial movement of bird 63 could be generated via the procedural method. The data obtained via the motion capture or procedural method is then compressed in a manner that makes it technically similar to (or compatible with) data obtained via the animation method. For example, presuming that bird 63 were going to interact with character 10 in scene 60, modification of the procedural animation of bird 63 would occur. Once the initial data has been compressed, it is then, in the hybrid method, manipulated, re-timed, and/or extended through the use of the animation method.
The animation software often plays back previously specified animations by interpolating animation data at a specific point in time, providing the interpolated animation data to the rigs, making use of the rigs to deform the models, applying textures to the models, and presenting a rendered image on the display. The animation software then advances to a different point in the timeline and repeats the process.
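That playback loop might be sketched as follows (Python; the timeline, deformation, and display are reduced to stubs):

    import math

    class Timeline:
        duration = 1.0                      # one second of animation
        def interpolate(self, t):
            return {"elbow_deg": 90.0 * t}  # stub interpolated pose

    def deform(pose):
        # Stand-in for rig-driven deformation of the model.
        a = math.radians(pose["elbow_deg"])
        return [(math.cos(a), math.sin(a))]

    def play(timeline, fps=24):
        for frame in range(int(timeline.duration * fps)):
            pose = timeline.interpolate(frame / fps)   # interpolate
            vertices = deform(pose)                    # deform the model
            print(f"frame {frame}: draw {vertices}")   # texture and render

    play(Timeline())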
Based on this general description of the animation process, we turn now to FIG. 2, which illustrates a client/server animation system in accordance with a set of embodiments. The system comprises a plurality of animation client computers 200 in communication (e.g., via a network) with server computer 210, as shown in FIG. 2. (The network can be any suitable network, including without limitation a local area network, wide area network, wired and/or wireless network, the Internet, an intranet or extranet, etc. Those skilled in the art will appreciate that any of a variety of connection facilities, including without limitation such networks, can provide communication between the server 210 and the clients 200.) In accordance with some embodiments, models 10, rigs 30, and/or textures 50 (and/or portions thereof) may be stored at the client 200, e.g., at model, rig, and texture storage 201. In this particular embodiment, such storage has advantages.
Presuming that a major studio is either subscribing to or alternatively maintaining models 10, rigs 30, and textures 50, great expense and effort can go into developing these discrete components of a character. Because of that effort and expense, the owner or operator of the client 200 may not choose to share the contents of model, rig, and texture storage 201 with anyone, including the provider of animation server 210.
The animation client computer may, in some embodiments, include rendering software 203 operatively connected to model, rig, and texture storage 201. The rendering software may be part of an animation client application. Furthermore, a controller 202 (here shown as a keyboard and mouse) operates through network connection 205. It should be noted that any suitable controller, including those described in U.S. patent application Ser. No. ______ (attorney docket number 020071-000210), already incorporated by reference, can be used in accordance with various embodiments.
The animation server computer 210 includes animation data storage 211, animation software 212, and/or version tracking software 214. Presuming that the artist or programmer has created the models 10, rigs 30, and textures 50, manipulation of an Action on a scene 60 can either occur from the beginning (de novo) or, alternately, the artist and/or programmer may check out a previous version of the Action through the network connection 205 by accessing animation server 210 and retrieving from animation data storage 211 the desired data to animation software 212.
In either event, utilizing the animation techniques described above, input data (e.g., from the controller 202 and/or an actuator thereof) is received by the client 200. In some cases, the input data may be described by an auxiliary coordinate system, in which case the input data may be processed as described in U.S. patent application Ser. No. ______ (attorney docket number 020071-000210), already incorporated by reference. Other processing may be provided as well, as necessary to format the raw input data received from the controller 202.
The animation client 200 in turn transmits the input (either as raw input data and/or after processing by the animation client computer 200), e.g., via network connection 205, to the animation server computer 210 and, more particularly, to animation software 212 (which might be incorporated in animation server software). Processing of the selected Action will occur at animation software 212 within animation server 210. Such processing will utilize the techniques described above. In particular, a set of joint rotations may be calculated, based on the input data. The joint rotations will describe the position and/or motion desired in the Action.
Playback will occur by having animation software 212 emit return animation information through network connection 205 and then to rendering software 203. Rendering software 203 will access model, rig, and texture storage 201 to display at display 204 the end result of modifications introduced by the artist and/or programmer at the client 200.
Thus, it will be seen that when an Action is modified at a client 200, if the models, rigs, and/or textures are only available at client 200, then replay can only occur at client 200. This replay is not possible with the information possessed by server 210, because the model, rig, and/or texture storage 201 is not resident in or available to animation server 210. (It should be noted, however, that in various embodiments, some or all portions of the models, rigs, and/or textures may be stored at the animation server in addition to, or instead of, at the animation client.) Alternatively, if the models, rigs, and textures are available at client 200 and PC 200A, then replay can occur at both client 200 and client 200A, allowing for cooperative work activities between two users.
It should be understood that animation server 210 (and/or another server in communication therewith) can provide a number of services. For example, the server 210 can provide access control; for instance, client 200 is required to log in to server 210. Furthermore, a subscription may be required as a prerequisite for access to server 210. In some cases, server 210 can deliver different capabilities for different users. By way of example, PC 200A can be restricted to modification of character motion while PC 200 modifies animation of bird 63.
Generally, theclient200 controls the time when playback starts and stops for any individual Action. Moreover, theclient200 may arbitrarily change the portion of the Action being worked on by simply referring to that Action at a specified time period.
It will further be realized that with the rendering software 203 and the model, rig, and/or texture data in storage 201 resident at the client, the data transmitted over the network through network connection 205 is maintained at a minimum. Specifically, just a small section of data need be transmitted. This data will include that which is needed to play the animation (e.g., a set of joint rotations, etc.). As the rendering software 203 and some or all of the model, rig, and/or texture storage 201 may be resident at PC 200, only small batches of data need be transmitted over the Internet.
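Merely to illustrate the point, a hypothetical per-frame payload might look as follows (Python; the field names and JSON encoding are assumptions, not a required wire format):

    import json

    payload = {"action": "scene60_take3", "t": 2.4,
               "joint_rotations": {"elbow": 30.0, "knee": 12.5}}
    wire = json.dumps(payload)
    print(len(wire), "bytes")    # tens of bytes per frame, versus the
                                 # megabytes of model, rig, and texture
                                 # data already resident at the client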
It will be understood that the server 210 is useful in serving multiple clients. Further, the server 210 can act as a studio, providing the artist and/or programmer at client 200 with a full range of services, including storage and delivery of updated model, texture, and rig data to client 200 and client 200A. In a set of embodiments, the server 210 will store all animation data. Furthermore, through version tracking software 214, animation data storage 211 will provide animation data (such as joint rotations, etc.) to the respective client 200A on an as-needed basis.
Referring now to FIG. 3, a system in accordance with another set of embodiments is illustrated. Specifically, client 300 includes a network connection 305, a controller 302, and a display 304. Server 310 includes animation data storage 211, version tracking software 214, and animation software 212. Additionally, server 310 includes rendering software 303.
In this case, the manipulation of the animation software from controller 302 through network connection 305 of the client 300 is identical to that shown in FIG. 2. As well, the animation software for calculating joint rotations, etc. is resident on the server 310. In the embodiments illustrated by FIG. 3, however, the rendering component also resides on the server 310. Specifically, rendering software 303 will generate actual images (e.g., bitmaps, etc.), which images will be sent through the network to network connection 305 and may be displayed thereafter at display 304.
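A reduced sketch of this variant follows (Python; a 2x2 pixel grid stands in for a rendered frame, and all names are illustrative):

    def server_render(joint_rotations):
        # Server 310: animate and render, producing an actual image.
        white, blue = (255, 255, 255), (0, 0, 255)
        return [[white, blue],
                [blue, white]]      # rows of RGB pixels (a "bitmap")

    def thin_client_display(bitmap):
        # Client 300: no models, rigs, or textures needed; just show pixels.
        for row in bitmap:
            print(row)              # stand-in for display 304

    thin_client_display(server_render({"elbow": 30.0}))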
FIG. 6 provides a generalized schematic diagram of a client/server system in accordance with some embodiments of the invention. The system 600 includes an animation server computer 605, which may be a PC server, minicomputer, mainframe, etc. running any of a variety of available operating systems, including UNIX™ (and/or any of its derivatives, such as Linux, BSD, etc.), various varieties of Microsoft Windows™ (e.g., NT™, XP™, 2003, Vista™, Mobile™, CE™, etc.), Apple's Macintosh OS™, and/or any other appropriate server operating system. The animation server computer also includes animation server software 610, which provides animation services in accordance with embodiments of the invention. The animation server computer 605 may also comprise (and/or have associated therewith) one or more storage media 615, which can include storage for the animation server software 610, as well as a variety of associated databases (such as a database of animation data 615a, a data store 615b for model data, such as the polygons and textures that describe an animated character, a data store 615c for scene data, and any other appropriate data stores).
The system 600 further comprises one or more animation client computers 620, one or more of which may include local storage (not shown), as well as animation client software 625. (In some cases, such as a case in which the animation client computer 620 is designed only to provide input and display a rendered image, the rendering subsystem may reside on the animation server 605, as described with respect to FIG. 3, for example. In this way, thin clients, such as wireless phones, PDAs, etc., may be used to provide input even if they have insufficient processing power to render the objects.)
The animation client computer 620 thus may be, inter alia, a PC, workstation, laptop, tablet computer, PDA, wireless phone, etc. running any appropriate operating system (such as Apple's Macintosh OS™, UNIX and/or its derivatives, Microsoft Windows™, etc.). Each animation client 620 may also include one or more display devices 630 (such as monitors, LCD panels, projectors, etc.) and/or one or more input devices 635 (such as the controllers described above and in U.S. patent application Ser. No. ______ (attorney docket number 020071-000210), already incorporated by reference, as well as, to name but a few examples, a telephone keypad, a stylus, etc.).
In accordance with a set of embodiments, the system 600 may operate in the following exemplary manner, which is described by additional reference to FIG. 7, which illustrates a method 700 of creating an animated work in accordance with some embodiments of the invention. (It should be noted that, while the method 700 of FIG. 7 is described in conjunction with the system 600 of FIG. 6, that description is provided for exemplary purposes only, and the methods of the invention are not limited to any particular hardware or software implementation. Likewise, the operation of the system 600 of FIG. 6 is not limited to the described methods.)
The animation client software 625 comprises instructions executable by the animation client computer 620 to accept a set of input data from one or more input devices (block 705). The input data may, for example, indicate a desired position of an object in a scene (which may be a virtual scene, a physical set, etc.). In particular embodiments, the object may be an animated object, which may comprise a plurality of polygons and/or textures, as described above. The animation client software optionally may process the input data, for example as described above. The animation client software then transmits the set of input data for reception by the animation server computer (block 710).
The animation server computer 605 (and, more particularly in some cases, the animation server software 610) receives the input data (block 715). The animation server software 610 calculates a set of position data (block 720), based on the received input data. In some cases, calculating the set of position data can include processing the input data to determine a desired position of an animated object and/or calculating a set of joint rotations defining that desired position (and/or defining the deformation of a rig defining the character, in order to place the character in the desired position). In other cases, including merely by way of example, if the object is a light or a camera on a physical set, there may be no need to calculate any animation data; the position can be determined based solely on the input data, perhaps in conjunction with a current position of the object.
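Merely by way of example, the following Python sketch suggests how such a calculation might look in the simplest case, deriving joint rotations that place the end of a two-link limb at a desired two-dimensional position (the link lengths are assumptions, and a production rig would of course be far richer):

    import math

    def two_link_joint_rotations(x, y, l1=1.0, l2=1.0):
        # Law-of-cosines inverse kinematics for a planar two-link limb:
        # returns (shoulder, elbow) rotations, in radians, that place the
        # end of the limb at the desired position (x, y).
        d2 = x * x + y * y
        cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        cos_elbow = max(-1.0, min(1.0, cos_elbow))  # clamp unreachable targets
        elbow = math.acos(cos_elbow)
        shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                                 l1 + l2 * math.cos(elbow))
        return shoulder, elbow

    # Example: compute the joint rotations placing the limb's end at (1.2, 0.8).
    print(two_link_joint_rotations(1.2, 0.8))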
In yet other cases, the object may be an animated character (or other object in a virtual scene), and the position of the object in the scene may be affected by the position of a virtual camera and/or light source. In these cases, the position data might comprise data about the position and/or orientation of the virtual camera/light.
The animation server computer 605 (perhaps based on instructions from the server software 610) then transmits the set of position data (e.g., joint rotations, etc.) for reception by the animation client 620 (block 725). When the animation client computer receives the set of position data (block 730), the animation client software 625 is responsible for placing the object in the desired position (block 735). This procedure necessarily will vary according to the nature of the object. Merely by way of example, if the object is an animated character, placing the object in the desired position generally will comprise rendering the animated character in the desired position, for example by calculating a set of positions for the polygons that describe the character and/or by applying any necessary textures to the model. If the object is a physical object, such as a light, placing the object in the desired position may require interfacing with a movement system, which is not illustrated in FIG. 6 but examples of which are described in detail in U.S. patent application Ser. No. ______ (attorney docket number 020071-000210), already incorporated by reference.
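Merely by way of example, the client-side placement step for a virtual object might, in a minimal Python sketch (with an illustrative two-dimensional model standing in for a full set of polygons and textures), apply a received joint rotation to the vertices of a model:

    import math

    def place_vertices(vertices, rotation, pivot=(0.0, 0.0)):
        # Rotate the model's 2-D vertices about a joint pivot by the received
        # rotation (block 735); a full renderer would transform 3-D polygons
        # and apply textures as well.
        c, s = math.cos(rotation), math.sin(rotation)
        px, py = pivot
        return [(px + c * (x - px) - s * (y - py),
                 py + s * (x - px) + c * (y - py)) for x, y in vertices]

    # Example: rotate a small triangle 90 degrees about the origin.
    print(place_vertices([(1, 0), (1, 1), (2, 0)], math.pi / 2))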
In some cases, the object (for instance, if the object is a virtual object) may be displayed on a display device 630 (block 740). In particular, the object may be displayed in the desired position. In other cases, the client 620 may be configured to upload the rendered object to the animation server 605 for storage and/or distribution to other computers (which might be, inter alia, other animation servers and/or clients).
The system 600 may provide a number of other features, some of which are described above. In some cases, the animation server 605 can provide animation services to a plurality of animation client computers (e.g., 620a, 620b). In an exemplary embodiment, input may be received at a first client 620a, and the position data may be transmitted to a second client 620b for rendering and/or display. (Optionally, the plurality of client computers 620 may perform rendering tasks in parallel for a given scene.) In another embodiment, each client 620a, 620b accepts input and receives position data, such that two artists may collaborate on a given character and/or scene, each being able to view changes made by the other. In yet another embodiment, each client 620a, 620b may interact individually with the server 605, with each client 620 providing its own input and receiving position data based on that input. (That is, the position data received by one client has no impact on the rendering of an object on another client.)
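Merely by way of example, the following Python sketch (whose structures are purely illustrative) shows how a server might relay position data under two of the modes described above, delivering one client's updates either to every collaborator or only back to the originating client:

    class PositionRelay:
        def __init__(self):
            self.clients = {}  # client id -> delivery callback

        def register(self, client_id, deliver):
            self.clients[client_id] = deliver

        def relay(self, sender, position_data, mode="individual"):
            if mode == "individual":
                # Each client's position data affects only that client's rendering.
                self.clients[sender](position_data)
            elif mode == "collaborate":
                # Every registered client sees every change.
                for deliver in self.clients.values():
                    deliver(position_data)

    # Example: input accepted at client 620a is also rendered at client 620b.
    relay = PositionRelay()
    relay.register("620a", lambda d: print("client 620a renders", d))
    relay.register("620b", lambda d: print("client 620b renders", d))
    relay.relay("620a", {"object": "hero", "joints": [0.1, 0.7]}, mode="collaborate")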
As noted above, in some cases, the animation server software 610 may be configured not only to calculate the position data, but also to render the object (which can include, merely by way of example, not only applying one or more textures to a model of the object, but also calculating the positions of the polygons that make up the model, based on the position data). Hence, in such cases, the rendered object may be provided to an animation client computer 620 (which may or may not be the same client computer that provided the input on which the position data is based), which then can display the object in the desired position. In some cases, the animation server 605 might render a first object for a first client 620 and might merely provide to a second client a set of position data describing a desired position of a second object.
In some embodiments, one or more of the data stores (e.g., data store 615c) may be used to store object definition files, which can include some or all of the information necessary for rendering a given object, such as the model, rig, polygons, textures, etc. describing that object. An animation client 620 then can be configured to download the object definition files (and/or a subset thereof) from the server 605 to perform the rendering of the object in accordance with embodiments of the invention. It should be noted, however, that for security, the downloaded object definition files (and/or portions thereof) may be insufficient to allow a user of the client 620 to independently recreate the object without additional data resident on the server.
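Merely by way of example, an object definition file and the security partitioning described above might be sketched in Python as follows (the field names are assumptions; the point is that the client-visible subset omits data, here the rig, that remains resident on the server):

    from dataclasses import dataclass, field

    @dataclass
    class ObjectDefinition:
        name: str
        polygons: list      # downloadable: needed by the client to render
        textures: list      # downloadable: needed by the client to render
        rig: dict = field(default_factory=dict)  # server-only deformation rig

        def client_subset(self):
            # Return only the fields an animation client needs for rendering;
            # without the rig, the object cannot be independently recreated.
            return {"name": self.name, "polygons": self.polygons,
                    "textures": self.textures}

    hero = ObjectDefinition("hero", [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                            ["hero_skin.png"], rig={"spine": "..."})
    print(hero.client_subset())  # the rig never leaves the server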
The system 600 may be configured such that a user of the client 620 is not allowed to modify these object definition files locally at the client 620 and/or, if local modification is allowed, the client 620 may not be allowed to upload modified object definition files. In this way, the system 600 can prevent the unauthorized modification of a “master copy” of the object definition files. Alternatively, the server software 610 may be configured to allow modified object definition files to be uploaded (and thus to receive such files), perhaps based on an identification of the user of the animation client computer; that is, the server 605 may be configured to identify the user and determine whether the user has sufficient privileges to upload modified files. (It should be noted that the identification, authentication and/or authorization of users may be performed by the animation server 605 and/or by another server, which might communicate such identification, authorization and/or authentication data to the animation server 605.)
Similarly, in other embodiments, the animation server software 610 may be configured to determine whether to allow an animation client 620 to interact with the server software 610. Merely by way of example, the animation server software 610 may control access to rendered objects, object definition files, the position data, and/or the software components used to create any of these, based on any number of factors. For instance, the server software 610 (and/or another component) may be configured to identify, authenticate and/or authorize a user of the animation client 620. Based on an identity of the user of a client computer 620 (as well, in some cases, as the authentication status of the user and/or the user's authorization), the animation server software 610 may determine whether it will receive input from the client computer 620, whether it will provide position data to the animation client computer 620 and/or whether it will allow the animation client computer 620 to access files and/or animation services on the animation server 605.
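Merely by way of example, such a privilege check might be sketched in Python as follows (the roles and privilege names are assumptions, not part of the invention):

    PRIVILEGES = {
        "senior_artist": {"download", "upload_modified"},
        "artist": {"download"},
    }

    def may_upload(user_role):
        # Decide whether an identified, authenticated user may upload
        # modified object definition files to the server.
        return "upload_modified" in PRIVILEGES.get(user_role, set())

    print(may_upload("senior_artist"))  # True
    print(may_upload("artist"))         # False: local edits cannot replace the master copy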
Alternatively and/or in addition, the animation server 605 may be configured to provide for-fee services. Hence, the animation server software (and/or another component) may be configured to evaluate a set of payment and/or billing information (which may be, but is not necessarily, associated with an identity of a user of the animation client computer 620), and based on the set of payment and/or billing information, determine whether to allow the client 620 to interact with the server software 610 (including, as mentioned above, whether it will accept/provide data and/or allow access to files and/or services). The set of billing and/or payment data can include, without limitation, information about whether a user has a subscription for animation services and/or files, whether the user has paid a per-use fee, whether the user's account is current, and/or any other relevant information.
In some cases, various levels of interaction with the server software 610 may be allowed. Merely by way of example, if the animation server computer 605 stores a plurality of sets of rendered objects and/or object definition files (wherein, for example, each set of files comprises information describing a different animated character), the animation server 605 may allow an unregistered user to download files for a few “free” characters, while paid subscribers have access to files for an entire library of characters. (It should be appreciated that there may be various levels of subscription, with access to files for correspondingly various numbers of characters.) Similarly, a user may be allowed to pay a per-character fee for a particular character, upon which the user is allowed to download the set of files for that character. (Such commerce functionality may be provided by a separate server, third-party service, etc.) In some cases, if a user has a subscription (and/or pays a per-use fee), the user (and/or an animation client computer operated by the user) may be given access to services and/or data on the animation server. Merely by way of example, if the user pays a per-use fee to obtain object definition files for a given animated character, that user may use animation services (including without limitation those described above) for that animated character. As another example, a user may have a monthly subscription to use files for a set of animated characters, and the user may use the animation server as part of the monthly subscription. Other for-fee uses are possible as well; a user may pay, for example, a per-use and/or subscription fee for access to the services of an animation server, apart from any fees that might be paid for the use of object definition files.
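Merely by way of example, such tiered access might be sketched in Python as follows (the free-character list and the billing record are both assumptions made for illustration):

    FREE_CHARACTERS = {"rabbit", "teapot"}

    def may_download(character, account):
        # Gate access to a character's object definition files on the
        # user's billing and/or payment information.
        if account.get("subscription_current"):
            return True                      # subscribers see the entire library
        if character in account.get("purchased", set()):
            return True                      # a per-character fee was already paid
        return character in FREE_CHARACTERS  # unregistered users get a few characters

    print(may_download("hero", {"subscription_current": False, "purchased": {"hero"}}))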
The animation server software 610 (and/or another software component) may also be configured to perform change tracking and/or version management of object definition files (and/or rendered objects, position data, etc.). In some embodiments, any of several known methods of change tracking and/or version management may be used for this purpose. In particular, the change tracking/version management functions may be configured to allow various levels of access to files based on an identity of a user and/or a project that the identified user is working on. Merely by way of example, an artist in a group working on a particular character, scene, film, etc. may be authorized to access (as well, perhaps, as download) files related to that character, scene or film, while a manager or senior artist might be authorized to modify such files. An artist working on another project might not have access to any such files.
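Merely by way of example, a rudimentary version-management facility of the kind described above might be sketched in Python as follows (any of several known change-tracking methods could be substituted; the structures here are illustrative only):

    class VersionStore:
        def __init__(self):
            self.versions = {}  # file path -> list of (author, data) check-ins

        def check_in(self, path, author, data):
            # Append a new version, recording who made the change.
            self.versions.setdefault(path, []).append((author, data))
            return len(self.versions[path])  # the new version number

        def latest(self, path):
            return self.versions[path][-1][1]

    store = VersionStore()
    store.check_in("characters/hero.def", "artist_a", b"version 1")
    store.check_in("characters/hero.def", "artist_a", b"version 2")
    print(store.latest("characters/hero.def"))  # b'version 2'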
The animation server software 610 may also be configured to distribute (e.g., to other clients and/or servers) a set of modified object definition files, such that each user has access to the most recent version of these files. As described above, access to a distribution of these modified files may be controlled based on an identity of the user, various payment or billing information, etc.
Embodiments of the invention can be configured to protect stored and/or transmitted data, including without limitation object definition files, rendered objects, input data, position data, and the like. Such data can be protected in a variety of ways. As but one example, data may be protected with access control mechanisms, such as those described above. In addition, other protection measures may be implemented as well. Merely by way of example, such data may be encrypted prior to being stored at an animation server and/or prior to being transmitted between an animation server and an animation client, to prevent unauthorized access to such data. As another example, data may be digitally signed and/or certified before storage and/or before transmission between computers. Such signatures and/or certifications can be used, inter alia, to verify the identity of an entity that created and/or modified such data, which can also facilitate change tracking and/or version management of various data used by embodiments of the invention.
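Merely by way of example, an integrity protection of the kind described above might be sketched in Python as follows, signing serialized position data with an HMAC before transmission (the shared key is illustrative; a deployment could instead use certificates, as noted above):

    import hashlib
    import hmac
    import json

    SHARED_KEY = b"example-key-known-to-server-and-client"

    def sign(payload):
        # Serialize the data deterministically and compute an HMAC tag over it.
        body = json.dumps(payload, sort_keys=True).encode("utf-8")
        tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return body, tag

    def verify(body, tag):
        # Recompute the tag and compare in constant time.
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    body, tag = sign({"object": "hero", "joints": [0.1, 0.7]})
    print(verify(body, tag))  # True: the data was not modified in transit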
While a few examples of the data management services that can be provided by various embodiments of the invention are described above, one skilled in the art should appreciate, based on the disclosure herein, that a variety of additional services may be enabled by certain features of the disclosed embodiments.
FIG. 8 provides a generalized schematic illustration of one embodiment of a computer system 800 that can perform the methods of the invention and/or the functions of a computer such as the animation server and client computers described above. FIG. 8 is meant only to provide a generalized illustration of various components, any of which may be utilized as appropriate. The computer system 800 can include hardware components that can be coupled electrically via a bus 805, including one or more processors 810 and one or more storage devices 815, which can include without limitation a disk drive, an optical storage device, and/or a solid-state storage device such as a random access memory (“RAM”) and/or a read-only memory (“ROM”), which can be programmable, flash-updateable and/or the like (and which can function as a data store, as described above). Also in communication with the bus 805 can be one or more input devices 820, which can include without limitation a mouse, a keyboard and/or the like; one or more output devices 825, which can include without limitation a display device, a printer and/or the like; and a communications subsystem 830, which can include without limitation a modem, a network card (wireless or wired), an infra-red communication device, and/or the like.
The computer system 800 also can comprise software elements, shown as being currently located within a working memory 835, including an operating system 840 and/or other code 845, such as the application programs (including without limitation the animation server and client software) described above and/or designed to implement methods of the invention. Those skilled in the art will appreciate that substantial variations may be made in accordance with specific embodiments and/or requirements. For example, customized hardware might also be used, and/or particular elements might be implemented in hardware, software (including portable software, such as applets), or both.
While the invention has been described with respect to exemplary embodiments, one skilled in the art will recognize that numerous modifications are possible. For example, the processes described herein may be implemented using hardware components, software components, and/or any combination thereof. Thus, although the invention has been described with respect to exemplary embodiments, it will be appreciated that the invention is intended to cover all modifications and equivalents within the scope of the following claims.