
A virtual human (also known as a meta human or digital human)[1] is a software-simulated fictional character or human being. Virtual humans have been created as tools and artificial companions in simulation, video games, film production, human factors, ergonomic, and usability studies in various industries (aerospace, automobile, machinery, furniture, etc.), the clothing industry, telecommunications (avatars), medicine, and more. These applications require domain-dependent simulation fidelity: a medical application might require an exact simulation of specific internal organs; the film industry requires the highest aesthetic standards, natural movements, and facial expressions; ergonomic studies require faithful body proportions for a particular population segment and realistic locomotion with constraints.
Game engines such as Unreal Engine, via MetaHuman,[2] and Unity, through its acquisition of Wētā FX,[3] have enabled real-time interaction with digital humans using physically based rendering.
We see the virtual human as more than a useful artifact. We see it as a tool for understanding ourselves. If we can simulate a virtual human in a virtual world behaving in ways that are indistinguishable from a real human, then we assert that we have captured something about what it means to be human.
— Perceiving Systems, Max Planck Institute for Intelligent Systems
Research on virtual humans involves interdisciplinary collaboration across fields such as machine learning, game development, and artificial neuroscience.

There are several ways of categorising virtual humans.
- Influencing audiences on digital channels (Virtual influencer).[13] Example: Lil Miquela.
- Assisting audiences on digital channels (Virtual assistant).[13] Example: Lu is both the face of Magazine Luiza and assists users by addressing their queries and resolving problems through its Instagram channel.
- Providing a graphical representation of the user in a virtual environment (Avatar). Avatars have been popularized by online worlds such as The Palace, Second Life, Active Worlds, IMVU, Zepeto, and others.
- Starring in media (Virtual actor), most commonly movies or series. These virtual actors can be either digital representations of real actors or entirely computer-generated characters.
Ergonomic analysis provided some of the earliest applications of computer graphics for modeling a human figure and its motion. William Fetter, a Boeing art director in the 1960s, was the first person to draw a human figure using a computer; the figure is known as the "Boeing Man". The seven-jointed "First Man", used for studying the instrument panel of a Boeing 747, enabled many pilot motions to be displayed by articulating the figure's pelvis, neck, shoulders, and elbows. The addition of twelve extra joints to "First Man" produced "Second Man", a figure used to generate a set of animation film sequences based on a series of photographs produced by Eadweard Muybridge.
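The jointed-figure idea behind "First Man" — pose a figure by choosing joint angles and let the geometry follow — is the forward-kinematics computation still used for articulated characters today. A minimal planar sketch (the chain, link lengths, and angles here are illustrative, not Fetter's actual model):

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """Compute joint positions of a planar articulated chain.

    Each joint angle is relative to the previous link, so posing the
    figure amounts to choosing a list of angles (illustrative sketch,
    not the original Boeing formulation).
    """
    x, y, theta = 0.0, 0.0, 0.0
    points = [(x, y)]  # base of the chain
    for length, angle in zip(link_lengths, joint_angles):
        theta += angle                 # accumulate relative rotations
        x += length * math.cos(theta)  # advance along the current link
        y += length * math.sin(theta)
        points.append((x, y))
    return points
```

With all angles zero the chain lies straight along the x-axis; bending the first joint by 90 degrees rotates the whole chain upward, which is exactly the behavior that lets a few joint parameters drive many displayed poses.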
Several models were then developed by various companies. Cyberman (Cybernetic man-model) was developed by Chrysler Corporation for modeling human activity in and around a car.[14] It is based on 15 joints, and the position of the observer is predefined. Combiman (Computerized biomechanical man-model) was specifically designed to test how easily a human can reach objects in a cockpit;[15] it is defined using a 35-internal-link skeletal system. Boeman was designed in 1969 by the Boeing Corporation.[16] Based on a 50th-percentile three-dimensional human model, it can reach for objects such as baskets, with collisions detected and visual interferences identified; it is built as a 23-joint figure with variable link lengths. Sammie (System for Aiding Man Machine Interaction Evaluation), designed in 1972 at the University of Nottingham for general ergonomic design and analysis,[17] was the best-parameterized human model of its time, offering a choice of physical types: slim, fat, muscled, etc. Its vision system was highly developed, and complex objects could be manipulated by Sammie, which is based on 21 rigid links with 17 joints. Another notable virtual human, Buford, was developed at Rockwell International to find reach and clearance areas around a model positioned by the operator.[18] The figure represented a 50th-percentile human model covered by CAD-generated polygons and is composed of 15 independent links that must be redefined at each modification.
In facial modelling, Parke produced a representation of the head and face at the University of Utah and, three years later, proposed parametric models to produce a more realistic face.[19]
Some researchers have also used elementary volumes to create virtual human models, e.g. cylinders by Potter and Willmert[20] or ellipsoids by Herbison-Evans.[21] Badler and Smoliar[22] proposed Bubbleman, a three-dimensional human figure consisting of a number of spheres or bubbles. The model was based on overlapping spheres, whose intensity and size varied depending on the distance from the observer.
In the early 1980s, Tom Calvert, a professor of kinesiology and computer science at Simon Fraser University, attached potentiometers to a body and used the output to drive computer-animated figures for choreographic studies and clinical assessment of movement abnormalities. Calvert's animation system used the motion-capture apparatus together with Labanotation and kinematic specifications to fully specify character motion.[23]
At the same time, the Jack software package was developed at the Center for Human Modeling and Simulation at the University of Pennsylvania and was made commercially available by Tecnomatix. Jack provided a 3D interactive environment for controlling articulated figures. It featured a detailed human model and included realistic behavioral controls, anthropometric scaling, task animation and evaluation systems, view analysis, automatic reach and grasp, collision detection and avoidance, and many other tools for a wide range of applications.
At the beginning of the 1980s, several companies and research groups produced short films and demos involving virtual humans. In particular, Information International Inc., commonly called Triple-I or III, showed the potential of computer graphics by producing a 3D scan of Peter Fonda's head and the landmark demo "Adam Powers, the Juggler".
In 1982, Philippe Bergeron, Nadia Magnenat-Thalmann, and Daniel Thalmann produced Dream Flight, a film depicting a person (an articulated stick figure) transported over the Atlantic Ocean from Paris to New York. The film was completely programmed using the MIRA graphical language, an extension of the Pascal language based on graphical abstract data types. The film won several awards and was shown at the SIGGRAPH '83 Film Show. Another breakthrough came in 1985 with "Tony de Peltrie", the first film to use facial animation techniques to tell a story. The same year, the video for Mick Jagger's song "Hard Woman", developed by Digital Productions, showed an elegant animation of a stylized woman, and "The Making of Brilliance", created by Robert Abel & Associates as a TV commercial, showed remarkable motion and rendering for its time.
In 1987, the Engineering Institute of Canada celebrated its 100th anniversary. A major event, sponsored by Bell Canada and Northern Telecom, took place at the Place des Arts in Montreal. For this event, Nadia Magnenat-Thalmann and Daniel Thalmann simulated Marilyn Monroe and Humphrey Bogart meeting in a cafe in the old town of Montreal. This film, Rendez-vous in Montreal, was the first to model legendary stars in 3D, the result of extensive research on the 3D cloning of real humans as well as the modelling of their behaviour.[24]
In 1988, "Tin Toy" became the first computer-made film to win an Oscar (Best Animated Short Film). It is the story of a tin one-man-band toy attempting to escape from Billy, a silly infant. The same year, deGraf/Wahrman developed "Mike the Talking Head" for Silicon Graphics to demonstrate the real-time capabilities of their new 4D machines. Mike was driven by a specially built controller that allowed a single puppeteer to handle many parameters of the character's face, including mouth, eyes, expression, and head position. The Silicon Graphics hardware provided real-time interpolation between facial expressions and head geometry as controlled by the performer. Mike was performed live in that year's SIGGRAPH film and video show.
In 1989, Kleiser-Walczak produced Dozo, a computer animation of a woman dancing in front of a microphone while singing a song for a music video. They captured the motion using an optical system from Motion Analysis, with multiple cameras triangulating the images of small pieces of reflective tape placed on the body; the resulting output is the 3D trajectory of each reflector in space.
In 1989, in the film "The Abyss", one sequence shows a watery pseudopod acquiring a human face. This represented an important step for future synthetic characters, as it was now possible to morph one shape into a human face. The same year, Lotta Desire, actress of "The Little Death" and "Virtually Yours", demonstrated advanced facial animation and the first computer-animated kiss. In 1991, "Terminator 2" marked a milestone in the animation of virtual humans mixed with real people and sets.
In the nineties, several short films were produced, the best known being "Geri's Game" from Pixar, which received the Academy Award for Best Animated Short Film.
Behavioral animation was introduced and developed by Craig Reynolds.[25] He simulated flocks of birds and schools of fish to study emergent group movement. By integrating numerous virtual humans into virtual worlds, Musse and Thalmann then initiated the field of crowd simulation.
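Reynolds' behavioral animation rests on three local steering rules applied to each agent: separation (avoid crowding neighbours), alignment (match neighbours' heading), and cohesion (move toward neighbours' centroid). A minimal 2D sketch of one simulation step follows; the weights, neighbourhood radius, and integration step are illustrative choices, not Reynolds' published parameters:

```python
import math

def step_boids(positions, velocities, dt=0.1, radius=2.0,
               w_sep=1.5, w_ali=1.0, w_coh=1.0):
    """One update of the three steering rules for 2D agents."""
    new_vel = []
    for i, (px, py) in enumerate(positions):
        sep = [0.0, 0.0]   # steer away from close neighbours
        ali = [0.0, 0.0]   # match neighbours' average velocity
        coh = [0.0, 0.0]   # steer toward neighbours' centroid
        n = 0
        for j, (qx, qy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = qx - px, qy - py
            d = math.hypot(dx, dy)
            if 1e-9 < d < radius:      # only nearby agents influence i
                n += 1
                sep[0] -= dx / d; sep[1] -= dy / d
                ali[0] += velocities[j][0]; ali[1] += velocities[j][1]
                coh[0] += qx; coh[1] += qy
        vx, vy = velocities[i]
        if n:
            ali = [ali[0] / n - vx, ali[1] / n - vy]
            coh = [coh[0] / n - px, coh[1] / n - py]
            vx += dt * (w_sep * sep[0] + w_ali * ali[0] + w_coh * coh[0])
            vy += dt * (w_sep * sep[1] + w_ali * ali[1] + w_coh * coh[1])
        new_vel.append((vx, vy))
    new_pos = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
               for p, v in zip(positions, new_vel)]
    return new_pos, new_vel
```

Iterating this step produces flock-like motion with no central controller — the property that made the approach attractive for crowds of virtual humans, where scripting every individual would be impractical.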
Starting in the nineties, researchers shifted to real-time animation and interaction with virtual worlds. The merging of virtual reality, human animation, and video analysis techniques led to the integration of virtual humans in virtual reality, interaction with these virtual humans, and the self-representation of the participant as a clone or avatar in the virtual world. Interaction with virtual environments was planned at various levels of user configuration. A high-end configuration could involve an immersive environment where users interact by voice, gesture, and physiological signals with virtual humans that help them explore their digital data environment, both locally and over the Web. For this, virtual humans started to be able to recognize gestures, speech, and expressions of the user and to answer with speech and animation.[26] The ultimate objective of this development is to create realistic and believable virtual humans with adaptation, perception, and memory. These virtual humans paved the way for today's research aiming to produce virtual humans that can act freely while simulating emotions.[27] Ideally, the goal is to have them aware of their environment and unpredictable.