Disclosure of Invention
The invention aims to provide a system for the interactive control of virtual characters in virtual rehearsal, in view of the following technical defects of the existing virtual movie production process: the control mode of the virtual characters is single, the external interfaces are closed, a character motion database must be produced in advance, and the operability and practicality for rehearsal producers are poor.
The system for the interactive control of virtual characters in virtual rehearsal comprises an information acquisition module, a data processing module, an interaction control module, and a visualization module;
the information acquisition module comprises traditional interaction equipment, a gesture recognition unit, a motion capture unit, an external interface unit and an information acquisition priority switching unit; the data processing module comprises a dimension-ascending mapping unit; the interactive control module comprises a switching unit, a direct control unit and a mapping control unit; the visualization module comprises a visual optimization unit;
the connection relationships among the component modules and units of the system for the interactive control of virtual characters in virtual rehearsal are as follows:
the information acquisition module is connected with the data processing module; the information acquisition module and the data processing module are respectively connected with the interactive control module; the interaction control module is connected with the visualization module;
specifically: the traditional interaction equipment, the gesture recognition unit, the motion capture unit, and the external interface unit are each connected with the direct control unit, the dimension-ascending mapping unit, and the information acquisition priority switching unit; the dimension-ascending mapping unit is connected with the mapping control unit; the direct control unit and the mapping control unit are each connected with the switching unit; the direct control unit and the mapping control unit are each connected with the visual optimization unit;
the functions of the component modules of the system for the interactive control of virtual characters in virtual rehearsal are as follows:
the information acquisition module acquires low-dimensional control data information; the data processing module generates high-dimensional character joint motion data from the low-dimensional control data acquired by the information acquisition module; the interaction control module generates virtual character motion data in the virtual preview system; and the visualization module provides the preview display of the virtual character.
The working process of the system for controlling the interaction of the virtual characters in the virtual rehearsal comprises the following steps:
step 1, an information acquisition module acquires low-dimensional control data information;
the low-dimensional control data information is derived from traditional interaction equipment, a gesture recognition unit, a motion capture unit and an external interface unit in the information acquisition module;
step 2, after the low-dimensional interactive control data is obtained, the low-dimensional data is mapped into high-dimensional data; that is, the acquired control information is mapped to generate the whole-body skeleton motion and the whole-body motion of the virtual character;
the mapping that generates the whole-body skeleton motion and the whole-body motion of the virtual character is realized in the dimension-ascending mapping unit, and the mapping of the low-dimensional data to the high-dimensional data must be performed at each time step;
the whole-body motion is obtained by real-time optimization on the basis of the mapped whole-body skeleton motion of the virtual character: whole-body joint data is computed, and the whole-body motion of the virtual character is then generated; the generation of the whole-body motion is controlled by the mapping control unit;
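As a much-simplified illustration of the per-time-step dimension-ascending mapping described in step 2, the sketch below maps a single low-dimensional speed command to a vector of joint angles through a phase-driven gait function. The function name `ascend_map`, the joint count of 12, and the sinusoidal gait shape are all hypothetical stand-ins for the unit's actual mapping, which the disclosure does not specify at this level of detail:

```python
import math

def ascend_map(speed, phase, n_joints=12):
    """Hypothetical low-to-high-dimensional mapping: one scalar speed
    command becomes n_joints joint angles; even/odd joints swing in
    antiphase, and amplitude scales with the commanded speed."""
    angles = []
    for j in range(n_joints):
        side = 1.0 if j % 2 == 0 else -1.0
        angles.append(side * 0.5 * speed * math.sin(phase + 0.1 * j))
    return angles

# The mapping must run at every time step, as required above.
dt = 1.0 / 60.0          # assumed 60 Hz control loop
stride_freq = 1.5        # assumed gait frequency in Hz
phase = 0.0
trajectory = []
for _ in range(3):
    phase += 2.0 * math.pi * stride_freq * dt
    trajectory.append(ascend_map(0.8, phase))
```

Each loop iteration corresponds to one time step; the resulting high-dimensional joint vectors would then be handed to the mapping control unit for optimization.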
step 3, interactive control and visual presentation of the virtual characters;
the interactive control, namely the control of the virtual character, is realized in the interaction control module: the control data acquired by the information acquisition module is distributed to the different control units through the switching unit, and the control units generate the motion data mapped to the whole body of the character;
the visual presentation means that the virtual character is drawn and rendered on a computer and presented to the user, and is realized in the visualization module: the motion data generated in the interaction control module is transmitted to the visual optimization unit, which further edits the motion data; the edited data is then transmitted to a virtual engine for drawing, generating the virtual character image.
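The dispatch performed by the switching unit in step 3 can be sketched as routing each control sample either to the direct control unit (pass-through) or to the mapping control unit (dimension-ascending path). The mode names and the toy up-mapping below are assumptions for illustration, not the patent's actual implementation:

```python
def switching_unit(sample, mode):
    """Route one low-dimensional control sample to a control unit.
    sample: a position-like tuple; mode: 'direct' or 'mapping' (assumed names)."""
    if mode == "direct":
        # Direct control unit: the sample drives the character as-is.
        return {"unit": "direct", "pose": sample}
    if mode == "mapping":
        # Mapping control unit: a toy dimension-ascending expansion
        # of the first component into 12 joint values.
        joints = [sample[0] * 0.1 * k for k in range(12)]
        return {"unit": "mapping", "joints": joints}
    raise ValueError("unknown mode: " + mode)

result = switching_unit((0.5, 0.0, 0.0), "mapping")
```

Either branch's output would then flow on to the visual optimization unit for editing before rendering.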
Advantageous effects
Compared with existing virtual rehearsal interaction control systems, the system for the interactive control of virtual characters in virtual rehearsal has the following beneficial effects:
1. the interactive control system controls the motion of the virtual character using simple equipment, which is convenient for movie creators; the model is simple, easy to use, and fast to process, and no character motion database is required;
2. the interactive control system is user-friendly and highly open, and performs human-computer interaction control of virtual character motion based on the acquired low-dimensional control information;
3. the interactive control system has the advantages of low cost, small volume, various control modes and good portability;
4. the system is flexible in application: based on the selection of specific modules and functions, it can be practically applied to small and medium-sized production crews, and can be extended to large production crews.
Example 1
This embodiment describes the application of the virtual character interaction control system of the present invention to a virtual preview scene.
The virtual character interaction control system for virtual preview is shown in fig. 1. In fig. 1, the conventional interaction device, the gesture recognition unit, the motion capture unit, and the external interface unit are each connected with the direct control unit, the dimension-ascending mapping unit, and the information acquisition priority switching unit; the dimension-ascending mapping unit is connected with the mapping control unit; the direct control unit and the mapping control unit are each connected with the switching unit; and the direct control unit and the mapping control unit are each connected with the visual optimization unit.
The information acquisition module acquires low-dimensional control data information not only through common keyboard and mouse input, but also through OptiTrack optical motion capture and Leap Motion gesture recognition. The OptiTrack-based optical motion capture performs spatial positioning with a multi-rigid-body combination: each rigid body uses 4-12 spatial capture marker points, with no more than 20 marker points in a single rigid body, and the marker points of each rigid body are placed asymmetrically to avoid interference and collision, ensuring the uniqueness of the rigid body. In this way, the data collected for each rigid body contains spatial capture information, such as position and attitude angle, as control data. The Leap Motion gesture recognition mode uses a somatosensory controller to obtain information on each finger and wrist, comprising data for 25 joints per hand; from a single joint, multiple independent data such as spatial position, state, direction, vector, and depth can be extracted as control information. Based on the low-dimensional control data acquired by the information acquisition module, the data processing module maps the motion state of the character into a second-order inverted pendulum model, and then performs motion planning for the second-order inverted pendulum according to the gait control parameters and the environmental constraints. Based on the motion of the second-order inverted pendulum, the joint moments are computed by optimization according to the motion rules and the requirements of the creators, and the whole-body motion of the character is synthesized.
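The second-order inverted pendulum model itself is not specified in detail here. As a rough, hedged illustration of pendulum-based gait planning, the following integrates the much simpler linear inverted pendulum, whose center-of-mass dynamics are x_ddot = (g / z0) * x at constant height z0; this is only a stand-in for the second-order model used by the data processing module, not its actual dynamics:

```python
def lip_step(x, v, z0=0.9, g=9.81, dt=0.005):
    """One explicit-Euler step of the linear inverted pendulum:
    x  -- horizontal offset of the center of mass from the pivot (m)
    v  -- horizontal velocity (m/s)
    z0 -- assumed constant pendulum height (m)."""
    a = (g / z0) * x          # the pendulum accelerates away from the pivot
    return x + v * dt, v + a * dt

# Starting slightly off-balance, the center of mass diverges from the
# pivot; a gait planner would place the next footstep to catch it.
x, v = 0.05, 0.0
for _ in range(100):          # 0.5 s of simulated time
    x, v = lip_step(x, v)
```

In a full planner, the footstep placement and timing would be chosen from the gait control parameters and environmental constraints so that each new support point recaptures the diverging center of mass.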
The character motion can be controlled in real time by setting different parameters, such as stride length and step frequency, so that physically realistic whole-body motion is generated, yielding high-dimensional character joint motion data. Partial joint data is then optimized a second time according to other constraint information to drive the virtual character model. In addition, motion editing methods [5] can be adopted to control the character to turn, slow down, stop, and the like. The interaction control module generates virtual character motion data in the virtual preview system, and the visualization module provides the preview display of the virtual character.
The full flow of the solution implementation is shown in fig. 2. In fig. 2, in the preprocessing stage, character resources and scene resources are produced with three-dimensional modeling software, such as Maya and C4D. A character model with skeleton binding is made in the three-dimensional modeling software and exported to an fbx file, generating the character resources. Building models and object models needed for virtual scene construction are modeled and textured in the three-dimensional modeling software and exported to obj files, generating the model resources of the virtual scene. The character motion is created by means of, for example, motion capture or animator key-framing, and an open-source motion database can also be called to build the character motion database. After the resource production is finished, the produced resources are put into a virtual asset database for subsequent use. The scene model resources in the virtual asset database are placed into a virtual scene in combination with the virtual engine, so that the scene of the future virtual rehearsal is constructed; once put into the virtual asset database, the scene can be called in the virtual rehearsal interaction control stage.
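The virtual asset database described above can be pictured as a minimal registry that stores character rigs (fbx) and scene models (obj) keyed by resource name. The class and field names below are illustrative only and not defined by the disclosure:

```python
class VirtualAssetDatabase:
    """Toy asset registry: maps a resource name to its kind and file path."""

    def __init__(self):
        self._assets = {}

    def put(self, name, kind, path):
        """Register a produced resource, e.g. kind='character' or 'scene'."""
        self._assets[name] = {"kind": kind, "path": path}

    def get(self, name):
        """Look up a resource for the interaction control stage."""
        return self._assets[name]

db = VirtualAssetDatabase()
db.put("hero", "character", "assets/hero.fbx")        # skeleton-bound rig
db.put("warehouse", "scene", "assets/warehouse.obj")  # scene model
```

At rehearsal time, the engine would call `get` to load the scene and character resources produced in the preprocessing stage.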
In the virtual preview interactive control stage, besides the traditional interactive control unit represented by the mouse and keyboard, the information acquisition module can obtain low-dimensional control information in two simple and friendly ways: the motion capture unit, which captures the position and attitude of the rigid-body system through optical motion capture as control parameters, and the gesture recognition unit, which acquires gesture information as control parameters. After the low-dimensional control data information is collected, the character model is driven by the low-dimensional data to generate character walking motion. According to the character motion information, the characters in the engine are driven in real time; the data processing module performs the mapping, the leg actions drive the hand actions, and the motion database data of the characters can be used at the same time to enrich the character motion details, making the generated motion more natural and vivid. Meanwhile, the scheme can cooperate with an external expansion plug-in, such as a semi-physical simulation system for film and television photography, to perform virtual control.
In the virtual preview stage, a semi-physical simulation system for film and television photography is used: the position and attitude of the virtual camera are controlled through the semi-physical model to perform interactive previewing of the virtual character. In the post-processing stage of the virtual preview, the preview video material is cut with nonlinear editing software, and the preview film is then generated with compositing software for reference during live shooting.
It should be emphasized that those skilled in the art could make several modifications without departing from the spirit of the present invention, which should also be considered as falling within the scope of the present invention.
The interactive preview system for virtual characters is divided into a live real-time service module and an offline service module, as shown in fig. 3.
The offline service module creates virtual characters and virtual scene assets in asset creation software to form a virtual asset database. When needed, the data in the virtual asset database, such as scene models and skeleton animations, is called.
In the field real-time service module, the motion capture unit, the gesture recognition unit, the external interface unit, and the traditional control equipment unit acquire control data. Dimension-ascending mapping processing is performed on the acquired data to generate preliminary bone data, which is further optimized in the visual optimization unit in combination with the control data of the mapping control unit or the direct control unit, as specified by the mode switching unit. Meanwhile, by combining the virtual asset data called by the offline service and cooperating with an external expansion plug-in, such as a semi-physical simulation system for film and television photography, a visual preview interface can be presented in the engine.
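The information acquisition priority switching unit is not specified in detail in the disclosure. One plausible sketch is a fixed priority order over the four input sources, selecting the highest-priority source that currently has data; the particular order shown below is an assumption:

```python
# Assumed priority order, highest first; the disclosure does not fix one.
PRIORITY = ["external_interface", "motion_capture",
            "gesture_recognition", "keyboard_mouse"]

def select_source(available):
    """Return the highest-priority input source that has fresh data,
    or None when no source is currently producing control data."""
    for src in PRIORITY:
        if src in available:
            return src
    return None
```

When several units produce data simultaneously, the unit chosen here would be the one whose control data is forwarded downstream.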
The information acquisition module mainly realizes acquisition of control data. In virtual rehearsal, the interaction mode of controlling virtual characters by creators needs to meet the characteristics of convenience, rapidness and easy operation. In the system of this embodiment, the specific module units include the following:
besides the traditional control equipment represented by the basic mouse and keyboard, multiple data acquisition modes are used, such as the motion capture unit, the gesture recognition unit, and the external interface unit; the acquisition modes can be expanded as acquisition methods diversify in the future.
The motion capture unit performs spatial localization using optical motion capture. The data collected in this way contains spatial capture information, such as position and attitude angle, as low-dimensional control data.
The gesture recognition unit obtains information of each finger and each wrist by using the body sensing controller, and extracts a plurality of independent data such as spatial position, state, direction, vector, depth and the like as control information.
In addition, various external interface units can also be used for low-dimensional data acquisition, for example: sensors, handles, and the like.
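Whichever unit produces them, the low-dimensional control samples could share a common shape combining the position and attitude-angle information named above. The dataclass below is a hypothetical unified container for such samples, not a structure defined by the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class ControlSample:
    """One low-dimensional control sample, whichever acquisition unit
    produced it (field names are illustrative assumptions)."""
    source: str                          # e.g. "mocap", "gesture", "external"
    position: tuple = (0.0, 0.0, 0.0)    # spatial position
    attitude: tuple = (0.0, 0.0, 0.0)    # attitude angles
    extras: dict = field(default_factory=dict)  # e.g. per-finger joint data

s = ControlSample(source="mocap", position=(1.0, 0.0, 2.0))
```

A shared sample type would let the priority switching unit and the control units treat all four input paths uniformly.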
The data processing module performs the core processing, completing the low-dimensional-data-driven motion of the virtual character. Driving the whole-body motion of the virtual character with the collected low-dimensional control data is the key point of the interactive control of the virtual character. The process is as follows: after the low-dimensional control data is collected, motion planning is carried out according to the gait control parameters and the environmental constraints; then, according to the motion rules and the requirements of the creators, the joint moments are computed by optimization and the whole-body motion of the character is synthesized. The character motion can be controlled in real time by setting different parameters, such as stride length and step frequency, generating physically realistic whole-body motion.
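The gait-parameter-driven planning step can be illustrated, in a very reduced form, by laying out alternating footstep targets from a stride length and a step frequency; the joint-moment optimization itself is omitted, and the function below is an illustrative sketch rather than the module's actual planner:

```python
def plan_footsteps(stride, step_freq, n_steps):
    """Plan alternating left/right footstep targets for a straight walk.
    stride    -- distance covered per step (m), a gait control parameter
    step_freq -- steps per second, a gait control parameter
    Returns a list of (side, x_position, time) tuples."""
    plan = []
    for k in range(n_steps):
        side = "L" if k % 2 == 0 else "R"
        x = (k + 1) * stride          # forward position of this footstep
        t = (k + 1) / step_freq       # time at which the step lands
        plan.append((side, x, t))
    return plan

plan = plan_footsteps(stride=0.6, step_freq=1.5, n_steps=4)
```

Changing either parameter in real time, as the paragraph above describes, would immediately reshape the remaining footstep plan and hence the synthesized gait.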
Finally, the generated data is remapped, and partial joint data is optimized a second time according to other constraint information to drive the virtual character model.
The interaction control module and the visualization module realize the visual presentation of the virtual character movement. The visualization of the movement of the virtual character is a very important link in virtual rehearsal, and in order to achieve the effect of real-time rehearsal, rendering is generally performed in a virtual engine.
Because character models generally differ, directly driving a character model with the generated joint data can make the motion of the virtual character insufficiently natural and realistic; therefore, the visual optimization unit performs motion retargeting on the joint data before the data is used.
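Motion retargeting can be sketched under the simplifying assumption that it reduces to remapping each joint angle from the source rig's range of motion onto the corresponding range of the target rig; real retargeting also handles bone lengths and offsets, which this toy version ignores:

```python
def retarget(source_angles, source_limits, target_limits):
    """Remap each joint angle linearly from the source rig's joint range
    onto the target rig's joint range (a deliberately simplified model)."""
    out = []
    for a, (s_lo, s_hi), (t_lo, t_hi) in zip(source_angles,
                                             source_limits,
                                             target_limits):
        t = (a - s_lo) / (s_hi - s_lo)   # normalized position in source range
        out.append(t_lo + t * (t_hi - t_lo))
    return out

# A 45-degree knee on a 0-90 source rig maps to 60 degrees on a 0-120 rig.
mapped = retarget([45.0], [(0.0, 90.0)], [(0.0, 120.0)])
```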
Due to time-complexity limitations, the number of character joints generated by real-time calculation is small, while the virtual character model in the actual preview is complex. To control the virtual character to generate natural whole-body motion, the visual optimization unit enriches the motion details of the virtual character model by combining the existing data in the motion database. Based on the calculated joint motion data, the other joints are driven and some artistic processing is applied, thereby generating the desired virtual character motion.
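One simple reading of enriching motion details from the motion database is that joints the real-time solver did not compute are filled in from a full database pose, while solver output takes precedence where it exists. The sketch below assumes exactly that interpretation; the joint names are hypothetical:

```python
def enrich_pose(computed, db_pose):
    """Merge a sparse, real-time-computed pose with a dense database pose:
    database values fill the joints the solver skipped, and computed
    values override the database where both exist."""
    pose = dict(db_pose)      # start from the dense database pose
    pose.update(computed)     # solver output wins on overlapping joints
    return pose

db_pose = {"hip": 10.0, "knee": 20.0, "finger_1": 5.0}  # dense DB frame
computed = {"hip": 12.0, "knee": 25.0}                  # sparse solver frame
full = enrich_pose(computed, db_pose)
```

The merged pose keeps the physically computed hip and knee while borrowing the finger detail from the database clip.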
In the interaction control module of the virtual character, feedback largely affects the controller's command of the virtual character, and the virtual preview system must react to the state of the character. The feedback modes are various: using haptic feedback and sound feedback technology, the controller can directly and rapidly learn the state of the virtual character in the virtual scene through vibration, voice, and other means, and adjust the control of the character in time, thereby improving production efficiency.
In addition to the feedback of the avatar to the controller, the feedback of the avatar to objects in the virtual scene is also important, such as the avatar picking up an object. To achieve this interaction, we need to use additional data to record the type of action that the virtual character is performing. By utilizing the physical engine, the object of the virtual scene can be subjected to physical simulation and interacts with the virtual character, so that the workload of a film creator is reduced, and the virtual rehearsal efficiency is improved.
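Recording the action type the character is performing, as described above, might look like the following toy state update; the action names, the `Prop` class, and the returned log record are all hypothetical illustrations of the extra data the paragraph mentions:

```python
class Prop:
    """A scene object the character can interact with."""
    def __init__(self, name):
        self.name = name
        self.attached_to = None   # character currently holding the prop

def apply_action(character, action, prop):
    """Record the action type and update the scene state accordingly."""
    if action == "pick_up":
        prop.attached_to = character   # prop now follows the character
    elif action == "put_down":
        prop.attached_to = None        # hand the prop back to the physics engine
    else:
        raise ValueError("unknown action: " + action)
    return {"character": character, "action": action, "prop": prop.name}

cup = Prop("cup")
log = apply_action("hero", "pick_up", cup)
```

While a prop is attached, the engine would parent it to the character's hand; once put down, the physics engine resumes simulating it.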
In order to complete interactive preview of a virtual character, a virtual camera needs to be added into an engine, a shooting process is simulated, and pictures in the camera are rendered and exported to form a video file. To this end, we have established an interactive system with cameras that can operate precisely in real time with virtual cameras. Based on a film and television photography semi-physical simulation system, a camera in an engine can be controlled in real time to conduct interactive previewing of virtual characters.
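Driving the in-engine camera from the semi-physical rig can be sketched as copying the rig's measured pose into a virtual camera on every frame; the pose dictionary layout and class below are assumptions, not the interface of the actual semi-physical simulation system:

```python
class VirtualCamera:
    """In-engine camera whose pose mirrors the semi-physical rig."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.rotation = (0.0, 0.0, 0.0)   # assumed Euler angles in degrees

    def update_from_rig(self, rig_pose):
        """Called once per frame with the rig's latest measured pose."""
        self.position = rig_pose["position"]
        self.rotation = rig_pose["rotation"]

cam = VirtualCamera()
cam.update_from_rig({"position": (1.0, 1.5, -3.0),
                     "rotation": (0.0, 30.0, 0.0)})
```

Rendering from this camera each frame and exporting the frames yields the preview video file described above.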
The above detailed description is intended to illustrate the objects, aspects and advantages of the present invention, and it should be understood that the above detailed description is only exemplary of the present invention and is not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.