Naked eye 3D resource processing method and device, storage medium and electronic equipment
Technical Field
The embodiments of the disclosure relate to the technical field of computers, and in particular to a naked eye 3D resource processing method, a naked eye 3D resource processing device, a computer readable storage medium and electronic equipment.
Background
In existing methods, a naked eye 3D resource needs to be obtained through modeling, animation production and animation rendering, which makes the generation efficiency of naked eye 3D resources low.
It should be noted that the information disclosed in the above Background section is only for enhancing understanding of the background of the present disclosure, and therefore may include information that does not constitute prior art already known to those of ordinary skill in the art.
Disclosure of Invention
The disclosure aims to provide a processing method of naked eye 3D resources, a processing device of naked eye 3D resources, a computer readable storage medium and electronic equipment, so as to overcome, at least to a certain extent, the problem of low generation efficiency of naked eye 3D resources caused by the limitations and defects of the related art.
According to one aspect of the present disclosure, there is provided a method for processing naked eye 3D resources, including:
Creating a newly added generation scene corresponding to a target interaction model, and loading the target interaction model into the newly added generation scene to obtain an original resource scene;
Calling a preset resource library to adjust the original resource scene to obtain a target resource scene, and determining the viewpoint number of the target resource scene according to the target equipment parameters in the newly-added generation scene;
Configuring lens parameters of a virtual lens group corresponding to the target resource scene according to the viewpoint number, and configuring the virtual lens group according to the lens parameters;
And generating naked eye 3D resources corresponding to the target interaction model based on the target resource scene, the virtual lens group and the zero point position.
In one exemplary embodiment of the present disclosure, creating a newly added generation scenario corresponding to a target interaction model includes:
Responding to touch operation of a first preset interactable control on a display interface of the resource creation terminal, and displaying an engineering creation sub-interface;
Determining target equipment parameters and naked eye 3D interleaving parameters of target display equipment for displaying naked eye 3D resources corresponding to the target interaction model in response to input operation of the engineering creation sub-interface;
and responding to touch operation of a second preset interactable control in the engineering creation sub-interface, and creating a newly added generation scene corresponding to the target interaction model.
In an exemplary embodiment of the present disclosure, loading the target interaction model into the newly added generation scene to obtain an original resource scene includes:
Acquiring the target interaction model from a preset model library according to the target model name of the target interaction model and/or
Importing the target interaction model from an external file according to the target model name of the target interaction model;
And adaptively adjusting the model size of the target interaction model in the newly added generation scene to obtain the original resource scene.
In an exemplary embodiment of the present disclosure, adaptively adjusting the model size of the target interaction model in the newly added generation scene to obtain the original resource scene includes:
constructing a first rectangle according to the newly added generation scene, and constructing a second rectangle according to the target interaction model;
Calculating a model scaling coefficient of the target interaction model in the newly-added generation scene according to the first rectangle and the second rectangle;
And adaptively adjusting the model size of the target interaction model based on the model scaling coefficient to obtain the original resource scene.
In an exemplary embodiment of the present disclosure, constructing a first rectangle according to the newly added generation scene includes:
displaying the newly added generation scene on a display interface of the resource creation terminal;
taking the central point of the display interface as the central point of a first rectangle, and determining the first rectangle length and the first rectangle width of the first rectangle according to the interface length and the interface width occupied by the newly-added generation scene on the display interface;
and constructing the first rectangle according to the center point of the first rectangle, the length of the first rectangle and the width of the first rectangle.
In an exemplary embodiment of the present disclosure, constructing a second rectangle according to the target interaction model includes:
acquiring pixel point coordinates of the pixel points in the target interaction model, and acquiring the maximum abscissa value, the maximum ordinate value, the minimum abscissa value and the minimum ordinate value among the pixel point coordinates;
Determining the height of the second rectangle according to the maximum abscissa value and the minimum abscissa value, and determining the length of the second rectangle according to the maximum ordinate value and the minimum ordinate value;
And taking the center point of the target interaction model as the center point of a second rectangle, and constructing the second rectangle according to the center point of the second rectangle, the height of the second rectangle and the length of the second rectangle.
In an exemplary embodiment of the present disclosure, calculating a model scaling coefficient of the target interaction model in the newly added generation scene according to the first rectangle and the second rectangle includes:
Mapping the first rectangle into the three-dimensional coordinate system of a resource generation engine to obtain a rectangle mapping result;
Calculating a first ratio between the rectangle mapping height in the rectangle mapping result and the second rectangle height, and calculating a second ratio between the rectangle mapping length in the rectangle mapping result and the second rectangle length;
And determining a model scaling coefficient of the target interaction model in the newly-added generation scene based on the first ratio and the second ratio.
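Expressed as formulas (a hedged reading of the steps above; the disclosure only states that the coefficient is determined from the two ratios, and taking the smaller of the two is one natural choice):

$$ r_1 = \frac{H_{\text{map}}}{H_2}, \qquad r_2 = \frac{L_{\text{map}}}{L_2}, \qquad s = \min(r_1, r_2), $$

where $H_{\text{map}}$ and $L_{\text{map}}$ are the height and length of the rectangle mapping result, $H_2$ and $L_2$ are the second rectangle height and length, and $s$ is the model scaling coefficient.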
In an exemplary embodiment of the present disclosure, the method for processing naked eye 3D resources further includes:
In response to a model loading operation, loading a newly added virtual model in the original resource scene and/or importing the newly added virtual model in the original resource scene;
Generating a model label corresponding to the newly added virtual model, and displaying the model label in the original resource scene.
In an exemplary embodiment of the present disclosure, the method for processing naked eye 3D resources further includes:
And responding to the touch operation of the model label, displaying a newly added virtual model corresponding to the model label in the original resource scene, and switching a target interaction model in the original resource scene based on the newly added virtual model.
In an exemplary embodiment of the present disclosure, the method for processing naked eye 3D resources further includes:
In response to the touch operation of the model labels, the display order of the model labels in the original resource scene is adjusted, and/or
Displaying a model timing setting interface of the newly added virtual model and/or the target interaction model;
And determining the model display duration of the newly added virtual model and/or the target interaction model in response to an input operation on the model timing setting interface.
In an exemplary embodiment of the present disclosure, the preset resource library includes at least one of a scene library, a model library, an animation library, a material library, a light library, and a sound library.
In an exemplary embodiment of the present disclosure, invoking a preset resource library to adjust the original resource scene to obtain a target resource scene includes:
Loading an original three-dimensional scene corresponding to the target interaction model from the scene library, and adding the original three-dimensional scene into the newly added generation scene, and/or
Loading an original three-dimensional animation from the animation library, and applying the original three-dimensional animation to the target interaction model, and/or
Loading original light from the light library, and adding the original light into the newly added generation scene, and/or
Loading model materials corresponding to the target interaction model from the material library, and applying the model materials to the target interaction model, and/or
Loading audio data corresponding to the target interaction model from the sound library, and adding the audio data into the newly-added generation scene;
And adjusting the model attribute and/or animation attribute and/or light attribute and/or material attribute and/or sound attribute of the target interaction model in the newly added generation scene to obtain the target resource scene.
In one exemplary embodiment of the present disclosure, the original three-dimensional animation includes a program animation and/or a keyframe animation;
Wherein applying the original three-dimensional animation to the target interaction model includes:
Adding the program animation to the target interaction model, and/or
And mounting the target interaction model under an animation object in the key frame animation so as to take the target interaction model as a sub-object of the animation object.
In an exemplary embodiment of the present disclosure, applying the model material to the target interaction model includes:
And in response to dragging the model material to the target interaction model, replacing the original material in the target interaction model based on the model material.
In one exemplary embodiment of the present disclosure, the model attributes include structure level attributes and/or location attributes;
Adjusting model attributes of the target interaction model, including:
responding to touch operation of the model attribute interaction control, and displaying a model adjustment interface of the model attribute of the target interaction model;
And responding to the input operation of the model adjustment interface, and adjusting the attribute value of the structural hierarchy attribute and/or the attribute value of the position attribute of the target interaction model.
In an exemplary embodiment of the present disclosure, adjusting the model attribute of the target interaction model further includes:
adjusting a current model position of the target interaction model in the newly added generation scene and/or rotating the target interaction model in response to a movement event acting on the target interaction model.
In one exemplary embodiment of the present disclosure, adjusting the animation properties includes:
Responding to the touch operation of the animation setting interaction control, and displaying an animation adjustment interface corresponding to the animation attribute;
And responding to the input operation of the animation adjustment interface, adjusting the animation period time length and/or the animation amplitude in the animation attribute, and/or adjusting the offset of the animation track in the animation attribute in the newly-added generation scene.
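For intuition about how these animation attributes interact, the following is a minimal sketch of a procedural bounce animation driven by a period, an amplitude and a track offset; the function and parameter names are illustrative assumptions rather than the editor's actual API.

```python
import math

def evaluate_procedural_animation(t, period_s=2.0, amplitude=0.5, track_offset=(0.0, 0.0, 0.0)):
    """Return a model position offset at time t for a simple up-down bounce.

    period_s    : animation period duration (seconds)
    amplitude   : animation amplitude (scene units)
    track_offset: offset of the animation track in the generation scene
    """
    phase = (t % period_s) / period_s                 # progress within one cycle, 0..1
    y = amplitude * math.sin(2.0 * math.pi * phase)   # vertical bounce
    ox, oy, oz = track_offset
    return (ox, oy + y, oz)

# Halving the period doubles the bounce frequency; raising the amplitude widens the bounce.
print(evaluate_procedural_animation(0.5, period_s=2.0, amplitude=0.5))
```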
In one exemplary embodiment of the present disclosure, the original light includes at least one of parallel light, a point light source, a spotlight, and combined light composed of the point light source and the spotlight;
Wherein adjusting the light attribute includes:
Responding to a touch operation on the light setting interaction control, and displaying a light adjustment interface corresponding to the light attribute;
And responding to an input operation on the light adjustment interface, and adjusting the light position and/or the light intensity and/or the light color of the parallel light and/or the point light source and/or the spotlight and/or the combined light in the newly added generation scene.
In an exemplary embodiment of the present disclosure, the material properties include at least one of model color, texture map, normal map, transparency, gloss, and refraction;
Wherein adjusting the material attribute includes:
Responding to a touch operation on the material setting interaction control, and displaying a material adjustment interface corresponding to the material attribute;
And responding to the input operation of the material adjustment interface, and adjusting the model color and/or texture mapping and/or normal mapping and/or transparency and/or glossiness and/or refraction of the target interaction model.
In one exemplary embodiment of the present disclosure, adjusting sound properties includes:
Responding to a touch operation on the sound setting interaction control, and displaying a sound adjustment interface corresponding to the sound attribute;
and adjusting the volume of the audio data in response to the input operation of the sound adjusting interface.
In an exemplary embodiment of the present disclosure, determining the number of viewpoints of the target resource scene according to the target equipment parameters in the newly added generation scene includes:
determining equipment attribute information of target display equipment corresponding to the target equipment parameters according to the target equipment parameters in the newly-added generation scene;
and determining the number of viewpoints required by the target display equipment when displaying the target resource scene according to the equipment attribute information.
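As a shape-of-the-data illustration, a lookup of this kind might be sketched as follows; the table keys and viewpoint counts are hypothetical and only stand in for the built-in device attribute information mentioned above.

```python
# Hypothetical device-attribute table; real values come from the software's built-in device configuration.
DEVICE_VIEWPOINTS = {
    "lenticular-27inch-landscape": 9,
    "lenticular-32inch-portrait": 8,
}

def determine_viewpoint_number(device_attribute_info: str, default: int = 9) -> int:
    """Resolve how many viewpoints the target display equipment needs for the target resource scene."""
    return DEVICE_VIEWPOINTS.get(device_attribute_info, default)
```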
In one exemplary embodiment of the present disclosure, determining lens parameters of a virtual lens group corresponding to the target resource scene according to the viewpoint number includes: determining, according to the viewpoint number, the number of lenses of the virtual lens group corresponding to the target resource scene, the zero point position of the target resource scene, the lens spacing of each virtual lens in the virtual lens group, and the distance difference between the virtual lens group and the zero plane.
In an exemplary embodiment of the present disclosure, configuring the virtual lens group according to the lens parameters includes:
And placing the virtual lens at the original lens position, and adjusting the lens parameters of the virtual lens at the original lens position to obtain the virtual lens group according to the virtual lens with the adjusted parameters.
In one exemplary embodiment of the present disclosure, adjusting lens parameters of the virtual lens at the original lens position includes:
Responding to touch operation of the naked eye setting interaction control, and displaying a naked eye parameter setting interface;
And responding to input operation in the naked eye parameter setting interface, adjusting the lens spacing and/or the lens posture information and/or the lens visual angle information of the virtual lens at the original lens position, and/or adjusting the original lens position of the virtual lens.
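To make the relationship between these lens parameters concrete, the following is a minimal sketch that places an equidistant row of virtual lenses, all aimed at the zero point, at a given distance from the zero plane; the data structures and the row-centred placement formula are illustrative assumptions, not the editor's actual implementation (which runs inside the Unity3D/UE engine).

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VirtualLens:
    position: Tuple[float, float, float]
    look_at: Tuple[float, float, float]   # every lens faces the zero point
    fov_degrees: float = 40.0             # lens viewing-angle information

@dataclass
class VirtualLensGroup:
    lenses: List[VirtualLens] = field(default_factory=list)

def configure_lens_group(viewpoint_number: int,
                         zero_point: Tuple[float, float, float],
                         lens_spacing: float,
                         distance_to_zero_plane: float) -> VirtualLensGroup:
    """Place `viewpoint_number` equidistant lenses in a row, all aimed at the zero point."""
    group = VirtualLensGroup()
    zx, zy, zz = zero_point
    row_width = lens_spacing * (viewpoint_number - 1)
    for i in range(viewpoint_number):
        x = zx - row_width / 2.0 + i * lens_spacing
        group.lenses.append(VirtualLens(position=(x, zy, zz - distance_to_zero_plane),
                                        look_at=zero_point))
    return group
```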
In an exemplary embodiment of the present disclosure, generating a naked eye 3D resource corresponding to the target interaction model based on the target resource scene, the virtual lens group, and the zero point position includes:
determining a zero plane position according to the zero point position, and adjusting the zero plane position;
Determining a three-dimensional display area and a plane display area of a target interaction model in the target resource scene according to the adjusted zero plane position;
Determining a model placement area of the target interaction model in the target resource scene according to the three-dimensional display area and the plane display area, and adjusting the target model position of the target interaction model based on the model placement area;
And publishing the target resource scene with the adjusted positions and the virtual lens group to obtain naked eye 3D resources corresponding to the target interaction model.
In an exemplary embodiment of the present disclosure, publishing the target resource scene and the virtual lens group after the position adjustment to obtain the naked eye 3D resource corresponding to the target interaction model includes:
responding to touch operation of the resource release interaction control, and displaying a resource release interface;
Responding to touch operation of a resource release interface, and determining a resource release type;
and based on the resource release type, releasing the target resource scene with the adjusted position and the virtual lens group to obtain naked eye 3D resources corresponding to the target interaction model.
In an exemplary embodiment of the present disclosure, the resource release type includes at least one of a program resource category, a video resource category, and a sequence frame resource category.
In an exemplary embodiment of the present disclosure, when the resource release type is the program resource category, releasing the position-adjusted target resource scene and the virtual lens group based on the resource release type to obtain a naked eye 3D resource corresponding to the target interaction model includes:
Displaying a resource release interface corresponding to the program resource category;
Responding to input operation of a resource release interface corresponding to the program resource category, and determining a storage path of the naked eye 3D resource;
And packaging the target resource scene with the adjusted positions and the virtual lens group to obtain naked eye 3D resources with program resource categories.
In an exemplary embodiment of the present disclosure, when the resource release type is the video resource category and/or the sequence frame resource category, releasing the position-adjusted target resource scene and the virtual lens group based on the resource release type to obtain a naked eye 3D resource corresponding to the target interaction model includes:
Displaying a resource parameter adjustment interface corresponding to the video resource category and/or the sequence frame resource category;
determining target resource parameters corresponding to the video resource categories and/or the sequence frame resource categories in response to input operation of the resource parameter adjustment interface;
And storing the target resource parameters, the target resource scene with the adjusted positions and the virtual lens group to obtain naked eye 3D resources with video resource categories and/or sequence frame resource categories.
In an exemplary embodiment of the present disclosure, the target resource parameter includes at least one of a rendering style parameter, a resolution parameter, and an output type parameter;
Wherein storing the target resource parameters, the position-adjusted target resource scene and the virtual lens group to obtain naked eye 3D resources of the video resource category and/or the sequence frame resource category includes:
determining a target rendering style of the naked eye 3D resource according to the rendering style parameters in the target resource parameters;
Determining a target picture type of the output picture based on the output type parameter in the target resource parameters, and determining a target resolution of the output picture based on the resolution parameter in the target resource parameters;
And responding to the touch operation of a third preset interaction control in the resource parameter adjustment interface, and outputting naked eye 3D resources with target rendering patterns and target resolutions and video resource categories and/or sequence frame resource categories.
In an exemplary embodiment of the present disclosure, the target rendering style includes a multi-view stitching mode or a rendering result mode, and the target picture type includes a video picture type or a sequence frame picture type;
Wherein, responding to the touch operation of the third preset interaction control in the resource parameter adjustment interface, outputting naked eye 3D resources with target rendering style and target resolution and video resource category and/or sequence frame resource category, comprising:
Outputting naked eye 3D resources with a multi-view stitching mode and target resolution and video picture types in response to touch operation of a third preset interactive control in the resource parameter adjustment interface, or
Outputting naked eye 3D resources with rendering result modes and target resolutions and video picture types in response to touch operation of a third preset interaction control in the resource parameter adjustment interface, or
Outputting naked eye 3D resources with multi-view stitching mode and target resolution and sequence frame picture types in response to touch operation of a third preset interactive control in the resource parameter adjustment interface, or
And responding to the touch operation of a third preset interaction control in the resource parameter adjustment interface, and outputting naked eye 3D resources with rendering result modes and target resolution and sequence frame picture types.
In an exemplary embodiment of the present disclosure, the method for processing naked eye 3D resources further includes:
Outputting the naked eye 3D resources with the program resource categories to target display equipment, displaying the naked eye 3D resources with the program resource categories through the target display equipment, and/or
And outputting the naked eye 3D resources with the video resource categories and/or the sequence frame resource categories to target display equipment, and displaying the naked eye 3D resources with the video resource categories and/or the sequence frame resource categories through the target display equipment.
In an exemplary embodiment of the present disclosure, after displaying, by the target display device, the naked eye 3D resource having the program resource category, the method for processing the naked eye 3D resource further includes:
Responding to the input current interaction gesture, acquiring hand state information and finger movement direction, and determining a current interaction instruction required to be executed by a target interaction model in the naked eye 3D resource based on the hand state information and the finger movement direction;
And controlling a target interaction model in the naked eye 3D resource to execute the current interaction instruction, switching the target interaction model from an original model state to a target model state corresponding to the current interaction instruction, and displaying a model animation generated by executing the current interaction instruction.
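A minimal sketch of the mapping from hand state information and finger movement direction to a current interaction instruction is given below; the state labels, threshold and instruction names are hypothetical stand-ins, since the disclosure does not fix them.

```python
def determine_interaction_instruction(hand_state: str, finger_direction: tuple) -> str:
    """Map a coarse hand state and the dominant finger movement direction to an instruction name.

    hand_state      : e.g. "open_palm", "fist", "pinch" (hypothetical labels)
    finger_direction: (dx, dy, dz) movement vector of the tracked finger
    """
    dx, dy, dz = finger_direction
    if hand_state == "fist":
        return "explode"                                    # explosion instruction
    if abs(dx) >= abs(dy):
        return "move_left_right" if abs(dx) > 0.1 else "rotate"
    return "move_up_down" if abs(dy) > 0.1 else "rotate"
```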
In one exemplary embodiment of the present disclosure, the current interaction gesture includes at least one of a human interaction gesture, a somatosensory controller interaction gesture, an external device interaction gesture, and a handle interaction gesture.
In an exemplary embodiment of the present disclosure, controlling a target interaction model in the naked eye 3D resource to execute the current interaction instruction includes:
Controlling the target interaction model in the naked eye 3D resource to execute up-down movement instructions and/or left-right movement instructions, and/or
Controlling the target interaction model in the naked eye 3D resource to execute a rotation instruction, and/or
And controlling the target interaction model in the naked eye 3D resource to execute an explosion instruction.
In an exemplary embodiment of the present disclosure, controlling the target interaction model in the naked eye 3D resource to execute the explosion instruction includes:
Controlling a model composition sub-module of a target interaction model in the naked eye 3D resource to move according to a preset direction and a preset angle so as to achieve an explosion effect;
The preset direction comprises a free movement direction or a coordinate axis movement direction, and the preset angle comprises a local angle of the model composition sub-module relative to the target interaction model.
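As an illustration of the explosion instruction, the sketch below offsets each model composition sub-module along a preset direction: pushing outward from the model's local origin stands in for the free movement direction, and spreading along a fixed axis stands in for the coordinate-axis movement direction. All names and the offset distance are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SubModule:
    name: str
    local_position: Tuple[float, float, float]   # position relative to the target interaction model

def explode(parts: List[SubModule], mode: str = "free",
            axis: Tuple[float, float, float] = (1.0, 0.0, 0.0),
            distance: float = 0.5) -> List[SubModule]:
    """Move each sub-module along a preset direction to produce an exploded view."""
    exploded = []
    for part in parts:
        x, y, z = part.local_position
        if mode == "free":
            # Free movement direction: push outward from the model's local origin.
            length = max((x * x + y * y + z * z) ** 0.5, 1e-6)
            direction = (x / length, y / length, z / length)
        else:
            # Coordinate-axis movement direction: spread along a fixed axis.
            direction = axis
        exploded.append(SubModule(part.name,
                                  (x + direction[0] * distance,
                                   y + direction[1] * distance,
                                   z + direction[2] * distance)))
    return exploded
```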
In an exemplary embodiment of the present disclosure, the method for processing naked eye 3D resources further includes:
and controlling the target interaction model to restore from the target model state to the original model state at preset time intervals.
According to one aspect of the present disclosure, there is provided a processing apparatus for naked eye 3D resources, including:
The original resource scene generation module is used for creating a newly-added generation scene corresponding to the target interaction model, and loading the target interaction model in the newly-added generation scene to obtain an original resource scene;
The viewpoint number determining module is used for calling a preset resource library to adjust the original resource scene to obtain a target resource scene, and determining the viewpoint number of the target resource scene according to the target equipment parameters in the newly-added generation scene;
The virtual lens group configuration module is used for configuring lens parameters of the virtual lens group corresponding to the target resource scene according to the viewpoint number and configuring the virtual lens group according to the lens parameters;
And the naked eye 3D resource generation module is used for generating naked eye 3D resources corresponding to the target interaction model based on the target resource scene, the virtual lens group and the zero point position.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for processing naked eye 3D resources described in any one of the above.
According to one aspect of the present disclosure, there is provided an electronic device including:
A processor; and
A memory for storing executable instructions of the processor;
Wherein the processor is configured to execute the processing method of the naked eye 3D resource according to any one of the above via execution of the executable instructions.
According to the naked eye 3D resource processing method provided by the present disclosure, on one hand, a newly added generation scene corresponding to a target interaction model is created, the target interaction model is loaded into the newly added generation scene to obtain an original resource scene, a preset resource library is then called to adjust the original resource scene to obtain a target resource scene, the viewpoint number of the target resource scene is determined according to the target equipment parameters in the newly added generation scene, the virtual lens group corresponding to the target resource scene is then configured according to the viewpoint number and the zero point position of the target resource scene is determined, and finally naked eye 3D resources corresponding to the target interaction model are generated based on the target resource scene, the virtual lens group and the zero point position; automatic generation of naked eye 3D resources is thereby achieved, which solves the problem in the prior art that the generation efficiency of naked eye 3D resources is low because they have to be obtained through modeling, animation production and animation rendering, and improves the generation efficiency of naked eye 3D resources. On the other hand, because the target interaction model can be loaded into the newly added generation scene to obtain the original resource scene, and the preset resource library can be called to adjust the original resource scene to obtain the target resource scene, the difficulty of producing naked eye 3D resources is reduced. Furthermore, because the viewpoint number of the target resource scene is determined according to the target equipment parameters in the newly added generation scene, the generated naked eye 3D resources match the target display equipment, so that the display effect of the naked eye 3D resources is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure. It will be apparent to those of ordinary skill in the art that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived from them without undue effort.
Fig. 1 schematically illustrates a flowchart of a method for processing naked eye 3D resources according to an exemplary embodiment of the present disclosure.
Fig. 2 schematically illustrates an editor system block diagram for multi-view naked eye 3D content viewing and production according to an example embodiment of the present disclosure.
FIG. 3 schematically illustrates an interface example diagram of a created engineering list according to an example embodiment of the present disclosure.
FIG. 4 schematically illustrates an example diagram of an interface for deleting an engineering according to an example embodiment of the present disclosure.
Fig. 5 schematically illustrates an exemplary diagram of the structure of a repository according to an exemplary embodiment of the present disclosure.
FIG. 6 schematically illustrates an example diagram of an engineering creation sub-interface, according to an example embodiment of the present disclosure.
Fig. 7 schematically illustrates an example diagram of a scene for a model label list according to an example embodiment of the present disclosure.
Fig. 8 schematically illustrates an example diagram of a model timing setup interface according to an example embodiment of the present disclosure.
Fig. 9 schematically illustrates an example diagram of a scenario repository, according to an example embodiment of the present disclosure.
Fig. 10 schematically illustrates an example diagram of a scenario of an animation resource library, according to an example embodiment of the present disclosure.
Fig. 11 schematically illustrates an example diagram of the principle of a process of adjusting model attributes according to an example embodiment of the present disclosure.
Fig. 12 schematically illustrates an example diagram of an animation property adjustment interface, according to an example embodiment of the disclosure.
Fig. 13 schematically illustrates an example diagram of an animation property adjustment interface, according to an example embodiment of the disclosure.
Fig. 14 schematically illustrates an example diagram of a material property adjustment interface, according to an example embodiment of the present disclosure.
Fig. 15 schematically illustrates an example diagram of a sound attribute setting interface according to an example embodiment of the present disclosure.
Fig. 16 schematically illustrates an example diagram of a virtual lens group according to an example embodiment of the present disclosure.
Fig. 17 schematically illustrates an example diagram of a setting interface for lens parameters according to an example embodiment of the present disclosure.
Fig. 18 schematically illustrates an example diagram of the principle of the field angle of a virtual lens according to an example embodiment of the present disclosure.
Fig. 19 schematically illustrates a flowchart of a method for generating naked eye 3D resources corresponding to a target interaction model based on the target resource scene, the virtual lens group, and the zero point position according to an example embodiment of the present disclosure.
FIG. 20 schematically illustrates an example diagram of a scene resulting from an adjustment of a target model position of a target interaction model, according to an example embodiment of the present disclosure.
FIG. 21 schematically illustrates an example diagram of a resource publishing interface according to an example embodiment of the disclosure.
Fig. 22 schematically illustrates an example diagram of a resource parameter adjustment interface corresponding to a video resource category and/or a sequential frame resource category in accordance with an example embodiment of the present disclosure.
Fig. 23 schematically illustrates an example diagram of a multi-view video picture according to an example embodiment of the present disclosure.
Fig. 24 schematically illustrates an example diagram of a composite screen according to an example embodiment of the present disclosure.
Fig. 25 schematically illustrates an example diagram of the principle of a left-hand rectangular coordinate system according to an example embodiment of the present disclosure.
Fig. 26 schematically illustrates an example diagram of a scene obtained by controlling a single part of the target interaction model to move in its own forward direction according to an example embodiment of the present disclosure.
FIG. 27 schematically illustrates an example diagram of a scene obtained by controlling a single part of the target interaction model to unfold to both sides along a fixed axis according to an exemplary embodiment of the present disclosure.
Fig. 28 schematically illustrates an example diagram of 21 3D keypoints of a hand according to an example embodiment of the disclosure.
Fig. 29 schematically illustrates a block diagram of a processing apparatus of naked eye 3D resources according to an example embodiment of the present disclosure.
Fig. 30 schematically illustrates an electronic device for implementing a processing method of the naked eye 3D resource according to an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the exemplary embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present disclosure. One skilled in the relevant art will recognize, however, that the aspects of the disclosure may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus a repetitive description thereof will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software or in one or more hardware modules or integrated circuits or in different networks and/or processor devices and/or microcontroller devices.
In the multi-viewpoint naked eye 3D industry, generating corresponding naked eye 3D resources currently requires professional software developers and custom development. In practice, to generate a naked eye 3D video or a naked eye 3D sequence of frames, professional artists must build models and animations in modeling software (such as 3DMax) and then render the output video. The production cycle of a single video obtained in this way is about one month, and a naked eye 3D program additionally requires artists and programmers to cooperate during generation in order to develop the content and tune the effect, so it takes even longer than producing a video.
Based on this, the example embodiments of the disclosure provide a method for processing naked eye 3D resources, in which a model library, a scene library, an animation library, a light library, a material library, a sound library, and a naked eye 3D imaging and effect adjusting system may be built into a resource editor, so as to support a user in importing a 3D model in any format; through simple editing, the user's naked eye 3D content can be output, which alleviates the difficulty of producing multi-viewpoint naked eye 3D content and the lack of content in the industry.
In an example embodiment, the present disclosure first provides a method for generating naked eye 3D resources, where the method may run on the terminal device, server cluster, or cloud server on which the resource editor is located; of course, a person skilled in the art may also run the method of the present disclosure on other platforms according to requirements, which is not particularly limited in the present exemplary embodiment. Specifically, referring to fig. 1, the method for generating naked eye 3D resources may include the following steps:
S110, creating a newly-added generation scene corresponding to a target interaction model, and loading the target interaction model in the newly-added generation scene to obtain an original resource scene;
S120, calling a preset resource library to adjust the original resource scene to obtain a target resource scene, and determining the viewpoint number of the target resource scene according to the target equipment parameters in the newly-added generation scene;
S130, configuring lens parameters of a virtual lens group corresponding to the target resource scene according to the viewpoint number, and configuring the virtual lens group according to the lens parameters;
and S140, generating naked eye 3D resources corresponding to the target interaction model based on the target resource scene, the virtual lens group and the zero point position.
According to the naked eye 3D resource processing method described above, on one hand, a newly added generation scene corresponding to a target interaction model is created, the target interaction model is loaded into the newly added generation scene to obtain an original resource scene, a preset resource library is then called to adjust the original resource scene to obtain a target resource scene, the viewpoint number of the target resource scene is determined according to the target equipment parameters in the newly added generation scene, a virtual lens group corresponding to the target resource scene is then configured according to the viewpoint number and the zero point position of the target resource scene is determined, and finally naked eye 3D resources corresponding to the target interaction model are generated based on the target resource scene, the virtual lens group and the zero point position. Automatic generation of naked eye 3D resources is thereby achieved, which solves the problem in the prior art that the generation efficiency of naked eye 3D resources is low because they have to be obtained through modeling, animation production and animation rendering, and improves the generation efficiency of naked eye 3D resources. On the other hand, because the target interaction model can be loaded into the newly added generation scene to obtain the original resource scene, and the preset resource library can be called to adjust the original resource scene to obtain the target resource scene, the difficulty of producing naked eye 3D resources is reduced. Furthermore, because the viewpoint number of the target resource scene is determined according to the target equipment parameters in the newly added generation scene, the generated naked eye 3D resources match the target display equipment, so that the display effect of the naked eye 3D resources is improved.
The method for processing the naked eye 3D resource according to the exemplary embodiment of the present disclosure will be further explained and illustrated below with reference to the accompanying drawings.
First, the proper nouns involved in the exemplary embodiments of the present disclosure are explained and illustrated.
3DMax, a three-dimensional digital modeling tool, can be used for making a three-dimensional model and an animation.
And C4D, a three-dimensional digital modeling tool, which can be used for making a three-dimensional model and an animation.
Unity3D, a three-dimensional game engine, can be used for developing three-dimensional games and three-dimensional software; it is referred to as Unity or U3D for short.
UE, a three-dimensional game engine, namely the Unreal Engine, which includes two major versions, UE4 and UE5.
A Unity3D scene is a game engine concept; a virtual scene may be regarded as a virtual stage. In practical application, virtual objects can be placed in a virtual scene, and multiple scenes can be loaded at the same time.
An object refers to a virtual object in the game engine; a corresponding virtual model can be produced for each virtual object through modeling software, and objects in the game engine can also include lights, cameras and the like.
A camera refers to a virtual camera in the Unity3D engine; it is a virtual concept, and the picture presented by the software is the virtual scene picture captured by the virtual camera.
Multi-viewpoint naked eye 3D refers to a grating-based naked eye 3D screen on which specific screen content can present a 3D effect through optical action.
The camera system includes 9 virtual lenses (or virtual cameras); the camera system can be formed by arranging 9 equidistant cameras into a group, with the 9 cameras all facing one point, which may be called the zero point; the zero point is the center point of the zero plane. In practical application, because each camera faces the zero point, the angle of each camera adapts automatically. In addition, one camera in the camera system is used for editing, and the 2D mode is normally used when producing content.
A zero plane is set in the three-dimensional space of the software, and the boundary between the out-of-screen and into-screen effects of the multi-viewpoint naked eye 3D screen is determined based on the zero plane.
Out-of-screen and into-screen refer to the feeling, when a person looks at a naked eye 3D screen, that an object comes out of the screen plane or recedes into the screen; in the corresponding three-dimensional space of the software, the effective area between the zero plane and the cameras is the out-of-screen area, and the effective area behind the zero plane is the into-screen area.
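A minimal sketch of this classification, assuming the cameras look along the positive Z axis of the engine's coordinate system (an assumption for illustration; the actual axis convention depends on the engine setup):

```python
def classify_region(point_z: float, camera_z: float, zero_plane_z: float) -> str:
    """Return which naked eye 3D region a depth value falls into, relative to the zero plane."""
    if camera_z < point_z < zero_plane_z:
        return "out-of-screen area"   # between the cameras and the zero plane
    if point_z > zero_plane_z:
        return "into-screen area"     # behind the zero plane
    return "outside the effective area"
```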
The adjustment interface is used to control and adjust the camera parameters, and to save and load them. Naked eye 3D imaging synthesizes one picture from the images of multiple cameras, and the adjustment factors affect the differences between the images of the object seen in those pictures. The larger the differences between the images, the stronger the out-of-screen effect; however, if the differences are too large, blurring and ghosting occur, so the degree of difference between the images needs to be controlled within a certain range.
The editor system for viewing and producing multi-viewpoint naked eye 3D content involved in the disclosed example embodiments is explained and described below. Specifically, the system has built-in core functions such as professional art scene resources, a naked eye 3D image arrangement algorithm, models, animation, and naked eye 3D parameter editing. Both non-professional and professional users can import models and, with simple editing, obtain naked eye 3D content with a good effect. Furthermore, the output naked eye 3D content (or naked eye 3D resource) can be directly connected to a naked eye 3D screen for viewing; naked eye 3D programs and naked eye 3D videos can be output and used on the naked eye 3D screen.
Specifically, referring to fig. 2, the editor system for viewing and producing multi-viewpoint naked eye 3D content may include an engineering management component 210, a model import component 220, a resource library 230, a property editing component 240, a human-computer interaction component 250, a naked eye 3D imaging component 260, a resource preview publishing component 270, and so on. In practical application, the engineering management component can be used for creating a newly added generation scene, the model import component can be used for importing an object model, the resource library can be used for storing various different resources, the property editing component can be used for editing the attributes of the model or the resources, the human-computer interaction component can be used for interacting with the displayed naked eye 3D program, the naked eye 3D imaging component can be used for determining the zero point position, and the resource preview publishing component can be used for previewing and/or publishing the generated naked eye 3D resources.
In an example embodiment, the naked eye 3D imaging component described herein may be composed of the camera system, the adjustment interface and the interleaving algorithm; both the editor system for viewing and producing multi-viewpoint naked eye 3D content and the resource preview publishing component may include a naked eye 3D imaging component, which supports imaging in the naked eye 3D mode and may also include imaging in a 2D mode. Further, the principle of naked eye 3D imaging is that, in the software, there are as many cameras as there are viewpoints. Taking 9 cameras as an example, the 9 cameras can be arranged in a row at equal distances so that each camera "sees" the target object from a certain angle, yielding 9 pictures of the object from 9 different angles; finally, one picture is synthesized through the interleaving algorithm, and this picture can present a 3D effect on a naked eye 3D screen.
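The following is a simplified, generic sub-pixel interleaving sketch that composites N same-sized view images into one picture by assigning each sub-pixel to a view; it is a textbook-style illustration, not the disclosure's actual interleaving algorithm, and the two coefficients are used here only in the hypothetical role of per-row slope and offset.

```python
import numpy as np

def interleave(views: list, interlaced_x: float, interlaced_a: float) -> np.ndarray:
    """Composite N same-sized RGB view images (H x W x 3 uint8 arrays) into one interleaved picture.

    For each sub-pixel, a view index is chosen from its screen coordinates; the two
    coefficients control the slope and offset of the view-assignment pattern.
    """
    n_views = len(views)
    h, w, _ = views[0].shape
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            for c in range(3):  # R, G, B sub-pixels
                view = int(x * 3 + c + y * interlaced_x + interlaced_a) % n_views
                out[y, x, c] = views[view][y, x, c]
    return out
```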
In an example embodiment, in the editor system for viewing and producing multi-viewpoint naked eye 3D content, the content edited by a user can be saved as a project (engineering); that is, naked eye 3D resources can be described by the concept of a project. A project described herein may include the resources imported and used by the user as well as the user's editing data. The project data are saved on a local disk in the form of files, and the saved data and the resources recorded by the project can be used to construct a Unity3D scene; the scene described herein may represent the newly added generation scene, and scenes from the scene library need to be loaded at the same time, so that the effect of loading scene library scenes into the generation scene is achieved. Further, in practical application, the project management described above includes operations such as project creation, project opening, project saving, project deletion and project import, and multiple created projects may be managed in the form of a list; an example of the resulting interface is shown in fig. 3.
In an exemplary embodiment, the project opening described above may be implemented by clicking an element icon in the project file list to open the project; the process of opening a project is to construct a Unity scene, i.e. the generation scene, from the project data and resources. In an example embodiment, project saving is implemented by saving the editing state of the current project; the saved project data and resources include the used scenes, the imported models, the model positions/rotations/scalings, animation, illumination, sound, materials and naked eye 3D related information. In an example embodiment, project deletion is implemented by clicking the delete control at the lower right corner of a project element on the project interface; when a project deletion operation is performed, all data and resources of the project are deleted, and the file recording the project data is deleted as well; an example diagram of a project deletion scene is shown in fig. 4. In an exemplary embodiment, the project import described above may be implemented by clicking the "import project" button on the display interface shown in fig. 3 to import an external project file; after the external project is imported, it is added to the project list of the current software, and the import process completely copies the external project data into the directory in which the current software stores project data.
In an exemplary embodiment, referring to fig. 5, the resource library described herein may include, but is not limited to, a model resource library 501, a scene resource library 502, an animation resource library 503, a light resource library 504, a material resource library 505, and a sound resource library 506. The model resource library includes object models; the scene resource library includes various scene resources such as indoor scenes, outdoor scenes, sunny scenes, rainy scenes and snowy scenes; the animation resource library includes various animation resources such as rotation, scaling and explosion; the light resource library includes light resources with various display effects such as strong light, weak light, point light, warm tones and cool tones; the material resource library includes various materials associated with the object models, which may be determined according to the object models; and the sound resource library includes sound resources for various scenes such as natural sounds, noise, crying, laughter and car sounds, which is not specifically limited herein.
In an example embodiment, the editor system for viewing and producing naked eye 3D content can adopt either an offline stand-alone mode or a CS (Client/Server) mode in its implementation; the two modes mainly differ in whether the resource library is stored locally or on a server. In practical application, regardless of whether the stand-alone mode or the client/server mode is adopted, the editor can be regarded as a client, and the client can be implemented with Unity3D or with the UE engine; the implementation is not limited to Unity3D and UE, and any other technology capable of implementing the system scheme is possible, which is not specifically limited in this example. Furthermore, the editor system for viewing and producing multi-viewpoint naked eye 3D content fills the gap of a 3D content production editor in the naked eye industry, greatly reduces the difficulty of producing naked eye 3D content, improves production efficiency, and can alleviate the industry's shortage of naked eye 3D content.
Further, application scenarios of the exemplary embodiments of the present disclosure are explained and illustrated. Specifically, the processing method of the naked eye 3D resource, which is recorded in the example embodiment of the disclosure, can be applied to the scenes of three-dimensional twin simulation of data of a traffic simulator, three-dimensional visual simulation of traffic data and timing scheme verification of traffic lights. In an example embodiment, if the scheme is applied to a three-dimensional twin simulation scene of data of a traffic simulator, a naked eye 3D resource of the traffic simulator corresponding to the three-dimensional twin simulation scene of the traffic simulator can be generated based on the processing method of the naked eye 3D resource, and the naked eye 3D resource of the traffic simulator is displayed so as to perform simulation test on an actual traffic scene, and the actual traffic resource is allocated based on the simulation test result. In another example embodiment, if the scheme is applied to a three-dimensional visual simulation scene of traffic data, a traffic data naked eye 3D resource corresponding to the three-dimensional visual simulation scene of traffic data can be generated based on the processing method of the naked eye 3D resource, and the traffic data naked eye 3D resource is displayed so as to perform simulation analysis on actual traffic data, and traffic anomalies occurring in the actual traffic data are analyzed based on the simulation analysis result. In another example embodiment, if the scheme is applied to the scene of timing scheme verification of the traffic light, the naked eye 3D resource of the traffic light corresponding to the scene of timing scheme verification of the traffic light can be generated based on the processing method of the naked eye 3D resource, and the naked eye 3D resource of the traffic light is displayed, so that simulation analysis is performed on the actual timing scheme of the traffic light, and further adjustment is performed on the actual timing scheme of the traffic light based on the simulation analysis result.
The method for processing naked eye 3D resources shown in fig. 1 will be further explained and described below with reference to fig. 2 and fig. 5. Specifically:
In step S110, a new generation scene corresponding to the target interaction model is created, and the target interaction model is loaded in the new generation scene, so as to obtain an original resource scene.
In the embodiment of the present disclosure, a newly added generation scene corresponding to a target interaction model is created first. Specifically, creating the newly added generation scene corresponding to the target interaction model may be implemented as follows: first, an engineering creation sub-interface is displayed in response to a touch operation on a first preset interactable control on the display interface of the resource creation terminal; second, target equipment parameters of the target display equipment used for displaying the naked eye 3D resources corresponding to the target interaction model, as well as naked eye 3D interleaving parameters, are determined in response to an input operation on the engineering creation sub-interface; then, the newly added generation scene corresponding to the target interaction model is created in response to a touch operation on a second preset interactable control in the engineering creation sub-interface. In practical application, to generate a newly added generation scene, the interactable control for creating a project shown in fig. 3 (i.e. the first preset interactable control) is touched, and the engineering creation sub-interface shown in fig. 6 is displayed; the target equipment parameters of the target display equipment and the naked eye 3D interleaving parameters are then input in the engineering creation sub-interface. The target equipment parameters can be selected directly in the editor or set in a user-defined manner; the naked eye 3D interleaving parameters may include interlacedX and interlacedA, and may be set to the interleaving parameters associated with the target equipment parameters or set in a user-defined manner, which is not specifically limited in this example. In practical application, the default parameters of the target display equipment are configurations built into the software per device, and a user cannot modify the default parameters of the corresponding equipment; the naked eye 3D interleaving parameters are parameters applicable to the naked eye 3D interleaving module and the naked eye imaging module, and can also be set in a user-defined manner. After the parameters are confirmed through the second preset interactable control in the engineering creation sub-interface, the newly added generation scene corresponding to the target interaction model is created, and the target interaction model can then be loaded into the newly added generation scene.
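As a data-shape illustration only, the parameters gathered by the engineering creation sub-interface could be captured by a structure like the following; the field names (including interlaced_x and interlaced_a) merely mirror the parameters mentioned above and are not the editor's real schema, and the example values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class InterleavingParams:
    interlaced_x: float   # naked eye 3D interleaving parameter
    interlaced_a: float   # naked eye 3D interleaving parameter

@dataclass
class ProjectConfig:
    project_name: str
    target_device: str            # selected from built-in device presets or user defined
    device_resolution: tuple      # target equipment parameter
    interleaving: InterleavingParams

def create_generation_scene(config: ProjectConfig) -> dict:
    """Stand-in for creating the newly added generation scene once the sub-interface is confirmed."""
    return {"config": config, "loaded_models": [], "scene_objects": []}

config = ProjectConfig("demo", "lenticular-27inch-landscape", (3840, 2160),
                       InterleavingParams(interlaced_x=0.42, interlaced_a=3.0))
scene = create_generation_scene(config)
```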
The method comprises the steps of obtaining the target interaction model from a preset model library according to the target model name of the target interaction model, or importing the target interaction model from an external file according to the target model name of the target interaction model, and carrying out self-adaptive adjustment on the model size of the target interaction model in the newly added generation scene to obtain the original resource scene. In the practical application process, the target interaction model can be loaded through external import or obtained directly from the model library; the data format of the target interaction model can include, but is not limited to, common formats such as fbx, gltf, obj, 3mf, ply and stl, and the AssetBundle resource format of Unity can also be supported.
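To make the loading path concrete, the following is a minimal Unity C# sketch of how a target interaction model might be fetched by name from a locally packaged model library (an AssetBundle file); the bundle path and model name are illustrative assumptions, and runtime import of external fbx/gltf files would additionally rely on an importer plugin that is not shown here.

```csharp
using UnityEngine;

public static class ModelLoader
{
    // Loads a model by name from a local AssetBundle file (hypothetical path layout).
    public static GameObject LoadFromModelLibrary(string bundlePath, string targetModelName)
    {
        // AssetBundle.LoadFromFile is the standard Unity API for local bundles.
        AssetBundle bundle = AssetBundle.LoadFromFile(bundlePath);
        if (bundle == null)
        {
            Debug.LogError("Failed to load AssetBundle: " + bundlePath);
            return null;
        }

        // Look up the prefab by the target model name and instantiate it in the scene.
        GameObject prefab = bundle.LoadAsset<GameObject>(targetModelName);
        return prefab != null ? Object.Instantiate(prefab) : null;
    }
}
```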
In an example embodiment, after a target interaction model is loaded into a newly-added generation scene, the model size is required to be adaptively adjusted, specifically, the model size of the target interaction model in the newly-added generation scene is adaptively adjusted to obtain the original resource scene, the method can be achieved by firstly constructing a first rectangle according to the newly-added generation scene and constructing a second rectangle according to the target interaction model, secondly, calculating a model scaling coefficient of the target interaction model in the newly-added generation scene according to the first rectangle and the second rectangle, and then, adaptively adjusting the model size of the target interaction model based on the model scaling coefficient to obtain the original resource scene.
In an example embodiment, the construction of the first rectangle according to the newly generated scene may be achieved by displaying the newly generated scene on a display interface of a resource creation terminal, taking a center point of the display interface as a center point of the first rectangle, determining a first rectangle length and a first rectangle width of the first rectangle according to an interface length and an interface width occupied by the newly generated scene on the display interface, and constructing the first rectangle according to the center point of the first rectangle, the first rectangle length and the first rectangle width.
In an example embodiment, the second rectangle is constructed according to the target interaction model by acquiring pixel coordinates of a pixel point in the target interaction model, acquiring a maximum abscissa value, a maximum ordinate value, a minimum abscissa value and a minimum ordinate value in the pixel point coordinates, determining a second rectangle height according to the maximum abscissa value and the minimum abscissa value, determining a second rectangle length according to the maximum ordinate value and the minimum ordinate value, and constructing the second rectangle by taking a center point of the target interaction model as a center point of the second rectangle and according to the center point of the second rectangle, the second rectangle height and the second rectangle length.
In an example embodiment, calculating a model scaling factor of the target interaction model in the newly generated scene according to the first rectangle and the second rectangle may be achieved by mapping the first rectangle to three-dimensional coordinates of a resource generating engine to obtain a rectangle mapping result, calculating a first ratio between a rectangle mapping height and a second rectangle height in the rectangle mapping result, calculating a second ratio between a rectangle mapping length and a second rectangle length in the rectangle mapping result, and determining a model scaling factor of the target interaction model in the newly generated scene based on the first ratio and the second ratio.
The adaptive adjustment process of the model size will be further explained and described below. Specifically, in the process of adaptively adjusting the model size of the target interaction model, the adjustment can be realized through a scaling automatic matching mechanism, where the scaling automatic matching mechanism described herein means that the target interaction model can be adaptively adjusted through an automatically set model scaling coefficient in the process of importing or loading, so that the loaded or imported target interaction model is displayed in the optimal range of the system software interface, namely the optimal visual range of the camera.
Furthermore, during the self-adaptive adjustment, the zero point (0, 0) position of the display interface can be used as the default import position of the model; meanwhile, since the zero point position is already determined, only the model scaling factor needs to be determined, and the model size is scaled based on the model scaling factor so that the display size of the target interaction model in the display interface is more suitable. The specific calculation process of the model scaling factor can be realized by firstly determining a rectangle with a given length and width and centered on the screen center point (the first rectangle), secondly calculating the length w1 and the height h1 of the mapping rectangle (the rectangle mapping result) of the first rectangle in the world coordinate space at the zero point (0, 0), then calculating the leftmost, rightmost, uppermost and lowermost boundaries of the newly imported model to obtain a rectangle with length w2 and height h2 (the second rectangle), further calculating w1/w2 and h1/h2 respectively to obtain two factors and taking the smaller of the two, denoted Scalemin, and finally obtaining the final model scaling factor as the coefficient applied at model import multiplied by Scalemin.
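As a minimal Unity C# sketch of this scale auto-matching, assuming the first rectangle is specified in screen pixels, the zero point sits at the world origin, and the model's extent (w2, h2) is taken from its combined renderer bounds; the mapping depth and the import coefficient are illustrative assumptions.

```csharp
using UnityEngine;

public static class ScaleAutoMatch
{
    // Computes and applies the model scaling factor described above.
    // screenRectSize: size of the first rectangle in pixels, centred on the screen.
    // importCoefficient: extra coefficient applied at import time (illustrative).
    public static void FitToView(Camera cam, GameObject model,
                                 Vector2 screenRectSize, float importCoefficient = 1f)
    {
        // Distance from the camera to the zero point, assumed to be the world origin.
        float depth = Vector3.Distance(cam.transform.position, Vector3.zero);

        // Map the first rectangle (screen space) into world space -> w1, h1.
        Vector2 center = new Vector2(Screen.width * 0.5f, Screen.height * 0.5f);
        Vector3 min = cam.ScreenToWorldPoint(new Vector3(center.x - screenRectSize.x * 0.5f,
                                                         center.y - screenRectSize.y * 0.5f, depth));
        Vector3 max = cam.ScreenToWorldPoint(new Vector3(center.x + screenRectSize.x * 0.5f,
                                                         center.y + screenRectSize.y * 0.5f, depth));
        float w1 = Mathf.Abs(max.x - min.x);
        float h1 = Mathf.Abs(max.y - min.y);

        // Second rectangle from the model's combined renderer bounds -> w2, h2.
        Bounds bounds = new Bounds(model.transform.position, Vector3.zero);
        foreach (Renderer r in model.GetComponentsInChildren<Renderer>())
            bounds.Encapsulate(r.bounds);
        float w2 = bounds.size.x;
        float h2 = bounds.size.y;
        if (w2 <= 0f || h2 <= 0f) return;   // nothing to scale against

        // Take the smaller of the two ratios and apply the import-time coefficient.
        float scaleMin = Mathf.Min(w1 / w2, h1 / h2);
        model.transform.localScale *= importCoefficient * scaleMin;
    }
}
```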
In an example embodiment, the editor system for viewing and making multi-viewpoint naked eye 3D content described in the example embodiment of the present disclosure may also support the display of multiple models. Multiple models here means that several models may exist in one scene while, for simplicity, only one model appears on the screen at the same time; when a model is already on the current screen, importing another model creates a new label for it, and up to 10 models can be supported. Specifically, in the actual application process, this can be realized by responding to a model loading operation, loading a new virtual model in the original resource scene and/or importing the new virtual model into the original resource scene, generating a model label corresponding to the new virtual model, and displaying the model label in the original resource scene. That is, in the actual application process, if other models (such as a new virtual model) are introduced in addition to the target interaction model, a corresponding model label may be generated according to the model name of each model and displayed in a list, and the resulting scene example diagram may be shown in fig. 7.
In an example embodiment, the models can be switched and displayed in a multi-model display scene, and specifically, the method can be realized by responding to touch operation of the model labels, displaying new virtual models corresponding to the model labels in the original resource scene, and switching target interaction models in the original resource scene based on the new virtual models. In the practical application process, if the model is required to be displayed in a switching mode, the corresponding model label can be clicked to realize the switching view of the model, and meanwhile, if a cursor is moved to the corresponding model label, the thumbnail of the model corresponding to the model label can be popped up at the corresponding position.
In an example embodiment, in the context of multi-model presentation, the presentation order and presentation timing of the models may also be adjusted. The method comprises the steps of responding to a touch operation of the model labels, adjusting the display order of the model labels in the original resource scene, displaying a model time sequence setting interface of the newly added virtual model and/or the target interaction model, and responding to an input operation of the model time sequence setting interface, determining the model display duration of the newly added virtual model and/or the target interaction model. In the practical application process, if the display order of a model needs to be adjusted, the model label of the model can be dragged directly to the corresponding display position; further, if the display time sequence of a model needs to be adjusted, this can be realized through the model time sequence setting interface.
In an example embodiment, the above-described model presentation time sequence refers to the order in which model resources are presented and the duration of a single model presentation during previewing and publishing. Meanwhile, for the published program, the presentation time sequence is an automatic presentation sequence that runs without operation, and the timed presentation is paused when there is an interactive operation. In the practical application process, there are two cases: when a model carries an animation of fixed duration, the presentation duration follows that animation; when a model has no fixed-duration animation (such as a rotating animation), the default presentation time is 3 seconds. Further, in the process of setting the presentation time sequence of multiple models, the model time sequence can be set under the preview interface when the user is not satisfied with the presentation order and presentation time, and the specific model time sequence setting interface may be shown in fig. 8. Furthermore, after the model time sequence setting interface is opened, the information of all models imported in the current engineering can be displayed automatically, and each model corresponds to a piece of display time sequence information. When time sequence adjustment is needed, the display timing can be adjusted by dragging the display time sequence information corresponding to the model, and the display duration can be adjusted by clicking the "-" and "+" symbols in the display time sequence information. It should be supplemented that the minimum display time of a model is 1 second, and for a model with a fixed-duration animation, the display time cannot be modified.
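The following is a minimal Unity C# sketch of such an automatic presentation time sequence, assuming a simple list of imported models with per-model durations (a fixed-duration animation would supply its clip length, otherwise the default of 3 seconds applies, with a 1-second minimum); the class and field names are illustrative.

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ModelTimelinePlayer : MonoBehaviour
{
    // One entry per imported model: the model object and its display duration in seconds.
    // For models with a fixed-length animation the duration equals the clip length;
    // otherwise it defaults to 3 seconds, as described above.
    [System.Serializable]
    public class Entry
    {
        public GameObject model;
        public float displaySeconds = 3f;
    }

    public List<Entry> timeline = new List<Entry>();

    private IEnumerator Start()
    {
        while (true)
        {
            foreach (Entry entry in timeline)
            {
                // Show only the current model.
                foreach (Entry other in timeline)
                    other.model.SetActive(other == entry);

                // Enforce the 1-second minimum display time.
                yield return new WaitForSeconds(Mathf.Max(1f, entry.displaySeconds));
            }
        }
    }
}
```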
In step S120, a preset resource library is called to adjust the original resource scene, so as to obtain a target resource scene, and the viewpoint number of the target resource scene is determined according to the target equipment parameters in the newly-added generation scene.
In this example embodiment, a preset resource library is called to adjust an original resource scene to obtain a target resource scene. The contents in the preset resource library described herein may include, but are not limited to, data information of resources and resource files, where the data information of a resource refers to information recording the specification of the resource; taking a scene resource as an example, its data information may include the English name, Chinese name, resource loading address, resource introduction picture address and the like of the scene resource. For the offline client of the system, the resources and data are stored locally; for the online client of the system, the resources and data are stored on a server and need to be acquired and downloaded.
Further, the preset resource library described herein may include, but is not limited to, a scene resource library, a model resource library, an animation resource library, a material resource library, a light resource library, a sound resource library, and the like. The scene resource library described herein may be used to represent a three-dimensional scene library produced through art design; the produced scenes are packaged using the AssetBundle of Unity3D and then loaded into the system software to obtain the scene resource library, and a specific scene example diagram of the scene resource library may be shown in fig. 9. The animations in the animation library described herein may be implemented in two ways: one is an animation implemented with a Unity3D program, such as rotation or floating up and down, which is added directly onto the model object and then acts on the model; the other is a key frame animation produced with an art tool (such as 3DMax), recorded as a track of movement, rotation and scaling of an object, where the animation object is exported separately when exporting from the art tool and the model object is then mounted under the animation object in the client as a child object of the animation object; a scene diagram of an animation resource may be shown in fig. 10. The lights in the light resource library can be realized based on the lights in the Unity3D engine, and the light resource library can include the basic parallel light, point light source and spotlight of the Unity3D engine, as well as combined lights of point light sources and spotlights, where a combined light may be a new light source consisting of point light sources at two positions with different angles. The material resource library described herein stores materials, that is, the code controlling how a 3D object is rendered; a material in the material library may correspond to objects in certain scenes, such as glass, brushed metal, frosted metal or diamond, and different models may correspond to different materials in the practical application process, which is not specially limited in this example. During model import or loading, model information is automatically identified by default, and the default material and the map and color information carried by the model itself are used; of course, if the user is not satisfied with the default effect, a corresponding material provided in the material resource library may be used. To use the material library, a material icon is selected in the material library and dragged onto the corresponding position of the corresponding model; when the material icon is released, the dragged material replaces the original material at the current position of the model, while the map and color at that position still use the original map and color. The sound resource library described herein can be used to set background music for the edited content and can provide audio data in audio formats such as MP3.
On the premise of the above, a preset resource library is called to adjust the original resource scene to obtain the target resource scene. This can be realized by loading an original three-dimensional scene corresponding to the target interaction model from the scene library and adding the original three-dimensional scene to the newly added generation scene, and/or loading an original three-dimensional animation from the animation library and applying the original three-dimensional animation to the target interaction model, and/or loading an original light from the light library and adding the original light to the newly added generation scene, where the original light described herein can include, but is not limited to, parallel light, a point light source, a spotlight, a combined light formed by a point light source and a spotlight, and the like, and/or loading model materials corresponding to the target interaction model from the material library and applying the model materials to the target interaction model, and/or loading audio data corresponding to the target interaction model from the sound library and adding the audio data to the newly added generation scene, and/or adjusting the model attribute and/or animation attribute and/or light attribute and/or material attribute and/or sound attribute of the target interaction model in the newly added generation scene.
Hereinafter, the specific adjustment process of the original resource scene will be further explained and described.
In an example embodiment, loading the original three-dimensional scene corresponding to the target interaction model from the scene library and adding the original three-dimensional scene to the newly generated scene may be achieved by first determining a scene name of the original three-dimensional scene corresponding to the target interaction model, for example, determining that the scene name of the corresponding original three-dimensional scene is an indoor decoration scene when the target interaction model is a bag, then loading the original three-dimensional scene from the scene library (i.e., the scene resource library) based on the scene name, and adding the original three-dimensional scene as a background to the newly generated scene to set off the bag in the scene, thereby facilitating a better presentation.
In an example embodiment, the original three-dimensional animation is loaded from the animation library and applied to the target interaction model. This can be realized by first determining the original three-dimensional animation to be loaded corresponding to the target interaction model and then loading the original three-dimensional animation from the animation library (namely, the animation resource library), where the original three-dimensional animation can include a program animation, a key frame animation and the like; the program animation can also be called a Unity program animation, whose animation attributes can include the period duration of the animation and the animation amplitude; the key frame animation can also be called an art key frame animation, which can include offset attributes for up, down, left, right, front and rear, and the offset attributes can be used to represent the offset of the whole animation track. In this scenario, the process of applying the original three-dimensional animation to the target interaction model may be accomplished by adding the program animation to the target interaction model and/or mounting the target interaction model under an animation object in the key frame animation to take the target interaction model as a child of the animation object. In other words, for the program animation, a rotating program or an up-and-down floating program corresponding to the target interaction model can be added directly onto the model object to act on the target interaction model; for the key frame animation, the target interaction model can be mounted directly under an animation object in the key frame animation as a sub-object of that animation object, so that the operations of moving, rotating or zooming the target interaction model are realized.
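To illustrate the two binding modes, the following is a minimal Unity C# sketch: a program animation (a simple rotation behaviour) added directly onto the model object, and a key frame animation binding that mounts the model under the exported animation object as its child; the component and method names are illustrative.

```csharp
using UnityEngine;

// Minimal program animation: constant rotation added directly onto the model object.
public class RotateAnimation : MonoBehaviour
{
    public float degreesPerSecond = 30f;   // animation amplitude / speed (illustrative)

    private void Update()
    {
        transform.Rotate(0f, degreesPerSecond * Time.deltaTime, 0f, Space.World);
    }
}

public static class AnimationBinding
{
    // Program animation: attach the rotation behaviour to the model itself.
    public static void ApplyProgramAnimation(GameObject targetModel)
    {
        targetModel.AddComponent<RotateAnimation>();
    }

    // Key frame animation: mount the model under the exported animation object
    // so the model follows the object's move / rotate / scale track as its child.
    public static void ApplyKeyframeAnimation(GameObject targetModel, GameObject animationObject)
    {
        targetModel.transform.SetParent(animationObject.transform, false);
    }
}
```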
In an example embodiment, loading model materials corresponding to the target interaction model from the material library and acting the model materials on the target interaction model may be achieved by first determining a material name of the model materials corresponding to the target interaction model, loading the model materials from the material library (i.e., the material resource library), and acting the model materials on the target interaction model, where in acting the model materials on the target interaction model, it may be achieved by replacing original materials in the target interaction model based on the model materials in response to dragging the model materials onto the target interaction model. That is, after the model material is determined, the model material can be directly dragged to the corresponding position of the target interaction model.
In an example embodiment, the adjustment of the model attribute and/or animation attribute and/or light attribute and/or material attribute and/or sound attribute of the target interaction model in the newly generated scene may be achieved by adjusting the model attribute of the target interaction model in the newly generated scene, where the model attribute may include a structure level attribute and a position attribute, adjusting the animation attribute of the target interaction model in the newly generated scene, adjusting the light attribute of the target interaction model in the newly generated scene, adjusting the material attribute of the target interaction model in the newly generated scene, and adjusting the sound attribute of the target interaction model in the newly generated scene, where the material attribute may include, but is not limited to, a model color, a texture map, a normal map, transparency, glossiness, refraction, and the like. Specifically:
On the one hand, the model attribute of the target interaction model in the newly added generation scene is adjusted by responding to a touch operation of a model attribute interaction control, displaying a model adjustment interface of the model attribute of the target interaction model, and responding to an input operation of the model adjustment interface, adjusting the attribute value of the structure level attribute and/or the attribute value of the position attribute of the target interaction model. Another implementation is to adjust the current model position of the target interaction model in the newly generated scene and/or rotate the target interaction model in response to a movement event acting on the target interaction model. That is, in the actual application process, if the target interaction model is one whose position can be set, when the target interaction model is loaded into the scene the system interface can display the position attribute parameters corresponding to the target interaction model, and the user can modify these position attribute parameters directly. In the practical application process, the position attribute of the model can also support shortcut modification: for example, pressing the left mouse button and moving the mouse moves the target interaction model up, down, left and right; pressing the right mouse button and moving the mouse rotates the target interaction model; sliding the mouse wheel scales the target interaction model; and pressing Ctrl while sliding the mouse wheel moves the target interaction model back and forth. An exemplary diagram of the specific implementation principle of the adjustment process of the model attribute may be referred to in fig. 11. It should be noted that, in the process of adjusting the target interaction model, if the user is not satisfied with the adjustment result of the target interaction model, the attribute reset function control in the display interface may be used to perform a reset operation, so as to control the target interaction model to recover from the adjusted state to the state when the target interaction model was imported or loaded.
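A minimal Unity C# sketch of these shortcut operations, assuming the legacy Input manager and that the behaviour is attached to the target interaction model; the speed factors are illustrative.

```csharp
using UnityEngine;

// Shortcut editing of the model position attribute, following the mouse mapping described above.
public class ModelShortcutController : MonoBehaviour
{
    public float moveSpeed = 0.01f;
    public float rotateSpeed = 5f;
    public float zoomSpeed = 0.1f;

    private void Update()
    {
        float dx = Input.GetAxis("Mouse X");
        float dy = Input.GetAxis("Mouse Y");
        float wheel = Input.GetAxis("Mouse ScrollWheel");
        bool ctrl = Input.GetKey(KeyCode.LeftControl) || Input.GetKey(KeyCode.RightControl);

        if (Input.GetMouseButton(0))          // left button drag: move up / down / left / right
            transform.Translate(dx * moveSpeed, dy * moveSpeed, 0f, Space.World);
        else if (Input.GetMouseButton(1))     // right button drag: rotate the model
            transform.Rotate(dy * rotateSpeed, -dx * rotateSpeed, 0f, Space.World);

        if (ctrl)                             // Ctrl + wheel: move the model back and forth
            transform.Translate(0f, 0f, wheel, Space.World);
        else if (Mathf.Abs(wheel) > 0f)       // wheel alone: scale the model
            transform.localScale *= 1f + wheel * zoomSpeed;
    }
}
```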
On the other hand, the adjustment of the animation attribute of the target interactive model in the newly generated scene can be realized by responding to the touch operation of the animation setting interactive control, displaying an animation adjustment interface corresponding to the animation attribute, responding to the input operation of the animation adjustment interface, adjusting the animation period time length and/or the animation amplitude in the animation attribute, and/or adjusting the offset of the animation track in the animation attribute in the newly generated scene. In the practical application process, when the animation attribute adjustment is needed, the animation setting control can be touched to display the animation attribute adjustment interface, wherein the animation attribute adjustment interface can be specifically shown with reference to fig. 12, and further, based on the animation attribute adjustment interface shown in fig. 12, the up-down offset position, the front-back offset position and the animation running speed of the target interaction model can be directly adjusted.
In still another aspect, the adjustment of the light attribute of the target interaction model in the newly generated scene can be achieved by displaying a light adjustment interface corresponding to the light attribute in response to a touch operation of a light setting interaction control, and adjusting the light position and/or light intensity and/or light color of the parallel light and/or point light source and/or spotlight and/or combined light in the newly generated scene in response to an input operation of the light adjustment interface. The light attribute may include a light position attribute and a lighting attribute, where the lighting attribute may include light intensity, light color and the like. Therefore, if the light attribute of the target interaction model needs to be adjusted, the light setting control may be touched to display the light attribute adjustment interface, which may be shown specifically with reference to fig. 13; further, based on the light attribute adjustment interface shown in fig. 13, the light intensity of the parallel light, point light source, spotlight and combined light can be adjusted directly, the light color of the parallel light, point light source, spotlight and combined light can be adjusted, and the light positions and light angles of the parallel light, point light source, spotlight and combined light can also be adjusted.
In still another aspect, the adjustment of the material properties of the target interaction model in the newly generated scene may be achieved by displaying a material adjustment interface corresponding to the material properties in response to a touch operation of a material setting interaction control, and adjusting the model color and/or texture map and/or normal map and/or transparency and/or glossiness and/or refraction of the target interaction model in response to an input operation of the material adjustment interface. In the practical application process, the material properties can include parameters such as basic color, main texture, normal map and transparency, as well as parameters such as glossiness and refraction. Therefore, if the material properties of the target interaction model need to be adjusted, the material setting control can be touched to display the material property adjustment interface, which may be shown specifically with reference to fig. 14; further, based on the material property adjustment interface shown in fig. 14, the main texture, the metallic luster map and the normal map of the target interaction model can be adjusted directly, and if the metallic luster of the target interaction model needs to be adjusted, the metallic luster and smoothness of the target interaction model can be adjusted directly. Meanwhile, if the target interaction model is a strawberry model, the model color, the main texture map and the normal map can be adjusted through the material property adjustment interface, for example, the surface detail of the strawberry model can be adjusted based on the normal map.
Further, the adjustment of the sound attribute of the target interaction model in the newly generated scene can be achieved by responding to the touch operation of the sound setting interaction control, displaying a sound adjustment interface corresponding to the sound attribute, and responding to the input operation of the sound adjustment interface, and adjusting the volume of the audio data. In other words, if the sound attribute of the target interaction model needs to be adjusted in the actual application process, the sound setting control may be clicked to display the sound attribute setting interface, where the sound attribute setting interface may be specifically shown in fig. 15, and meanwhile, it may be known based on the sound attribute setting interface shown in fig. 15 that the play mode and the volume of the audio data may be adjusted directly based on the sound attribute setting interface.
Secondly, the viewpoint number of the target resource scene is determined according to the target equipment parameters in the newly added generation scene. Specifically, this can be realized by firstly determining the equipment attribute information of the target display equipment corresponding to the target equipment parameters according to the target equipment parameters in the newly added generation scene, and secondly determining the number of viewpoints required by the target display equipment when displaying the target resource scene according to the equipment attribute information. In the practical application process, different types of target display devices can support different viewpoint numbers; for example, some types of display devices can support 2 viewpoints, some can support 9 viewpoints, and some can support 18 viewpoints, and the corresponding viewpoint number can be determined directly according to the device attribute information of the target display device.
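A minimal sketch of this lookup, assuming the device attribute information reduces to a device type identifier mapped to its supported viewpoint count; the identifiers and the fallback value are hypothetical.

```csharp
using System.Collections.Generic;

public static class ViewpointLookup
{
    // Hypothetical mapping from a device type identifier to its viewpoint count.
    private static readonly Dictionary<string, int> ViewpointsByDevice = new Dictionary<string, int>
    {
        { "deviceType_2view",  2 },
        { "deviceType_9view",  9 },
        { "deviceType_18view", 18 },
    };

    public static int GetViewpointCount(string deviceType)
    {
        // Fall back to 9 viewpoints when the device type is unknown (illustrative choice).
        return ViewpointsByDevice.TryGetValue(deviceType, out int count) ? count : 9;
    }
}
```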
In step S130, lens parameters of a virtual lens group corresponding to the target resource scene are configured according to the viewpoint number, and the virtual lens group is configured according to the lens parameters.
In the present exemplary embodiment, first, the lens parameters of the virtual lens group corresponding to the target resource scene are configured according to the number of viewpoints. The virtual lens group described herein is composed of a plurality of virtual lenses, which may also be referred to as virtual cameras; in the processing method of naked eye 3D resources described in the present exemplary embodiment, the number of virtual lenses included in the virtual lens group is determined according to the number of viewpoints. For example, when the number of viewpoints is 3, the virtual lens group may be composed of 3 virtual lenses, and when the number of viewpoints is 9, the virtual lens group may be composed of 9 virtual lenses. Further, the lens parameters of the virtual lens group described herein include not only the number of lenses in the virtual lens group, but also the lens pitch of each virtual lens in the virtual lens group, the distance difference between the virtual lens group and the zero plane, and the like.
It should be noted that, taking a viewpoint number of 9 as an example, the lens distances between the 9 virtual lenses in the virtual lens group and the distance difference between the virtual lens group and the zero plane are all adjusted in advance. Of course, in order to obtain the zero plane, the zero point position needs to be determined first, and in order to obtain the zero point position of the target resource scene, the zero point needs to be determined first. The zero point of the target resource scene described herein is the intersection point of the lens orientations of the virtual lenses in the virtual lens group; meanwhile, the zero point position may be determined by the original coordinate position (0, 0) where the virtual engine is located, or may be determined based on the intersection point of the lens orientations of the virtual lenses. Based on this, the specific determination process of the zero point position can be realized in two ways. In the first implementation, the original coordinate position of the three-dimensional coordinates where the resource generating engine is located is determined, the original zero point position is determined according to the original coordinate position, and the original zero point position is adjusted to obtain the zero point position of the target resource scene, where the adjustment of the original zero point position may include, but is not limited to, back-and-forth movement, left-and-right movement, up-and-down movement, and the like. In the second implementation, the lens orientation intersection point of the virtual lenses is calculated according to the lens orientations of the virtual lenses in the virtual lens group, and the zero point position of the target resource scene is determined according to the lens orientation intersection point. That is, since the zero point position is the intersection position of the lens orientations, the intersection of the orientations of the respective virtual lenses can be calculated directly based on the orientations of the virtual lenses in the virtual lens group, and the zero point position can be obtained based on the position of that intersection point. Of course, in the process of calculating the lens orientation intersection point, the lens parameters of each virtual lens in the virtual lens group may be adjusted; for example, the overall camera spacing of the virtual lenses may be adjusted, the front-back camera spacing, the up-down camera spacing and the left-right camera spacing of the virtual lenses may be adjusted respectively, or the Field of View (FOV) of the virtual lenses may be adjusted.
Further, after obtaining the lens parameters, the virtual lens group may be determined based on the lens parameters. The method can be realized by determining the original lens position of each virtual lens in the virtual lens group according to the lens parameters, placing the virtual lens at the original lens position, and adjusting the lens parameters of the virtual lens at the original lens position to obtain the virtual lens group according to the virtual lens with the parameters adjusted. In other words, in the practical application process, the original lens position can be determined directly according to the lens spacing, the zero position and the distance difference between the zero plane and the virtual lens group in the lens parameters, and the virtual lenses are sequentially arranged and placed based on the original lens position. The virtual lens group obtained may be specifically shown with reference to fig. 16.
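To make the lens group construction concrete, the following is a minimal Unity C# sketch that places one virtual camera per viewpoint, spaced by the lens pitch and at a given distance in front of the zero point, with every lens oriented towards the zero point so that the lens orientations intersect there; the class name and the left-to-right arrangement along the x axis are illustrative assumptions.

```csharp
using UnityEngine;

public static class VirtualLensGroupBuilder
{
    // Builds a row of virtual lenses (cameras), one per viewpoint, spaced by lensPitch
    // and placed at distanceToZeroPlane in front of the zero point.
    public static Camera[] Build(int viewpointCount, float lensPitch,
                                 float distanceToZeroPlane, Vector3 zeroPoint, float fieldOfView)
    {
        Camera[] lenses = new Camera[viewpointCount];
        float halfSpan = (viewpointCount - 1) * 0.5f * lensPitch;

        for (int i = 0; i < viewpointCount; i++)
        {
            GameObject go = new GameObject("VirtualLens_" + i);
            Camera cam = go.AddComponent<Camera>();
            cam.fieldOfView = fieldOfView;   // the same FOV is used for every lens in the group

            // Arrange the lenses left to right, centred in front of the zero point.
            go.transform.position = zeroPoint + new Vector3(i * lensPitch - halfSpan, 0f, -distanceToZeroPlane);
            go.transform.LookAt(zeroPoint);  // all lens orientations intersect at the zero point

            lenses[i] = cam;
        }
        return lenses;
    }
}
```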
In an example embodiment, adjusting the lens parameters of the virtual lenses at the original lens positions may be achieved by firstly displaying a naked eye parameter setting interface in response to a touch operation on a naked eye setting interactive control, and secondly adjusting the lens spacing and/or lens attitude information and/or lens viewing angle information of the virtual lenses at the original lens positions, and/or adjusting the original lens positions of the virtual lenses, in response to an input operation in the naked eye parameter setting interface. Specifically, in the practical application process, if the lens parameters of the virtual lenses need to be adjusted, the naked eye setting control can be touched to display the naked eye parameter setting interface, and then the camera setting control in the naked eye parameter setting interface is clicked to display the lens parameter setting interface, which may be shown specifically with reference to fig. 17. Based on the lens parameter setting interface shown in fig. 17, the overall camera spacing of the virtual lenses can be set directly, and the front-back camera spacing, the up-down camera spacing and the left-right camera spacing of the virtual lenses can also be set respectively; of course, the Field of View (FOV) of the virtual lenses can be set through the advanced setting parameters (where the field of view of each virtual lens in the virtual lens group is the same). The larger the spacing between the virtual lenses in the virtual lens group, the stronger the out-of-screen effect of the target interaction model when it is displayed.
It should be noted that, in theory, all parameters can be manually adjusted when adjusting the lens parameters, but in the practical application process, in order to improve the generation efficiency of naked eye 3D resources, generally, only the left-right spacing of the camera is adjusted, so as to achieve the purpose of better screen output effect.
It should be further noted that, when adjusting the lens parameters of the virtual lenses, the target device parameters of the target display device may also be adjusted; however, when adjusting the target device parameters, only the device interleaving parameters can be adjusted, and the device type cannot be adjusted. This is because the viewpoint number is already determined and therefore the virtual lens group is already determined; if the device type were adjusted, the viewpoint number would also need to be adjusted, but the viewpoint number is already fixed, so the device type cannot be adjusted any more.
In an exemplary embodiment, an exemplary diagram of the specific principle of the field angle of the virtual lens described herein may be shown with reference to fig. 18, and meanwhile, the field angle of the virtual lens described herein may be used to adjust the field size of the virtual lens, in the practical application process, the field size of the virtual lens may affect the display size of the target interaction model captured by the virtual lens on the display screen of the target display device, where the smaller the angle of the field angle of the virtual lens, the larger the display effect of the target interaction model captured by the virtual lens on the display screen, and conversely, the larger the angle of the field angle of the virtual lens, the smaller the display effect of the target interaction model captured by the virtual lens on the display screen, that is, the size of the field angle is inversely proportional to the display size of the target interaction model on the display screen.
In step S140, based on the target resource scene, the virtual lens group, and the zero point position, naked eye 3D resources corresponding to the target interaction model are generated.
Specifically, referring to fig. 19, generating the naked eye 3D resource corresponding to the target interaction model based on the target resource scene, the virtual lens group, and the zero point position may include the following steps:
step S1910, determining a zero plane position according to the zero position, and adjusting the zero plane position.
The zero plane can be used as the demarcation between the out-of-screen and in-screen regions of the multi-viewpoint naked eye 3D display. In the practical application process, after the zero point position is determined, the zero plane position can be determined by taking the zero point position as the center point; after the zero plane position is determined, it can be adjusted through the lens parameter setting interface shown in fig. 17. By adjusting the front-back position of the zero plane, the distance between the virtual lens group and the zero plane can be adjusted, thereby achieving the purpose of adjusting the out-of-screen effect of the target interaction model.
Step S1920, determining a stereoscopic display area and a planar display area of the target interaction model in the target resource scene according to the adjusted zero plane position.
Specifically, the stereoscopic display area described herein is the out-of-screen area, and the planar display area is the in-screen area; in the practical application process, the out-of-screen area is the effective area between the zero plane and the virtual camera group, and the in-screen area is the effective area behind the zero plane.
Step S1930, determining a model placement area of the target interaction model in the target resource scene according to the stereoscopic display area and the planar display area, and adjusting a target model position of the target interaction model based on the model placement area.
Specifically, in the practical application process, in order to enable a user to simply manufacture an optimal naked eye 3D effect, a model placement area can be determined through a top view auxiliary function, wherein the implementation of the top view auxiliary function depends on a stereoscopic display area and a plane display area, a scene example diagram obtained after the adjustment of the target model position of a target interaction model can be shown by referring to fig. 20, in the example diagram shown by fig. 20, the model placement area can be shown by referring to 2001, in the model placement area shown by 2001, the front boundary of the model placement area is the maximum value of the naked eye 3D screen output area, the rear boundary of the model placement area is the position of a zero plane, and the position of the model placement area moves along with the position movement of the zero plane.
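As a minimal sketch of the placement constraint described above, assuming the viewing axis is the z axis, the virtual lens group sits on the negative-z side of the zero plane, and the maximum out-of-screen depth is known; the helper name and the axis convention are illustrative.

```csharp
using UnityEngine;

public static class PlacementAreaHelper
{
    // Clamps the target model's depth so it stays inside the model placement area:
    // the rear boundary is the zero plane, the front boundary is the maximum
    // out-of-screen distance in front of it (both measured along the viewing axis).
    public static void ClampToPlacementArea(Transform targetModel, float zeroPlaneZ, float maxOutOfScreenDepth)
    {
        Vector3 p = targetModel.position;
        p.z = Mathf.Clamp(p.z, zeroPlaneZ - maxOutOfScreenDepth, zeroPlaneZ);
        targetModel.position = p;
    }
}
```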
Step S1940, publishing the target resource scene after position adjustment and the virtual lens group to obtain naked eye 3D resources corresponding to the target interaction model.
The method comprises the steps of distributing a target resource scene with adjusted positions and the virtual lens group to obtain naked eye 3D resources corresponding to a target interaction model, wherein the method can be realized by responding to touch operation of a resource distribution interaction control, displaying a resource distribution interface, responding to touch operation of the resource distribution interface, determining a resource distribution type, distributing the target resource scene with adjusted positions and the virtual lens group based on the resource distribution type to obtain naked eye 3D resources corresponding to the target interaction model, and the resource distribution type recorded herein can comprise program resource types, video resource types, sequence frame resource types and the like. The method comprises the steps that in the actual application process, if the release operation of naked eye 3D resources is required to be executed, a release control can be clicked to display a resource release interface, wherein the resource release interface can be specifically shown by referring to FIG. 21, meanwhile, the resource release interface shown by FIG. 21 can be used for knowing that the resource can be released as a naked eye 3D program or as a naked eye 3D video when being released, and in the actual application process, the corresponding release type can be selected for release according to actual requirements to obtain the corresponding naked eye 3D resources.
In an example embodiment, when the resource release type is a program resource type, releasing the target resource scene and the virtual lens group after position adjustment based on the resource release type to obtain naked eye 3D resources corresponding to the target interaction model may be achieved by displaying a resource release interface corresponding to the program resource type, determining a saving path of the naked eye 3D resources in response to an input operation of the resource release interface corresponding to the program resource type, and packaging the target resource scene and the virtual lens group after position adjustment to obtain naked eye 3D resources with the program resource type. The resource distribution interface corresponding to the program resource type described herein may be further described with reference to fig. 21. The naked eye 3D resource with the program resource category is characterized in that data and resources of one project are packaged together and released into a naked eye 3D program capable of being independently operated, meanwhile, the released naked eye 3D program is like a parser, the data are parsed and restored into a generated scene at the time of operation, for example, the scene library scene, the model, the animation, the lamplight, the sound and the material properties can be loaded at the time of operation, the released naked eye 3D program supports the naked eye 3D imaging module, and the effect adjustment can be carried out according to the actual display condition at the time of operation on a naked eye 3D screen. On the premise, after the control is released by touch control, the system can automatically pack the target resource scene after position adjustment and the virtual lens group, and further obtain naked eye 3D resources with program resource categories.
In an example embodiment, when the resource release type is a video resource type and/or a sequence frame resource type, releasing the target resource scene and the virtual lens group after position adjustment based on the resource release type to obtain naked eye 3D resources corresponding to the target interaction model, the method can be realized by displaying a resource parameter adjustment interface corresponding to the video resource type and/or the sequence frame resource type, determining target resource parameters corresponding to the video resource type and/or the sequence frame resource type in response to an input operation of the resource parameter adjustment interface, and storing the target resource parameters, the target resource scene after position adjustment and the virtual lens group to obtain naked eye 3D resources with the video resource type and/or the sequence frame resource type. The method for publishing the video resource category or the sequence frame resource category includes clicking a publishing video control in a resource publishing interface to display a resource parameter adjustment interface corresponding to the video resource category and/or the sequence frame resource category, wherein the resource parameter adjustment interface corresponding to the video resource category and/or the sequence frame resource category is specifically shown with reference to fig. 22, and the resource parameter adjustment interface shown based on fig. 22 can know that target resource parameters to be adjusted can include, but are not limited to, a rendering style parameter, a resolution parameter, an output type parameter, and the like, and meanwhile, because different rendering styles, resolutions, and output types exist, the above-described storing the target resource parameters, the target resource scene after position adjustment, and the virtual shot group to obtain naked eye 3D resources with the video resource category and/or the sequence frame resource category can be realized by:
Firstly, a target rendering style of the naked eye 3D resource is determined according to the rendering style parameter in the target resource parameters; secondly, a target picture type of the output picture is determined based on the output type parameter in the target resource parameters, and a target resolution of the output picture is determined based on the resolution parameter in the target resource parameters; then, in response to a touch operation of a third preset interactive control in the resource parameter adjustment interface, the naked eye 3D resource with the target rendering style and the target resolution and with the video resource category and/or sequence frame resource category is output, where the target rendering style described herein includes a multi-view stitching mode or a rendering result mode, and the target picture type includes a video picture type or a sequence frame picture type.
In an example embodiment, outputting the naked eye 3D resource with the target rendering style and the target resolution and with the video resource category and/or sequence frame resource category in response to a touch operation of the third preset interactive control in the resource parameter adjustment interface may be achieved by outputting, in response to the touch operation of the third preset interactive control in the resource parameter adjustment interface, a naked eye 3D resource with the multi-view stitching mode and the target resolution and with the video picture type, or a naked eye 3D resource with the rendering result mode and the target resolution and with the video picture type, or a naked eye 3D resource with the multi-view stitching mode and the target resolution and with the sequence frame picture type, or a naked eye 3D resource with the rendering result mode and the target resolution and with the sequence frame picture type.
The specific generation process of naked eye 3D resources with the video resource category and/or the sequence frame resource category will be further explained and described below in connection with fig. 22. Specifically, in the practical application process, the implementation principle of the release process of a naked eye 3D resource with the video resource category is generally consistent with that of a naked eye 3D resource with the sequence frame resource category; the difference is that the naked eye 3D resource with the sequence frame resource category stores each frame of image of the naked eye 3D resource with the video resource category. Meanwhile, in the release process of naked eye 3D resources with the video resource category and/or the sequence frame resource category, the output format can include a multi-view mode and a rendering result mode, where the multi-view mode means that the scene pictures shot by the virtual lenses in the virtual lens group are arranged sequentially on one image according to the arrangement positions of the virtual lenses in the virtual lens group, and the rendering result mode means that the obtained multi-view pictures are combined into one image through an image interleaving algorithm; the resulting composite picture may be shown with reference to fig. 24.
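A minimal Unity C# sketch of the multi-view stitching output, assuming each virtual lens is rendered into a tile of fixed size and the tiles are copied into one image in lens order on a simple grid; the tile size and grid layout are illustrative, and the rendering result mode (image interleaving) is not shown.

```csharp
using UnityEngine;

public static class MultiViewStitcher
{
    // Renders each virtual lens into its own tile and stitches the tiles into one
    // multi-view image, laid out in lens order on a grid of the given column count.
    public static Texture2D Stitch(Camera[] lenses, int tileWidth, int tileHeight, int columns)
    {
        int rows = Mathf.CeilToInt(lenses.Length / (float)columns);
        Texture2D stitched = new Texture2D(columns * tileWidth, rows * tileHeight, TextureFormat.RGB24, false);
        RenderTexture rt = new RenderTexture(tileWidth, tileHeight, 24);

        for (int i = 0; i < lenses.Length; i++)
        {
            lenses[i].targetTexture = rt;
            lenses[i].Render();

            // Copy this lens's tile into its slot in the stitched image.
            RenderTexture.active = rt;
            int x = (i % columns) * tileWidth;
            int y = (i / columns) * tileHeight;
            stitched.ReadPixels(new Rect(0, 0, tileWidth, tileHeight), x, y);

            lenses[i].targetTexture = null;
        }

        RenderTexture.active = null;
        stitched.Apply();
        return stitched;
    }
}
```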
So far, the specific generation process of the naked eye 3D resource described in the example embodiment of the present disclosure has been fully described. The specific display process of the generated naked eye 3D resource on the target display device will be explained and described below with reference to the accompanying drawings.
In an example embodiment, the display of naked eye 3D resources may be achieved by outputting naked eye 3D resources with the program resource category to the target display device and displaying them through the target display device, and/or outputting naked eye 3D resources with the video resource category and/or sequence frame resource category to the target display device and displaying them through the target display device. The naked eye 3D resources can include naked eye 3D resources of the program type, multi-view naked eye 3D video resources, rendering result video resources, multi-view naked eye 3D sequence frames and rendering result sequence frames. Based on the released naked eye 3D resources in their different forms, the released program, the multi-view naked eye 3D video resource and the multi-view naked eye 3D sequence frame can be displayed on high-configuration target display devices with naked eye 3D screens, such as conference all-in-one machines; for low-configuration all-in-one machines such as photo frames, only the rendering result video or the rendering result sequence frame can be released, and the built-in player of the naked eye 3D photo frame can play the rendering result video or pictures produced by the system.
Of course, since the target device parameters of the target display device are set in the process of generating the naked eye 3D resource, the target display device can automatically adapt to the received naked eye 3D resource when displaying it, and can also call the adjustment interface of the naked eye 3D imaging assembly to adjust the display effect so as to further match the display screen of the target display device; however, only the screen interleaving parameters can be changed, and the device type cannot be modified. For the multi-viewpoint naked eye 3D video resource and the multi-viewpoint naked eye 3D sequence frame, an image interleaving algorithm needs to be called for processing and the obtained result is then displayed; further, the specific playing process can be realized by calling the naked eye 3D video player when playing the multi-viewpoint naked eye 3D video resource and the multi-viewpoint naked eye 3D sequence frame.
In an example embodiment, in the actual application process, when the naked eye 3D resource displayed by the target display device is a naked eye 3D resource with the program resource category, intelligent interaction can be performed on the displayed naked eye 3D resource. Intelligent interaction is required because the naked eye 3D screen of the target display device is a large-size wide screen, and a certain distance exists between the viewing position and the target display device when viewing the displayed naked eye 3D resource; therefore, in such a remote viewing scene, the target interaction model cannot be operated directly through a keyboard or a mouse. To solve this technical problem and improve the interactive experience, intelligent interaction is introduced for naked eye 3D resources with the program resource category. That is, when the naked eye 3D resource displayed by the target display device is a naked eye 3D resource with the program resource category, man-machine interaction instructions may be executed on the displayed target interaction model after the target display device displays the naked eye 3D resource. The method comprises the steps of responding to an input current interaction gesture, obtaining hand state information and finger movement direction, determining a current interaction instruction to be executed by the target interaction model in the naked eye 3D resource based on the hand state information and the finger movement direction, controlling the target interaction model in the naked eye 3D resource to execute the current interaction instruction, switching the target interaction model from an original model state to a target model state corresponding to the current interaction instruction, and displaying the model animation generated by executing the current interaction instruction, where the current interaction gesture can include, but is not limited to, a human body interaction gesture, a motion sensing controller interaction gesture, an external device interaction gesture, a handle interaction gesture, and the like.
In an example embodiment, naked eye 3D resources with the program resource category can support camera-based gesture algorithm interaction, LeapMotion interaction, Kinect interaction, 3DoF (degrees of freedom) Bluetooth handle interaction and 6DoF Bluetooth handle interaction. In actual interaction, if the target interaction model in the program has a corresponding effect animation, the effect animation stops playing when intelligent interaction is performed, the target interaction model is automatically placed back at its initial position at import, and the corresponding interaction instruction is then executed on this basis; meanwhile, when the interaction instruction is completed, the animation effect is restored.
In an example embodiment, the specific implementation principle of camera-based gesture algorithm interaction, LeapMotion interaction and Kinect interaction is that the state and motion direction of the hand are obtained through a gesture recognition algorithm or LeapMotion so as to control the movement and rotation of the object. The specific operation rules can include, but are not limited to, the following: if the recognized gesture is a palm state and the palm moves up, down, left or right, the operation instruction to be executed by the target interaction model is to shift gradually by a certain distance within a certain time in the corresponding direction; further, when the target interaction model receives no other operation instruction after a preset time interval, the target interaction model is gradually reset over a certain time; meanwhile, if the recognized gesture is a fist state and the fist is rotated, the target interaction model executes a rotation operation instruction according to the rotation angle of the fist.
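The following is a minimal Unity C# sketch of this gesture mapping, assuming an upstream gesture recognition layer (camera algorithm, LeapMotion or Kinect) reports the hand state, motion direction and fist rotation; the callback shape, step sizes and reset interval are illustrative assumptions.

```csharp
using UnityEngine;

public enum HandState { Palm, Fist }

// Maps the recognised hand state and motion direction to interaction instructions,
// following the rules described above (palm: translate; fist: rotate; gradual reset).
public class GestureInteraction : MonoBehaviour
{
    public Transform targetModel;
    public float moveStep = 0.05f;
    public float resetDelaySeconds = 5f;   // illustrative "preset time interval" before reset

    private Vector3 initialPosition;
    private Quaternion initialRotation;
    private float lastCommandTime;

    private void Start()
    {
        initialPosition = targetModel.position;
        initialRotation = targetModel.rotation;
    }

    // Called by the gesture recognition layer (not shown) whenever a gesture is detected.
    public void OnGesture(HandState state, Vector2 motionDirection, float fistRotationDegrees)
    {
        lastCommandTime = Time.time;

        if (state == HandState.Palm)
            targetModel.Translate(motionDirection.x * moveStep, motionDirection.y * moveStep, 0f, Space.World);
        else if (state == HandState.Fist)
            targetModel.Rotate(0f, fistRotationDegrees, 0f, Space.World);
    }

    private void Update()
    {
        // Gradually reset when no further instruction arrives within the preset interval.
        if (Time.time - lastCommandTime > resetDelaySeconds)
        {
            targetModel.position = Vector3.Lerp(targetModel.position, initialPosition, Time.deltaTime);
            targetModel.rotation = Quaternion.Slerp(targetModel.rotation, initialRotation, Time.deltaTime);
        }
    }
}
```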
In an example embodiment, the specific implementation principle of the 3DoF Bluetooth handle interaction and the 6DoF Bluetooth handle interaction is as follows: pressing the determination key of the handle triggers control of the target interaction model, and when the key is released, control of the target interaction model is released and the model returns to the initial state it had when imported; further, after the determination key is pressed, the 3DoF Bluetooth handle can be rotated to control the rotation of the model, while for the 6DoF handle the target interaction model simultaneously follows both the movement and the rotation of the handle; when the Home key of the handle is pressed, the model returns to the initial state it had when imported and the explosion interaction operation is triggered, and when the Home key is pressed again, the exploded parts return to their original positions.
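The handle interaction flow can be sketched as follows, under the reading given above; all event handler names and the model interface (reset, set_pose, explode, restore) are hypothetical and stand in for whatever engine API is actually used.

```python
# Hypothetical handle interaction flow for 3DoF/6DoF Bluetooth handles.
class HandleInteraction:
    def __init__(self, model):
        self.model = model          # assumed to expose set_pose()/reset()/explode()/restore()
        self.grabbed = False
        self.exploded = False

    def on_confirm_pressed(self):
        # pressing the determination key takes control of the target interaction model
        self.grabbed = True

    def on_confirm_released(self):
        # releasing the key drops control and restores the initial imported state
        self.grabbed = False
        self.model.reset()

    def on_handle_pose(self, rotation, position=None):
        if not self.grabbed:
            return
        # 3DoF handle: rotation only; 6DoF handle: rotation and position both follow the handle
        self.model.set_pose(rotation=rotation, position=position)

    def on_home_pressed(self):
        # Home key: return to the initial imported state and toggle the explosion interaction
        self.model.reset()
        if not self.exploded:
            self.model.explode()
            self.exploded = True
        else:
            self.model.restore()    # exploded parts return to their original positions
            self.exploded = False
```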
In an example embodiment, the explosion interaction operation described above means that when the palm changes from clenched to open, each part composing the object moves outwards in a certain direction and the object is gradually split apart; when the palm changes from open to clenched, each split part gradually moves back to its original state. Meanwhile, in order to distinguish the explosion operation from the ordinary palm translation and clenching operations, the explosion interaction operation is switched on and off only by clenching and opening the hand rapidly and continuously more than 3 times, so as to reduce false triggering of each operation. In the practical application process, the angle of the splitting direction of the object can be selected as a free direction angle or an XYZ axis direction angle, wherein the direction angle described here is the local angle of an object sub-module relative to the whole object; taking the left-hand coordinate system of Unity as an example, the left-hand coordinate system is a space rectangular coordinate system, and the specific direction angles may be referred to in fig. 25.
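The outward movement of the explosion interaction can be illustrated by the following sketch, which computes one displacement per sub-module either along the free direction (from the model centre towards the part centre) or along a chosen coordinate axis; the function name and the offset magnitude are assumptions of this sketch, not values from the disclosure.

```python
# Hypothetical computation of per-sub-module explosion offsets in the model's
# local (left-handed, Unity-style) coordinate system.
import numpy as np


def explode_offsets(part_centers: np.ndarray,
                    model_center: np.ndarray,
                    mode: str = "free",
                    distance: float = 0.2) -> np.ndarray:
    """Return one outward displacement per model composition sub-module.

    part_centers: (N, 3) local centres of the sub-modules
    model_center: (3,) centre of the whole target interaction model
    mode: "free" for the free direction angle, or "x"/"y"/"z" for an axis direction angle
    """
    directions = part_centers - model_center
    if mode in ("x", "y", "z"):
        axis = "xyz".index(mode)
        # keep only the chosen axis component, preserving its sign
        signs = np.sign(directions[:, axis])
        directions = np.zeros_like(directions)
        directions[:, axis] = np.where(signs == 0, 1.0, signs)
    norms = np.linalg.norm(directions, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                      # avoid division by zero for parts at the centre
    return distance * directions / norms
```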
On the premise of the specific application scenarios described above, controlling the target interaction model in the naked eye 3D resource to execute the current interaction instruction may be realized by controlling the target interaction model in the naked eye 3D resource to execute an up-down movement instruction and/or a left-right movement instruction, and/or controlling the target interaction model in the naked eye 3D resource to execute a rotation instruction, and/or controlling the target interaction model in the naked eye 3D resource to execute an explosion instruction. That is, when the target interaction model in the naked eye 3D resource is intelligently interacted with, the target interaction model can be moved up, down, left and right, the target interaction model can be rotated, and each component of the target interaction model can be obtained based on the explosion operation.
Further, controlling the target interaction model in the naked eye 3D resource to execute the explosion instruction may be realized by controlling the model composition sub-modules of the target interaction model in the naked eye 3D resource to move according to a preset direction and a preset angle so as to achieve the explosion effect, wherein the preset direction includes a free movement direction or a coordinate axis movement direction, and the preset angle includes the local angle of the model composition sub-module relative to the target interaction model. That is, the model composition sub-modules of the target interaction model may be controlled to spread apart in different directions and at different angles. Finally, after the target interaction model has executed the corresponding interaction instruction and a preset time length has elapsed, the target interaction model is controlled to restore from the target model state to the original model state, and other interaction instructions are then executed on the basis of the original model state.
In an example embodiment, when the interaction operation is performed through the camera-based gesture algorithm, it may be achieved by firstly acquiring a gesture depth map of a gesture to be detected and calculating current point cloud data of the gesture to be detected according to the gesture depth map; then matching a target geometric gesture for the gesture to be detected in a preset gesture search space according to the current point cloud data; and finally, when it is determined that a target geometric gesture corresponding to the gesture to be detected exists in the preset gesture search space, acquiring the interaction operation instruction corresponding to the target geometric gesture and controlling the target interaction model to execute the corresponding interaction operation instruction.
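A high-level, non-authoritative sketch of the matching step is given below: the standard geometric gestures are represented here by sampled surface point sets, and the minimum average point-to-model distance selects the target geometric gesture. The MATCH_THRESHOLD value and the point-set approximation of the model surface are simplifications of this sketch, not the disclosed model-surface computation.

```python
# Hypothetical matching of the current point cloud against a preset gesture search space.
from typing import List, Optional, Tuple

import numpy as np

MATCH_THRESHOLD = 0.02  # assumed maximum average distance for a valid match


def point_set_distance(points: np.ndarray, model_points: np.ndarray) -> float:
    """Average distance from each cloud point to its nearest point on the model surface,
    with the model surface approximated by a sampled point set (points: (N, 3), model: (M, 3))."""
    d = np.linalg.norm(points[:, None, :] - model_points[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())


def match_gesture(points: np.ndarray,
                  search_space: List[Tuple[np.ndarray, str]]) -> Optional[int]:
    """search_space: (sampled model surface, interaction instruction) per standard gesture.
    Returns the index of the target geometric gesture, or None if no gesture is close enough."""
    distances = [point_set_distance(points, model) for model, _ in search_space]
    best = int(np.argmin(distances))
    return best if distances[best] < MATCH_THRESHOLD else None
```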
In an example embodiment, acquiring the gesture depth map of the gesture to be detected and calculating the current point cloud data of the gesture to be detected according to the gesture depth map may be achieved as follows. Firstly, the gesture depth map of the gesture to be detected is collected through an image collecting device, where the image collecting device may be a depth image collecting device, which may be embedded in the target display device or set independently; this example embodiment is not particularly limited thereto. Further, after the gesture depth map is collected, the gesture depth map may be input to an algorithm (for example, MediaPipe), the algorithm outputs the 3D positions of all key points in the gesture depth map, and the current point cloud data is obtained according to the 3D positions of the key points, where a hand key point may also be understood as an articulation point of the hand skeleton and is generally described by 21 3D key points (as may be shown in fig. 28).
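The point cloud computation can be sketched with standard pinhole back-projection of the gesture depth map; the camera intrinsics fx, fy, cx and cy are assumed inputs. In the embodiment the 3D positions may instead come from the 21 hand key points output by an algorithm such as MediaPipe, so the whole-image back-projection here is purely illustrative.

```python
# Back-project a depth map to point cloud data using assumed pinhole intrinsics.
import numpy as np


def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """depth: (H, W) depth map in metres; returns (N, 3) points for pixels with valid depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no depth measurement
```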
In an example embodiment, matching the target geometric gesture for the gesture to be detected in the preset gesture search space according to the current point cloud data may be achieved by firstly calculating, based on a preset image processing model, a distance difference between the three-dimensional point coordinates in the current point cloud data and the model surface of each standard geometric gesture included in the preset gesture search space, and secondly taking the minimum value of the distance differences and taking the standard geometric gesture corresponding to the minimum value as the target geometric gesture. In the target geometric gesture matching process, the specific generation of the standard geometric gestures may include: firstly, generating a series of standard geometric gesture models of the hand through the hand pose (pose may refer to the pose parameters or joint positions of the hand), and further establishing the search space based on the generated standard geometric gesture models. In the standard geometric gesture model generation process, the standard geometric gesture models may be realized by linear blend skinning (a skeleton skinning animation algorithm), whose principle is that a layer of skin is attached to the hand skeleton and deforms along with the skeleton movements; this technique is widely used in the animation field. A pose may first be converted into a corresponding grid mesh, and the grid mesh is then converted into a smooth curved surface model, thereby obtaining the standard geometric gesture model. In the generation process, the pose is used as the independent variable, the standard geometric gesture models are calculated from the pose, and the standard geometric gesture models are in one-to-one correspondence with the poses.
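A minimal sketch of linear blend skinning as used to generate a standard geometric gesture model from a hand pose follows: every rest-pose vertex is deformed by a weighted sum of bone transforms. The bone transforms and skinning weights are assumed inputs produced elsewhere (e.g. from the hand pose and a rigging step).

```python
# Linear blend skinning (skeleton skinning animation) in its standard form.
import numpy as np


def linear_blend_skinning(rest_vertices: np.ndarray,
                          bone_transforms: np.ndarray,
                          weights: np.ndarray) -> np.ndarray:
    """rest_vertices: (V, 3) mesh vertices in the rest pose;
    bone_transforms: (B, 4, 4) homogeneous transforms of each bone for the given pose;
    weights: (V, B) skinning weights, each row summing to 1."""
    v_h = np.concatenate([rest_vertices, np.ones((len(rest_vertices), 1))], axis=1)  # (V, 4)
    # transform every vertex by every bone: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, v_h)
    # blend the per-bone results by the skinning weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]
```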
Furthermore, after the standard geometric gestures are obtained, the specific matching process can be carried out. In the specific matching process, the standard geometric gestures can be divided into a current-frame standard geometric gesture, a previous-frame standard geometric gesture corresponding to the current-frame standard geometric gesture, and a next-frame standard geometric gesture corresponding to the current-frame standard geometric gesture. The matching comprises the steps of initializing an image processing model to be trained to obtain the parameters to be optimized included in the image processing model to be trained; inputting the current point cloud data of the gesture depth map and the hand pose parameters of the previous-frame standard geometric gesture into the image processing model to be trained to obtain a predicted distance between the gesture depth map and the previous-frame standard geometric gesture; constructing a loss function according to the actual distance between the gesture depth map and the previous-frame standard geometric gesture and the predicted distance; optimizing the parameters to be optimized according to a preset optimization algorithm and the loss function; updating the image processing model to be trained according to the optimized parameters to obtain a trained image processing model; and calculating the distance difference between the current-frame standard geometric gesture and/or the next-frame standard geometric gesture and the gesture depth map by using the trained image processing model.
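A loose sketch, under stated assumptions, of how the predicted distance and the loss function could be wired together: the toy linear "model" below stands in for whatever parameterised network the embodiment actually uses and is not the disclosed implementation. The resulting loss can then be minimised with the particle swarm optimization sketch given further below.

```python
# Hypothetical predicted-distance model and loss construction for the matching step.
import numpy as np


def predicted_distance(params: np.ndarray, point_cloud: np.ndarray,
                       prev_pose: np.ndarray) -> float:
    """Stand-in for the image processing model: params is a weight vector whose
    length equals the feature vector (point-cloud centroid concatenated with the
    previous-frame pose parameters)."""
    features = np.concatenate([point_cloud.mean(axis=0), prev_pose])
    return float(features @ params)          # toy linear mapping to a scalar distance


def loss(params: np.ndarray, point_cloud: np.ndarray,
         prev_pose: np.ndarray, actual_distance: float) -> float:
    # squared error between the predicted distance and the actual distance
    return (predicted_distance(params, point_cloud, prev_pose) - actual_distance) ** 2
```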
Further, the preset optimization algorithm comprises a particle swarm optimization algorithm and/or a nearest point optimization algorithm. Further, when the preset optimization algorithm is the particle swarm optimization algorithm, optimizing the parameters to be optimized according to the preset optimization algorithm and the loss function can be achieved as follows: firstly, generating a particle swarm according to the parameters to be optimized and randomly setting the initial position and the initial speed of each particle in the particle swarm, wherein each parameter to be optimized corresponds to one particle; secondly, calculating the fitness of each particle according to the loss function, comparing the fitness of each particle at its current position with the fitness at its individual best position, and taking the current position as the individual best position if the fitness at the current position is better, or keeping the individual best position unchanged otherwise; then, comparing the fitness of each particle at its current position with the fitness at the global best position of the swarm, and taking the current position as the global best position if the fitness at the current position is better; and finally, updating the speed and position of each particle according to the individual best position, the global best position and the current speed of each particle, so as to optimize the parameters to be optimized.
In an exemplary embodiment, the particle swarm optimization comprises the following specific steps: firstly, randomly setting the starting position xi and the speed vi of each particle, setting the population size according to the problem to be solved and setting the parameters to be adjusted; secondly, calculating the fitness of each particle according to the fitness function formula, comparing the fitness of the current position of each particle with the fitness value of its individual best position pbest, and, if the current fitness is better, taking the current position as pbest, otherwise keeping pbest unchanged; then, comparing the fitness of the current position of each particle with the fitness value of the global best position gbest of the population, and, if the current fitness is better, taking the current position as the current global best position gbest; and finally, updating the speed and the position of each particle according to the update formula. Naturally, if the preset termination condition of the algorithm is not met, the fitness of each particle continues to be calculated, and if the termination condition is met, the loop ends and the best position information is output. It should be noted that optimizing the parameters of the image processing model with the particle swarm optimization algorithm can greatly shorten the network training time and alleviate the problem that the traditional back propagation (BP) optimization algorithm tends to fall into local optima.
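The steps above correspond to standard particle swarm optimization; the sketch below implements them with common default inertia and acceleration coefficients, which are assumptions rather than values from the disclosure. The `fitness` argument can be any scalar objective, for example the loss function sketched earlier with the point cloud and actual distance bound in.

```python
# Standard particle swarm optimization: random initialisation, pbest/gbest updates,
# and the usual velocity/position update rule.
import numpy as np


def pso(fitness, dim, n_particles=30, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))               # initial positions
    v = rng.uniform(-(hi - lo), hi - lo, (n_particles, dim))  # initial velocities
    pbest = x.copy()
    pbest_fit = np.array([fitness(p) for p in x])
    g = int(np.argmin(pbest_fit))
    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity update: inertia + cognitive (pbest) + social (gbest) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fit = np.array([fitness(p) for p in x])
        improved = fit < pbest_fit                            # update individual best positions
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        g = int(np.argmin(pbest_fit))
        if pbest_fit[g] < gbest_fit:                          # update the global best position
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
    return gbest, gbest_fit
```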
It should be further described that, in the process of training the image processing model to be trained, the inputs of the model are the gesture depth map and the pose of the standard geometric gesture, and the output is the predicted distance between the gesture depth map and the standard geometric gesture, so that a loss function can be constructed according to the predicted distance and the actual distance; the smaller the loss function, the more similar the input gesture depth map and the standard geometric gesture are. Meanwhile, in the specific matching process, only the pose with the minimum predicted distance value needs to be found in the search space, and that pose is the required pose (i.e. the target geometric gesture). However, since the search space cannot be written in an analytic form, the minimum distance difference cannot be calculated in one pass; therefore, optimization needs to be performed based on a corresponding optimization algorithm in the training process, and the optimal solution is calculated in a continuously iterative manner. Meanwhile, since an iterative numerical solution generally has high requirements on initialization, a poor initialization may take a long time to converge and may not converge to the global minimum (because the loss function is a non-convex function); therefore, when the algorithm is implemented, the current-frame standard geometric gesture is initialized using the pose of the previous-frame standard geometric gesture so as to implement the specific calculation process.
The description of the processing method of the naked eye 3D resource in the example embodiments of the present disclosure is thus complete. Based on the foregoing, it can be seen that the processing method of the naked eye 3D resource according to the exemplary embodiments of the present disclosure can reduce the difficulty of producing naked eye 3D content and improve the efficiency of generating naked eye 3D content; it can support naked eye 3D screens with a plurality of different viewpoint numbers such as 2, 9, 18, 24 and 49, adjust and modify the naked eye 3D parameters, adapt to more naked eye 3D screens, and optionally expand the supported devices; furthermore, the parameters of the output naked eye 3D content can be adjusted for screens of the same type but with different parameters; and the visual naked eye 3D effect adjusting process, model operation and attribute editing process make the operation more intuitive and simple.
Further, after the naked eye 3D resources are obtained, the corresponding naked eye 3D resources can be applied. The naked eye 3D resource obtained based on the naked eye 3D resource processing method disclosed by the embodiment of the disclosure can be applied to an electronic commerce scene, a teaching scene, a new product display scene, a new product release meeting scene and the like. In the actual application process, the corresponding viewpoint number and the corresponding lens parameters can be configured in the naked eye 3D resource according to the actual requirement of the requester.
In one application scene, for example, displaying a corresponding virtual object in an electronic commerce scene, the method can be realized by determining a commodity to be displayed in response to a touch operation on the current display interface, acquiring the naked eye 3D resource corresponding to the commodity to be displayed, wherein the naked eye 3D resource is generated by the above naked eye 3D resource processing method, displaying the naked eye 3D resource corresponding to the commodity to be displayed, and interacting with the displayed commodity to display the internal composition structure of the displayed commodity. In other words, in the electronic commerce scene, if a user clicks a certain product and the terminal device where the client is located supports naked eye 3D display, the commodity can be displayed in naked eye 3D; meanwhile, if the parts or internal structure of the commodity need to be displayed, gesture interaction can be performed and the commodity can be spread apart based on its specific components, so that the user can view the internal structure of the commodity in detail, the accuracy of commodity display is improved, and the user can purchase the commodity according to actual needs.
In one application scene, for example, displaying a corresponding teaching prop in a teaching scene can be achieved by determining the teaching prop to be displayed, acquiring the naked eye 3D resource corresponding to the teaching prop to be displayed, wherein the naked eye 3D resource is generated by the above naked eye 3D resource processing method, displaying the naked eye 3D resource corresponding to the teaching prop to be displayed, and interacting with the displayed teaching prop so as to display the internal composition structure of the teaching prop. That is, if a teaching prop needs to be displayed in a teaching scene, the naked eye 3D resource corresponding to the teaching prop is acquired and displayed accordingly; if the internal structure needs to be displayed, gesture interaction or interaction with other external devices can be performed. Taking a blackboard eraser as an example of the teaching prop, during interaction the blackboard eraser may be split based on its specific composition structure to display each composition part.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the method of the present disclosure.
The embodiment of the disclosure also provides a processing device for naked eye 3D resources. Specifically, referring to fig. 29, the processing apparatus of the naked eye 3D resource may include an original resource scene generating module 2910, a viewpoint number determining module 2920, a virtual lens group configuring module 2930, and a naked eye 3D resource generating module 2940. Specifically:
The original resource scene generating module 2910 may be used to create a newly added generating scene corresponding to the target interaction model, and load the target interaction model in the newly added generating scene to obtain an original resource scene;
the viewpoint number determining module 2920 may be configured to invoke a preset resource library to adjust the original resource scene to obtain a target resource scene, and determine the viewpoint number of the target resource scene according to the target device parameter in the newly-added generated scene;
the virtual lens group configuration module 2930 may be configured to configure lens parameters of a virtual lens group corresponding to the target resource scene according to the viewpoint number, and configure the virtual lens group according to the lens parameters;
The naked eye 3D resource generating module 2940 may be configured to generate a naked eye 3D resource corresponding to the target interaction model based on the target resource scene, the virtual lens group, and the zero point position.
In one exemplary embodiment of the disclosure, creating a newly added generation scene corresponding to a target interaction model includes displaying an engineering creation sub-interface in response to a touch operation on a first preset interactable control on a display interface of a resource creation terminal, determining target device parameters and naked eye 3D interleaving parameters of a target display device displaying naked eye 3D resources corresponding to the target interaction model in response to an input operation on the engineering creation sub-interface, and creating the newly added generation scene corresponding to the target interaction model in response to a touch operation on a second preset interactable control in the engineering creation sub-interface.
In an exemplary embodiment of the present disclosure, loading the target interaction model in the newly-added generation scene to obtain an original resource scene includes acquiring the target interaction model from a preset model library according to a target model name of the target interaction model, and/or importing the target interaction model from an external file according to the target model name of the target interaction model, and performing adaptive adjustment on a model size of the target interaction model in the newly-added generation scene to obtain the original resource scene.
In an exemplary embodiment of the disclosure, performing adaptive adjustment on a model size of the target interaction model in the newly-added generation scene to obtain the original resource scene includes constructing a first rectangle according to the newly-added generation scene and constructing a second rectangle according to the target interaction model, calculating a model scaling factor of the target interaction model in the newly-added generation scene according to the first rectangle and the second rectangle, and performing adaptive adjustment on the model size of the target interaction model based on the model scaling factor to obtain the original resource scene.
In an exemplary embodiment of the disclosure, constructing a first rectangle according to the newly-added generation scene includes displaying the newly-added generation scene on a display interface of a resource creation terminal, taking a center point of the display interface as a center point of the first rectangle, determining a first rectangle length and a first rectangle width of the first rectangle according to an interface length and an interface width occupied by the newly-added generation scene on the display interface, and constructing the first rectangle according to the center point of the first rectangle, the first rectangle length and the first rectangle width.
In an exemplary embodiment of the disclosure, constructing a second rectangle according to the target interaction model includes obtaining pixel coordinates of a pixel point in the target interaction model, obtaining a maximum abscissa value, a maximum ordinate value, a minimum abscissa value and a minimum ordinate value in the pixel point coordinates, determining a second rectangle height according to the maximum abscissa value and the minimum abscissa value, determining a second rectangle length according to the maximum ordinate value and the minimum ordinate value, taking a center point of the target interaction model as a center point of the second rectangle, and constructing the second rectangle according to the center point of the second rectangle, the second rectangle height and the second rectangle length.
In an exemplary embodiment of the disclosure, calculating a model scaling factor of the target interaction model in the newly generated scene according to the first rectangle and the second rectangle includes mapping the first rectangle to three-dimensional coordinates of a resource generating engine to obtain a rectangle mapping result, calculating a first ratio between a rectangle mapping height and a second rectangle height in the rectangle mapping result, calculating a second ratio between a rectangle mapping length and a second rectangle length in the rectangle mapping result, and determining a model scaling factor of the target interaction model in the newly generated scene based on the first ratio and the second ratio.
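The scaling factor computation can be sketched as follows; taking the smaller of the two ratios, so that the model fits entirely inside the mapped rectangle, is an assumption of this sketch rather than a requirement stated by the disclosure.

```python
# Hypothetical derivation of the model scaling factor from the two rectangles.
def model_scaling_factor(mapped_rect_length: float, mapped_rect_height: float,
                         model_rect_length: float, model_rect_height: float) -> float:
    """mapped_rect_*: first rectangle mapped into the resource generating engine's coordinates;
    model_rect_*: second rectangle built from the target interaction model's pixel extents."""
    first_ratio = mapped_rect_height / model_rect_height    # height ratio
    second_ratio = mapped_rect_length / model_rect_length   # length ratio
    return min(first_ratio, second_ratio)                   # keep the whole model visible
```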
In an exemplary embodiment of the present disclosure, the processing apparatus of naked eye 3D resources further includes:
The new virtual model loading module can be used for loading a new virtual model in the original resource scene and/or importing the new virtual model in the original resource scene in response to model loading operation;
The model label generating module can be used for generating a model label corresponding to the newly added virtual model and displaying the model label in the original resource scene.
In an exemplary embodiment of the present disclosure, the processing apparatus of naked eye 3D resources further includes:
The model switching display module can be used for responding to the touch operation of the model labels, displaying newly-added virtual models corresponding to the model labels in the original resource scene, and switching target interaction models in the original resource scene based on the newly-added virtual models.
In an exemplary embodiment of the present disclosure, the processing apparatus of naked eye 3D resources further includes:
A display order adjustment module, configured to adjust a display order of the model tag in the original resource scene in response to a touch operation on the model tag, and/or
The mode time sequence setting interface display module can be used for displaying the mode time sequence setting interface of the newly added virtual model and/or the target interaction model;
The model presentation duration determining module may be configured to determine a model presentation duration of the newly added virtual model and/or the target interaction model in response to an input operation to the mode timing setting interface.
In an exemplary embodiment of the present disclosure, the preset resource library includes at least one of a scene library, a model library, an animation library, a material library, a light library, and a sound library.
In an exemplary embodiment of the disclosure, calling a preset resource library to adjust the original resource scene to obtain a target resource scene includes loading an original three-dimensional scene corresponding to the target interaction model from the scene library and adding the original three-dimensional scene to the newly-added generation scene, loading an original three-dimensional animation from the animation library and acting the original three-dimensional animation on the target interaction model, loading original light from the light library and adding the original light to the newly-added generation scene, loading model material corresponding to the target interaction model from the material library and acting the model material on the target interaction model, and/or loading audio data corresponding to the target interaction model from the sound library and adding the audio data to the newly-added generation scene, and adjusting the model attribute and/or animation attribute and/or light attribute and/or sound attribute of the target interaction model in the newly-added generation scene to obtain the target resource scene.
In one exemplary embodiment of the present disclosure, the original three-dimensional animation includes a program animation and/or a key frame animation, wherein the acting the original three-dimensional animation on the target interaction model includes adding the program animation to the target interaction model, and/or mounting the target interaction model under an animation object in the key frame animation to take the target interaction model as a child of the animation object.
In one exemplary embodiment of the present disclosure, acting the model material on the target interaction model includes replacing original material in the target interaction model based on the model material in response to dragging the model material onto the target interaction model.
In one exemplary embodiment of the disclosure, the model attribute includes a structure level attribute and/or a location attribute, the adjusting the model attribute of the target interaction model includes displaying a model adjustment interface of the model attribute of the target interaction model in response to a touch operation of a model attribute interaction control, and adjusting an attribute value of the structure level attribute and/or an attribute value of the location attribute of the target interaction model in response to an input operation of the model adjustment interface.
In one exemplary embodiment of the present disclosure, adjusting the model properties of the target interaction model further comprises adjusting a current model position of the target interaction model in the newly generated scene and/or rotating the target interaction model in response to a movement event acting on the target interaction model.
In one exemplary embodiment of the present disclosure, adjusting the animation attribute includes displaying an animation adjustment interface corresponding to the animation attribute in response to a touch operation on an animation setting interaction control, and, in response to an input operation on the animation adjustment interface, adjusting the period duration and/or amplitude of the animation in the animation attribute, and/or adjusting the offset of the animation track of the animation attribute in the newly-added generation scene.
In an exemplary embodiment of the disclosure, the original light includes at least one of parallel light, a point light source, a spotlight and a combined light formed by the point light source and the spotlight, wherein adjusting the light attribute includes displaying a light adjustment interface corresponding to the light attribute in response to a touch operation of a light setting interaction control, and adjusting a light position and/or light intensity and/or light color of the parallel light and/or the point light source and/or the spotlight and/or the combined light in the newly generated scene in response to an input operation of the light adjustment interface.
In an exemplary embodiment of the disclosure, the material property comprises at least one of a model color, a texture map, a normal map, transparency, glossiness and refraction, wherein adjusting the material property comprises displaying a material adjustment interface corresponding to the material property in response to a touch operation of a material setting interaction control, and adjusting the model color and/or texture map and/or normal map and/or transparency and/or glossiness and/or refraction of the target interaction model in response to an input operation of the material adjustment interface.
In one exemplary embodiment of the present disclosure, adjusting the sound attribute includes displaying a sound adjustment interface corresponding to the sound attribute in response to a touch operation of a sound setting interactive control, and adjusting a volume level of the audio data in response to an input operation of the sound adjustment interface.
In an exemplary embodiment of the disclosure, determining the number of viewpoints of the target resource scene according to the target device parameters in the newly-added generation scene includes determining device attribute information of a target display device corresponding to the target device parameters according to the target device parameters in the newly-added generation scene, and determining the number of viewpoints required by the target display device when displaying the target resource scene according to the device attribute information.
In one exemplary embodiment of the present disclosure, determining lens parameters of a virtual lens group corresponding to the target resource scene according to the viewpoint number includes determining a lens number of the virtual lens group corresponding to the target resource scene, a zero point position of the target resource scene, lens distances of each virtual lens in the virtual lens group, and a distance difference between the virtual lens group and a zero plane according to the viewpoint number.
In an exemplary embodiment of the present disclosure, configuring the virtual lens group according to the lens parameters includes:
And placing the virtual lens at the original lens position, and adjusting the lens parameters of the virtual lens at the original lens position to obtain the virtual lens group according to the virtual lens with the adjusted parameters.
In one exemplary embodiment of the disclosure, adjusting lens parameters of the virtual lens at the original lens position includes displaying a naked eye parameter setting interface in response to a touch operation of a naked eye setting interaction control, adjusting lens spacing and/or lens attitude information and/or lens viewing angle information of the virtual lens at the original lens position in response to an input operation in the naked eye parameter setting interface, and/or adjusting an original lens position of the virtual lens.
In an exemplary embodiment of the disclosure, generating a naked eye 3D resource corresponding to the target interaction model based on the target resource scene, the virtual lens group and the zero point position includes determining a zero plane position according to the zero point position and adjusting the zero plane position, determining a stereoscopic display area and a planar display area of the target interaction model in the target resource scene according to the adjusted zero plane position, determining a model placement area of the target interaction model in the target resource scene according to the stereoscopic display area and the planar display area, adjusting a target model position of the target interaction model based on the model placement area, and publishing the position-adjusted target resource scene and the virtual lens group to obtain the naked eye 3D resource corresponding to the target interaction model.
In an exemplary embodiment of the disclosure, publishing the target resource scene and the virtual lens group after position adjustment to obtain naked eye 3D resources corresponding to the target interaction model includes displaying a resource release interface in response to touch operation of a resource release interaction control, determining a resource release type in response to touch operation of the resource release interface, and publishing the target resource scene and the virtual lens group after position adjustment based on the resource release type to obtain naked eye 3D resources corresponding to the target interaction model.
In an exemplary embodiment of the present disclosure, the resource release type includes at least one of a program resource category, a video resource category, and a sequence frame resource category.
In an exemplary embodiment of the disclosure, when the resource release type is a program resource type, releasing the target resource scene and the virtual lens group after position adjustment based on the resource release type to obtain naked eye 3D resources corresponding to the target interaction model, wherein the method comprises the steps of displaying a resource release interface corresponding to the program resource type, determining a storage path of the naked eye 3D resources in response to an input operation of the resource release interface corresponding to the program resource type, and packaging the target resource scene and the virtual lens group after position adjustment to obtain naked eye 3D resources with the program resource type.
In an exemplary embodiment of the disclosure, when the resource release type is a video resource type and/or a sequence frame resource type, based on the resource release type, releasing a target resource scene after position adjustment and the virtual lens group to obtain naked eye 3D resources corresponding to the target interaction model, wherein the method comprises the steps of displaying a resource parameter adjustment interface corresponding to the video resource type and/or the sequence frame resource type, responding to input operation of the resource parameter adjustment interface, determining target resource parameters corresponding to the video resource type and/or the sequence frame resource type, and storing the target resource parameter, the target resource scene after position adjustment and the virtual lens group to obtain naked eye 3D resources with the video resource type and/or the sequence frame resource type.
In an exemplary embodiment of the disclosure, the target resource parameter includes at least one of a rendering style parameter, a resolution parameter and an output type parameter, wherein storing the target resource parameter, the position-adjusted target resource scene and the virtual lens group to obtain naked eye 3D resources having the video resource category and/or the sequence frame resource category includes determining a target rendering style of the naked eye 3D resource according to the rendering style parameter in the target resource parameter, determining a target picture type of the output picture based on the output type parameter in the target resource parameter, determining a target resolution of the output picture based on the resolution parameter in the target resource parameter, and outputting the naked eye 3D resource having the target rendering style and the target resolution and having the video resource category and/or the sequence frame resource category in response to a touch operation on a third preset interactive control in the resource parameter adjustment interface.
In an exemplary embodiment of the disclosure, the target rendering style comprises a multi-view stitching mode or a rendering result mode, and the target picture type comprises a video picture type or a sequence frame picture type, wherein outputting the naked eye 3D resource having the target rendering style and the target resolution and having the video resource category and/or the sequence frame resource category in response to a touch operation on the third preset interactive control in the resource parameter adjustment interface includes outputting a naked eye 3D resource having the multi-view stitching mode and the target resolution and having the video picture type in response to a touch operation on the third preset interactive control in the resource parameter adjustment interface, or outputting a naked eye 3D resource having the rendering result mode and the target resolution and having the video picture type in response to a touch operation on the third preset interactive control in the resource parameter adjustment interface, or outputting a naked eye 3D resource having the multi-view stitching mode and the target resolution and having the sequence frame picture type in response to a touch operation on the third preset interactive control in the resource parameter adjustment interface, or outputting a naked eye 3D resource having the rendering result mode and the target resolution and having the sequence frame picture type in response to a touch operation on the third preset interactive control in the resource parameter adjustment interface.
In an exemplary embodiment of the present disclosure, the processing apparatus of naked eye 3D resources further includes:
The first naked eye 3D resource display module can be used for outputting naked eye 3D resources with program resource categories to target display equipment, displaying the naked eye 3D resources with the program resource categories through the target display equipment, and/or
The second naked eye 3D resource display module can be used for outputting the naked eye 3D resource with the video resource category and/or the sequence frame resource category to the target display equipment, and displaying the naked eye 3D resource with the video resource category and/or the sequence frame resource category through the target display equipment.
In an exemplary embodiment of the present disclosure, the processing apparatus of naked eye 3D resources further includes:
The current interaction instruction determining module can be used for responding to the input current interaction gesture, acquiring hand state information and finger motion direction, and determining a current interaction instruction required to be executed by a target interaction model in the naked eye 3D resource based on the hand state information and the finger motion direction;
The model state switching module can be used for controlling a target interaction model in the naked eye 3D resource to execute the current interaction instruction, switching the original model state of the target interaction model into a target model state corresponding to the current interaction instruction, and displaying a model animation generated by executing the current interaction instruction.
In one exemplary embodiment of the present disclosure, the current interaction gesture includes at least one of a human interaction gesture, a somatosensory controller interaction gesture, an external device interaction gesture, and a handle interaction gesture.
In an exemplary embodiment of the disclosure, controlling the target interaction model in the naked eye 3D resource to execute the current interaction instruction includes controlling the target interaction model in the naked eye 3D resource to execute an up-down movement instruction and/or a left-right movement instruction, and/or controlling the target interaction model in the naked eye 3D resource to execute a rotation instruction, and/or controlling the target interaction model in the naked eye 3D resource to execute an explosion instruction.
In an exemplary embodiment of the disclosure, controlling the target interaction model in the naked eye 3D resource to execute the explosion instruction includes controlling a model composition sub-module of the target interaction model in the naked eye 3D resource to move according to a preset direction and a preset angle so as to achieve an explosion effect, wherein the preset direction includes a free movement direction or a coordinate axis movement direction, and the preset angle includes a local angle of the model composition sub-module relative to the target interaction model.
In an exemplary embodiment of the present disclosure, the processing apparatus of naked eye 3D resources further includes:
the model state recovery module can be used for controlling the target interaction model to be recovered from the target model state to the original model state at intervals of preset time.
The specific details of each module in the processing device for naked eye 3D resources are described in detail in the corresponding processing method for naked eye 3D resources, so that details are not repeated here.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied. Furthermore, although the steps of the methods in the present disclosure are depicted in a particular order in the drawings, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
Those skilled in the art will appreciate that the various aspects of the present disclosure may be implemented as a system, method, or program product. Accordingly, aspects of the present disclosure may be embodied in the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein collectively as a "circuit," "module," or "system."
An electronic apparatus 3000 according to this embodiment of the present disclosure is described below with reference to fig. 30. The electronic device 3000 shown in fig. 30 is merely an example, and should not impose any limitation on the functionality and scope of use of the embodiments of the present disclosure. As shown in fig. 30, the electronic device 3000 is in the form of a general purpose computing device. The components of the electronic device 3000 may include, but are not limited to, the at least one processing unit 3010, the at least one storage unit 3020 described above, a bus 3030 connecting different system components (including the storage unit 3020 and the processing unit 3010), and a display unit 3040.
Wherein the storage unit stores program code executable by the processing unit 3010 such that the processing unit 3010 performs steps according to various exemplary embodiments of the present disclosure described in the "exemplary methods" section of the present specification. For example, the processing unit 3010 may perform step S110 shown in fig. 1, where a new generation scene corresponding to a target interaction model is created and the target interaction model is loaded in the new generation scene to obtain an original resource scene, step S120, where a preset resource library is called to adjust the original resource scene to obtain a target resource scene, and the number of viewpoints of the target resource scene is determined according to target device parameters in the new generation scene, step S130, where lens parameters of a virtual lens group corresponding to the target resource scene are configured according to the number of viewpoints, and the virtual lens group is configured according to the lens parameters, and step S140, where naked eye 3D resources corresponding to the target interaction model are generated based on the target resource scene, the virtual lens group, and the zero point position.
The storage unit 3020 may include readable media in the form of volatile memory units, such as Random Access Memory (RAM) 30201 and/or cache memory 30202, and may further include Read Only Memory (ROM) 30203. The storage unit 3020 may also include a program/utility 30204 having a set (at least one) of program modules 30205, such program modules 30205 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Bus 3030 may be a bus representing one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 3000 may also communicate with one or more external devices 3100 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 3000, and/or any devices (e.g., routers, modems, etc.) that enable the electronic device 3000 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 3050. Also, electronic device 3000 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the internet via network adapter 3060. As shown, network adapter 3060 communicates with other modules of electronic device 3000 over bus 3030. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 3000, including, but not limited to, microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (may be a CD-ROM, a U-disk, a mobile hard disk, etc.) or on a network, including several instructions to cause a computing device (may be a personal computer, a server, a terminal device, or a network device, etc.) to perform the method according to the embodiments of the present disclosure.
In an exemplary embodiment of the present disclosure, a computer-readable storage medium having stored thereon a program product capable of implementing the method described above in the present specification is also provided. In some possible implementations, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to carry out the steps according to the various exemplary embodiments of the disclosure as described in the "exemplary methods" section of this specification, when the program product is run on the terminal device.
A program product for implementing the above-described method according to an embodiment of the present disclosure may employ a portable compact disc read-only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present disclosure is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of a readable storage medium include an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The computer readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
Furthermore, the above-described figures are only schematic illustrations of processes included in the method according to the exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily appreciated that the processes shown in the above figures do not indicate or limit the temporal order of these processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, for example, among a plurality of modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.