Method and device for rendering and processing a virtual scene online in a live broadcast room

Technical Field
The invention belongs to the technical field of cloud rendering simulation, and particularly relates to a method and a device for rendering and processing a virtual scene online in a live broadcast room.
Background
With the popularity of applications that support virtual scenes (such as virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooter games, and virtual live broadcast rooms), a need has arisen to watch live video of virtual scenes within such applications.
Live video broadcast of a virtual scene within an application that supports virtual scenes is also called in-client live broadcast, for example, live broadcast in a game client. In the related art, a live broadcast interface is provided in the game application; when a spectator user chooses to watch a live game broadcast, the game application displays, through the live broadcast interface, the live picture recorded by the anchor terminal.
In the related art, only the live picture corresponding to the viewing angle selected by the anchor can be seen through the live broadcast interface in the application. Because different users usually focus on different parts of the virtual scene, a single, unchangeable live picture cannot meet the needs of different audiences, and the display effect of the live broadcast picture is therefore poor.
Disclosure of Invention
The invention aims to provide a method and a device for rendering and processing a virtual scene online in a live broadcast room.
In order to solve the technical problems, the invention is realized by the following technical scheme:
The invention relates to a method for rendering and processing a virtual scene online in a live broadcast room, which comprises the following steps:
step S1: acquiring a three-dimensional simulation scene model on a virtual simulation live broadcast platform according to preset scene model data information and modeling software;
step S2: importing the model in the WRL file format to generate a corresponding VRML prototype;
step S3: carrying out lightweight conversion on the source model file;
step S4: simplifying the model through a simplification algorithm;
step S5: performing three-dimensional image rendering on the simplified model by using a graphics engine;
step S6: collecting live broadcast frames with a camera;
step S7: fusing the rendered three-dimensional scene, as a background, with the live broadcast picture;
step S8: rendering and displaying the fused video image.
As a preferable technical solution, in step S1, the three-dimensional simulation scene model includes a three-dimensional model drawn with computer assistance, a three-dimensional model obtained by a scanning device, a three-dimensional model captured by an image capture device (with or without post-editing), and a three-dimensional model synthesized from captured images with post-editing.
As a preferable technical solution, in step S3, the lightweight conversion of the source model file comprises the following steps:
step S31: calling an interface provided by the DGN Direct component to export the geometric information of the first member in the source model file as triangular patch data;
step S32: calling the API (application programming interface) of the HOOPS Exchange component to create a model segment and store the exported geometric information in the segment;
step S33: calling an interface of the DGN Direct component to read the member's attribute data, and calling an interface of the HOOPS Exchange component to store the attribute data in the created segment;
step S34: calling the interface provided by the DGN Direct component to export the geometric information of the next member in the source model file as triangular patch data, and repeating steps S32 and S33;
step S35: grouping the segments according to application requirements.
As a preferable technical solution, in step S4, the simplification algorithm first tiles the parameterized solid model and then divides the mesh data into two cases: surfaces whose points all lie on the boundary line, and surfaces whose points do not all lie on the boundary line. Points on the boundary line are simplified directly; for surfaces whose points are not all on the boundary line, the original mesh data is simplified layer by layer, from left to right and from bottom to top, starting from the boundary line.
As a preferred technical solution, in step S5, the graphics engine is deployed on a server, and the rendering and display of the model are completed on the server.
As a preferred technical solution, in step S7, after the rendered three-dimensional scene is fused, as a background, with the live broadcast picture, the fusion effect needs to be tested. The test uses a simulated DCS system: before testing, a test case is entered into the simulated DCS system in advance, the engineering configuration is compiled, and the compiled configuration is downloaded to the simulated DCS system, which then provides the test case for the scene under test.
The invention also relates to a device for rendering and processing a virtual scene online in a live broadcast room, which comprises a file transmission module, a server rendering module, a camera, a result output module and an interaction module;
the file transmission module is used for a user to upload a model or drawing to the server; the file transmission module is connected to the server rendering module sequentially through a format conversion module and a lightweight conversion module; the format conversion module is used for converting the imported WRL file into a corresponding VRML prototype; the lightweight conversion module is used for carrying out lightweight conversion on the source model file and simplifying the model through a simplification algorithm;
the server rendering module is used for performing three-dimensional image rendering on the simplified model by using a graphics engine;
the camera is used for acquiring the video image of the live broadcast room in real time; the camera is connected to the scene fusion module through a portrait extraction module; the portrait extraction module is used for extracting the human body image from the video image;
the scene fusion module is used for fusing the rendered three-dimensional scene with the extracted human body image;
the result output module is used for outputting the final scene rendering effect image of the live broadcast room to the display module;
and the interaction module is used for realizing information interaction with the live broadcast room.
As a preferable technical solution, the display content of the display module comprises three-dimensional models, two-dimensional drawings, two-dimensional pictures, text, animation, video and graphic images.
As a preferred technical solution, the interaction module includes a menu interaction unit and a graphic operation interaction unit; the menu interaction unit issues an instruction by means of a menu or a button and determines the interactive content through the instruction, and the graphic operation interaction unit realizes interaction through operations on the model in the graphic area.
As a preferred technical solution, the interaction modes in the interaction module include selection, hiding, cutting, moving, rotating, zooming and playing; input, editing and deletion of text, symbols and marks; graphic capture, drawing, editing and deletion; adjustment of brightness, transparency, light and shadow effects, projection mode, definition, rendering mode and layout; model color replacement; view switching; and interference checking.
The invention has the following beneficial effects:
According to the method, a three-dimensional simulation scene model is obtained from preset scene model data and modeling software, the WRL file is used for rendering the three-dimensional scene, and the final rendering effect is displayed in combination with the live broadcast picture, thereby improving the display effect of the live broadcast room and enhancing the live broadcast experience.
Of course, it is not necessary for any product in which the invention is practiced to achieve all of the above-described advantages at the same time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a diagram of the steps of the method for rendering and processing a virtual scene online in a live broadcast room according to the present invention;
FIG. 2 is a schematic structural diagram of the device for rendering and processing a virtual scene online in a live broadcast room according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIG. 1, the present invention provides a method for rendering and processing a virtual scene online in a live broadcast room, comprising the following steps:
step S1: acquiring a three-dimensional simulation scene model on a virtual simulation live broadcast platform according to preset scene model data information and modeling software;
step S2: importing the model in the WRL file format to generate a corresponding VRML prototype;
step S3: carrying out lightweight conversion on the source model file;
step S4: simplifying the model through a simplification algorithm, which removes redundant scene content and makes the model convenient to store;
step S5: performing three-dimensional image rendering on the simplified model by using a graphics engine, storing the rendered three-dimensional scene, and saving a set label in memory, so that the scene can conveniently be changed or modified next time;
step S6: collecting live broadcast frames with the camera;
step S7: fusing the rendered three-dimensional scene, as a background, with the live broadcast picture;
step S8: rendering and displaying the fused video image; the fused scene can be broadcast live in any setting, such as space, animation or the ocean, which improves the live broadcast effect of the live broadcast room and makes the anchor's room more engaging.
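The eight steps above can be sketched as a single processing loop. The following minimal Python sketch is illustrative only: every function name is a placeholder invented for this example (the invention names no concrete API), and each stage merely tags the data so the S1-S8 flow is visible.

```python
def acquire_model(scene_data):
    """S1: obtain the scene model from preset model data."""
    return {"model": scene_data}

def import_wrl(model):
    """S2: WRL import, producing a VRML prototype."""
    model["format"] = "VRML"
    return model

def lightweight_convert(model):
    """S3: lightweight conversion of the source model file."""
    model["lightweight"] = True
    return model

def simplify(model):
    """S4: simplification algorithm removes redundant content."""
    model["simplified"] = True
    return model

def render_background(model):
    """S5: the graphics engine renders the three-dimensional scene."""
    return ("background", model["model"])

def fuse(background, live_frame):
    """S7: fuse the rendered scene, as background, with a live frame."""
    return {"background": background, "frame": live_frame}

def pipeline(scene_data, live_frames):
    """S6/S8: camera frames in, fused frames out for display."""
    model = simplify(lightweight_convert(import_wrl(acquire_model(scene_data))))
    background = render_background(model)
    return [fuse(background, frame) for frame in live_frames]
```

For example, `pipeline("ocean", camera_frames)` yields one fused frame per camera frame, each carrying the rendered scene as its background.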
In step S1, the three-dimensional simulation scene model includes a three-dimensional model drawn with computer assistance, a three-dimensional model obtained by a scanning device, a three-dimensional model captured by an image capture device (with or without post-editing), and a three-dimensional model synthesized from captured images with post-editing.
In step S3, the lightweight conversion of the source model file proceeds as follows:
step S31: calling an interface provided by the DGN Direct component to export the geometric information of the first member in the source model file as triangular patch data;
step S32: calling the API (application programming interface) of the HOOPS Exchange component to create a model segment and store the exported geometric information in the segment;
step S33: calling an interface of the DGN Direct component to read the member's attribute data, and calling an interface of the HOOPS Exchange component to store the attribute data in the created segment;
step S34: calling the interface provided by the DGN Direct component to export the geometric information of the next member in the source model file as triangular patch data, and repeating steps S32 and S33;
step S35: grouping the segments according to application requirements.
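The S31-S35 loop can be pictured as follows. DGN Direct and HOOPS Exchange are real commercial SDKs with C/C++ interfaces; the classes below are stand-in stubs that only mirror the control flow described above (export member geometry, create a segment, attach attributes, then group), not the vendors' actual APIs.

```python
class DgnSource:
    """Stand-in for a DGN Direct handle over the source model file."""
    def __init__(self, members):
        self.members = members            # each member: (geometry, attributes)

    def export_triangles(self, member):
        """S31/S34: export a member's geometry as triangular patch data."""
        return list(member[0])

    def read_attributes(self, member):
        """S33: read the member's attribute data."""
        return dict(member[1])

class ExchangeModel:
    """Stand-in for a HOOPS Exchange model composed of segments."""
    def __init__(self):
        self.segments = []

    def create_segment(self, triangles):
        """S32: create a model segment holding the exported geometry."""
        self.segments.append({"triangles": triangles, "attributes": {}})
        return self.segments[-1]

    def attach_attributes(self, segment, attrs):
        """S33: store attribute data in the created segment."""
        segment["attributes"].update(attrs)

def convert(source, group_key):
    """Run S31-S34 over every member, then group segments (S35)."""
    model = ExchangeModel()
    for member in source.members:
        segment = model.create_segment(source.export_triangles(member))
        model.attach_attributes(segment, source.read_attributes(member))
    groups = {}
    for segment in model.segments:
        groups.setdefault(segment["attributes"].get(group_key), []).append(segment)
    return groups
```

Grouping by an attribute key (here the hypothetical `group_key`) stands in for "grouping the segments according to application requirements" in step S35.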
In step S4, the simplification algorithm divides the mesh data into two cases: surfaces whose points all lie on the boundary line, and surfaces whose points do not all lie on the boundary line. Points on the boundary line are simplified directly; for surfaces whose points are not all on the boundary line, the original mesh data is simplified layer by layer, from left to right and from bottom to top, starting from the boundary line.
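The split in step S4 presupposes knowing which mesh points lie on the boundary. One standard way to find them, sketched below under the assumption that the mesh is a list of vertex-index triangles (the patent does not specify the mesh format): an edge used by exactly one triangle is a boundary edge, and its endpoints are boundary vertices.

```python
from collections import Counter

def boundary_vertices(triangles):
    """Return the set of vertices lying on boundary edges of a triangle mesh."""
    edge_count = Counter()
    for a, b, c in triangles:
        for edge in ((a, b), (b, c), (c, a)):
            edge_count[tuple(sorted(edge))] += 1
    boundary = set()
    for (u, v), n in edge_count.items():
        if n == 1:                  # edge shared by only one triangle: boundary
            boundary.update((u, v))
    return boundary

def split_vertices(triangles):
    """Split mesh vertices into the two cases of step S4:
    points on the boundary line, and points not on it."""
    all_vertices = {v for tri in triangles for v in tri}
    on_boundary = boundary_vertices(triangles)
    return on_boundary, all_vertices - on_boundary
```

The boundary set would be simplified directly, while the interior set would be processed layer by layer inward from the boundary, as the paragraph above describes.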
In step S5, the graphics engine is deployed on the server, and the rendering and presentation of the model are completed on the server.
In step S7, after the rendered three-dimensional scene is fused, as a background, with the live broadcast picture, the fusion effect needs to be tested. The test uses a simulated DCS system: before testing, a test case is entered into the simulated DCS system in advance, the engineering configuration is compiled, and the compiled configuration is downloaded to the simulated DCS system, which then provides the test case for the scene under test.
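A hedged sketch of that test workflow, with every name invented for this example (the patent specifies nothing about the DCS interface): a test case is loaded in advance, the compiled engineering configuration is downloaded, and only then can the cases be run against the scene under test.

```python
class SimulatedDCS:
    """Placeholder for the simulated DCS system used in fusion testing."""
    def __init__(self):
        self.cases = []
        self.config = None

    def load_case(self, case):
        """Test case entered into the simulated DCS in advance."""
        self.cases.append(case)

    def download(self, compiled_config):
        """Compiled engineering configuration downloaded to the DCS."""
        self.config = compiled_config

def compile_config(raw_config):
    """Stand-in for compiling the engineering configuration."""
    return {"compiled": True, "source": raw_config}

def run_fusion_test(dcs, scene):
    """Provide each preloaded test case to the scene under test."""
    assert dcs.config and dcs.config["compiled"], "download the compiled configuration first"
    return [{"case": case, "scene": scene} for case in dcs.cases]
```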
Referring to FIG. 2, the invention also provides a device for rendering and processing a virtual scene online in a live broadcast room, which comprises a file transmission module, a server rendering module, a camera, a result output module and an interaction module;
the file transmission module is used for a user to upload a model or drawing to the server; the file transmission module is connected to the server rendering module sequentially through a format conversion module and a lightweight conversion module; the format conversion module is used for converting the imported WRL file into a corresponding VRML prototype; the lightweight conversion module is used for carrying out lightweight conversion on the source model file and simplifying the model through a simplification algorithm;
the server rendering module is used for performing three-dimensional image rendering on the simplified model by using a graphics engine;
the camera is used for acquiring the video image of the live broadcast room in real time; the camera is connected to the scene fusion module through a portrait extraction module; the portrait extraction module is used for extracting the human body image from the video image;
the scene fusion module is used for fusing the rendered three-dimensional scene with the extracted human body image;
the result output module is used for outputting the final scene rendering effect image of the live broadcast room to the display module; the display content of the display module comprises three-dimensional models, two-dimensional drawings, two-dimensional pictures, text, animation, video and graphic images;
the interaction module is used for realizing information interaction with the live broadcast room and comprises a menu interaction unit and a graphic operation interaction unit; the menu interaction unit issues an instruction by means of a menu or a button and determines the interactive content through the instruction, and the graphic operation interaction unit realizes interaction through operations on the model in the graphic area; the interaction modes in the interaction module include selection, hiding, cutting, moving, rotating, zooming and playing; input, editing and deletion of text, symbols and marks; graphic capture, drawing, editing and deletion; adjustment of brightness, transparency, light and shadow effects, projection mode, definition, rendering mode and layout; model color replacement; view switching; and interference checking.
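The cooperation between the portrait extraction module and the scene fusion module described above amounts to a per-pixel select: where the extracted-person mask is set, the fused frame takes the camera pixel; elsewhere it takes the rendered background. A minimal sketch, using plain Python lists of rows as stand-ins for image buffers (a real device would operate on GPU or NumPy buffers):

```python
def fuse_scene(background, person, mask):
    """Fuse the rendered scene with the extracted human image:
    take the person pixel where mask == 1, the background pixel elsewhere."""
    fused = []
    for bg_row, person_row, mask_row in zip(background, person, mask):
        fused.append([p if m else bg
                      for bg, p, m in zip(bg_row, person_row, mask_row)])
    return fused
```

With a 0/1 mask produced by the portrait extraction module, the result output module would then send `fuse_scene(rendered_scene, camera_frame, mask)` to the display module.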
It should be noted that, in the above system embodiment, each included unit is only divided according to functional logic, but is not limited to the above division as long as the corresponding function can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
In addition, it is understood by those skilled in the art that all or part of the steps in the method for implementing the embodiments described above may be implemented by a program instructing associated hardware, and the corresponding program may be stored in a computer-readable storage medium.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.