CN113781660A - Method and device for rendering and processing virtual scene on line in live broadcast room - Google Patents

Method and device for rendering and processing virtual scene on line in live broadcast room

Info

Publication number
CN113781660A
Authority
CN
China
Prior art keywords: rendering, model, module, scene, live broadcast
Prior art date: 2021-09-04
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111034846.3A
Other languages
Chinese (zh)
Inventor
邬学丹
杨俊�
何晓东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai White Rabbit Network Technology Co., Ltd.
Original Assignee
Shanghai White Rabbit Network Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2021-09-04
Filing date: 2021-09-04
Publication date: 2021-12-10
Application filed by Shanghai White Rabbit Network Technology Co., Ltd.
Priority to CN202111034846.3A
Publication of CN113781660A
Current legal status: Withdrawn


Abstract

The invention discloses a method and a device for rendering and processing a virtual scene online in a live broadcast room, and relates to the technical field of cloud rendering simulation. The method comprises the following steps: constructing a three-dimensional simulation scene model from preset scene model data and modeling software; importing the model in WRL file format and performing lightweight conversion; simplifying the model with a simplification algorithm; rendering the simplified model as a three-dimensional image with a graphics engine; capturing live broadcast frames with a camera; fusing the rendered three-dimensional scene, as the background, with the live broadcast picture; and rendering and displaying the fused video image. The method obtains the three-dimensional simulation scene model from preset scene model data and modeling software, renders the three-dimensional scene from the WRL file, and combines the rendered scene with the live broadcast picture for final display, thereby improving the display quality and overall effect of the live broadcast room.

Description

Method and device for rendering and processing virtual scene on line in live broadcast room
Technical Field
The invention belongs to the technical field of cloud rendering simulation, and particularly relates to a method and a device for rendering and processing a virtual scene online in a live broadcast room.
Background
With the growing popularity of applications that support virtual scenes (such as virtual reality applications, three-dimensional map programs, military simulation programs, first-person shooter games, and virtual live broadcast rooms), there is an increasing need to watch live video of virtual scenes within such applications.
Live video broadcasting of a virtual scene inside an application that supports virtual scenes is also called in-client live broadcasting, for example live broadcasting inside a game client. In the related art, the game application provides a live broadcast interface, and when a spectator chooses to watch a game live broadcast, the game application displays the picture recorded by the anchor terminal through that interface.
In the related art, the live broadcast interface inside the application shows only the picture corresponding to the viewing angle selected by the anchor. Because different users usually focus on different parts of the virtual scene, a single picture cannot satisfy all audiences; moreover, the live broadcast scene is fixed and cannot be changed, so the display effect of the live broadcast picture is poor.
Disclosure of Invention
The aim of the invention is to provide a method and a device for rendering and processing a virtual scene online in a live broadcast room.
To solve the above technical problems, the invention is realized by the following technical solutions:
The invention relates to a method for rendering and processing a virtual scene online in a live broadcast room, which comprises the following steps (a code sketch of the overall flow follows the list):
step S1: acquiring a three-dimensional simulation scene model on a virtual simulation live broadcast platform according to preset scene model data and modeling software;
step S2: importing the model in WRL file format to generate a corresponding VRML prototype;
step S3: carrying out lightweight conversion on the source model file;
step S4: simplifying the model with a simplification algorithm;
step S5: rendering the simplified model as a three-dimensional image with a graphics engine;
step S6: capturing live broadcast frames with a camera;
step S7: fusing the rendered three-dimensional scene, as the background, with the live broadcast picture;
step S8: rendering and displaying the fused video image.
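For orientation only, the following Python sketch shows one way the flow of steps S1 to S8 could be wired together in code. Every function name below is a hypothetical placeholder standing in for the corresponding step; it is not part of the disclosed method.

```python
# Hypothetical end-to-end sketch of steps S1-S8; each function is a placeholder.

def build_scene_model(preset_data):        # S1: build the 3D simulation scene model
    return {"source": preset_data}

def import_wrl(model):                     # S2: import WRL and produce a VRML prototype
    return {"vrml_prototype": model}

def lightweight_convert(prototype):        # S3: lightweight conversion of the source file
    return prototype

def simplify(model):                       # S4: simplify the model with a simplification algorithm
    return model

def render_3d(model):                      # S5: render the simplified model with a graphics engine
    return "rendered_background"

def capture_live_frame():                  # S6: camera captures a live broadcast frame
    return "live_frame"

def fuse(background, frame):               # S7: fuse rendered scene (background) with the live frame
    return (background, frame)

def display(fused_frame):                  # S8: render and display the fused video image
    print("display:", fused_frame)

if __name__ == "__main__":
    model = build_scene_model("preset_scene_model_data")
    background = render_3d(simplify(lightweight_convert(import_wrl(model))))
    display(fuse(background, capture_live_frame()))
```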
As a preferred technical solution, in step S1 the three-dimensional simulation scene model includes a three-dimensional model drawn with computer assistance, a three-dimensional model obtained with a scanning device, a three-dimensional model obtained by image capture with an image capture device or by image capture plus post-editing, and a three-dimensional model obtained by image capture plus post-editing and synthesis.
As a preferred technical solution, in step S3 the lightweight conversion of the source model file comprises the following steps (a code sketch follows the list):
step S31: calling an interface provided by the DGN Direct component to export the geometric information of the first member in the source model file as triangular patch data;
step S32: calling the API (application programming interface) of the HOOPS Exchange component to create a model segment and store the member's geometric information in the segment;
step S33: calling an interface of the DGN Direct component to read the member's attribute data, and calling an interface of the HOOPS Exchange component to store the attribute data in the created segment;
step S34: calling the interface provided by the DGN Direct component to export the geometric information of the next member in the source model file as triangular patch data, and repeating steps S32 and S33;
step S35: grouping the segments according to application requirements.
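As a rough picture of the S31 to S35 loop, the sketch below iterates over the members of a source model, exporting geometry and attributes member by member into segments and finally grouping them. The `dgn_direct` and `hoops_exchange` objects are hypothetical wrappers invented for this sketch; the real DGN Direct and HOOPS Exchange interfaces are different and are not reproduced here.

```python
# Hypothetical sketch of steps S31-S35; the two wrapper objects are assumed to
# expose simple helpers and do not reflect the real SDK APIs.

def lightweight_convert(dgn_direct, hoops_exchange, source_model, group_key):
    segments = []
    for member in dgn_direct.members(source_model):            # S31 / S34: walk the members
        triangles = dgn_direct.export_triangles(member)        # export geometry as triangular patch data
        segment = hoops_exchange.create_segment()              # S32: create a model segment
        hoops_exchange.store_geometry(segment, triangles)      #      store the geometry in the segment
        attributes = dgn_direct.read_attributes(member)        # S33: read the member's attribute data
        hoops_exchange.store_attributes(segment, attributes)   #      attach the attributes to the segment
        segments.append(segment)

    grouped = {}                                               # S35: group segments by application need
    for segment in segments:
        grouped.setdefault(group_key(segment), []).append(segment)
    return grouped
```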
As a preferred technical solution, in step S4 the simplification algorithm first tessellates the parameterized solid model and then divides the mesh data into two classes: surfaces whose points all lie on the boundary line, and surfaces whose points do not all lie on the boundary line. Points on the boundary line are simplified directly; for points that are not all on the boundary line, the original mesh data is simplified layer by layer, from left to right and from bottom to top, starting from the boundary line.
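The boundary/interior split can be pictured on a regular grid of points as below. This toy sketch only illustrates the idea of thinning boundary points directly and sweeping interior points layer by layer; it is an illustration under that assumption, not the simplification algorithm disclosed here.

```python
# Toy illustration of the boundary/interior split: 'grid' is a rows x cols
# list of points; boundary points are thinned directly, interior points are
# thinned layer by layer, left to right and bottom to top.

def simplify_grid(grid, keep_every=2):
    rows, cols = len(grid), len(grid[0])

    def on_boundary(r, c):
        return r in (0, rows - 1) or c in (0, cols - 1)

    kept = []
    boundary = [(r, c) for r in range(rows) for c in range(cols) if on_boundary(r, c)]
    kept.extend(boundary[::keep_every])                 # boundary points: simplified directly

    for r in range(1, rows - 1):                        # bottom to top through the interior layers
        layer = [(r, c) for c in range(1, cols - 1)]    # left to right within a layer
        kept.extend(layer[::keep_every])

    return [grid[r][c] for r, c in kept]

# usage sketch: a 5 x 5 grid of (x, y) points
points = [[(x, y) for x in range(5)] for y in range(5)]
print(len(simplify_grid(points)))   # fewer points than the original 25
```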
As a preferred technical solution, in step S5 the graphics engine is deployed on a server, and the rendering and display of the model are completed on the server.
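For illustration, a server-side deployment of the rendering step can be as simple as an HTTP endpoint that invokes the engine and returns the rendered frame to the live broadcast room client. The `HypotheticalRenderer` class below is a stand-in; the actual graphics engine and its API are not specified by this disclosure.

```python
# Minimal sketch of server-side rendering: an HTTP GET for /<model_id> returns
# the bytes of a frame rendered on the server. The renderer is a placeholder.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HypotheticalRenderer:
    def render(self, model_id: str) -> bytes:
        # a real graphics engine would rasterize the simplified model here
        return b"frame-bytes-for-" + model_id.encode()

engine = HypotheticalRenderer()

class RenderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        frame = engine.render(self.path.lstrip("/"))
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(frame)))
        self.end_headers()
        self.wfile.write(frame)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RenderHandler).serve_forever()
```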
As a preferred technical solution, in step S7, after the rendered three-dimensional scene is fused, as the background, with the live broadcast picture, the fusion effect needs to be tested; the test uses a simulated DCS system. Before the test, a test case must be entered into the simulated DCS system in advance, the engineering configuration must be compiled, and the compiled configuration must be downloaded to the simulated DCS system, which then provides the test case for the scene under test.
The invention also relates to a device for rendering and processing a virtual scene online in a live broadcast room, comprising a file transmission module, a server rendering module, a camera, a result output module and an interaction module, wherein the file transmission module transmits files to the server rendering module;
the file transmission module is used for uploading the user's model or drawing to the server; the file transmission module is connected to the server rendering module through the format conversion module and the lightweight conversion module in sequence; the format conversion module converts the imported WRL file into a corresponding VRML prototype; the lightweight conversion module performs lightweight conversion on the source model file and simplifies the model with a simplification algorithm;
the server rendering module is used for rendering the simplified model as a three-dimensional image with a graphics engine;
the camera is used for acquiring the video image of the live broadcast room in real time; the camera is connected to the scene fusion module through the portrait extraction module; the portrait extraction module cuts the human figure out of the video image;
the scene fusion module is used for fusing the rendered three-dimensional scene with the extracted human figure;
the result output module is used for outputting the final scene rendering effect image of the live broadcast room to the display module;
and the interaction module is used for information interaction with the live broadcast room.
As a preferred technical solution, the display content of the display module includes three-dimensional models, two-dimensional drawings, two-dimensional pictures, text, animation, video and graphic images.
As a preferred technical solution, the interaction module includes a menu interaction unit and a graphic operation interaction unit; the menu interaction unit issues instructions through menus or buttons and determines the interaction content from the instructions; the graphic operation interaction unit implements interaction with operations performed on the model in the graphic area.
As a preferred technical solution, the interaction modes in the interaction module include the following interactive operation controls: selection, hiding, cutting, moving, rotating, zooming, playing, text input, text editing, text deletion, symbol input, symbol editing, symbol deletion, mark input, mark editing, mark deletion, graphic capture, graphic drawing, graphic editing, graphic deletion, brightness adjustment, transparency adjustment, light and shadow effect adjustment, projection mode adjustment, definition adjustment, rendering mode adjustment, model color replacement, layout adjustment, view switching, and interference checking.
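One simple way to realize the menu interaction unit is a dispatch table that maps each instruction issued by a menu or button to a handler acting on the current model state. The sketch below is hypothetical; it shows only a few of the interaction modes listed above and is not the interaction module itself.

```python
# Hypothetical dispatch sketch for the menu interaction unit; only a handful of
# the listed interaction modes are shown, acting on a simple model-state dict.

def hide(state, **_):
    state["hidden"] = True
    return state

def rotate(state, angle=0.0, **_):
    state["rotation"] = state.get("rotation", 0.0) + angle
    return state

def zoom(state, factor=1.0, **_):
    state["scale"] = state.get("scale", 1.0) * factor
    return state

def set_transparency(state, alpha=1.0, **_):
    state["alpha"] = alpha
    return state

COMMANDS = {"hide": hide, "rotate": rotate, "zoom": zoom, "transparency": set_transparency}

def handle_menu_command(state, command, **params):
    """Route a menu/button instruction to its handler; unknown commands are ignored."""
    handler = COMMANDS.get(command)
    return handler(state, **params) if handler else state

# usage sketch
state = {"model": "scene_model"}
state = handle_menu_command(state, "rotate", angle=90.0)
state = handle_menu_command(state, "zoom", factor=1.5)
print(state)
```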
The invention has the following beneficial effects:
according to the method, the preset scene model data information and modeling software are introduced to obtain the three-dimensional simulation scene model, the WRL file is used for rendering the three-dimensional scene, and the final rendering effect display is performed by combining with the live broadcast picture, so that the display effect of the live broadcast room is improved, and the live broadcast effect is increased.
Of course, it is not necessary for any product in which the invention is practiced to achieve all of the above-described advantages at the same time.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a step diagram of the method for rendering and processing a virtual scene online in a live broadcast room according to the present invention;
FIG. 2 is a schematic structural diagram of the device for rendering and processing a virtual scene online in a live broadcast room according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIG. 1, the present invention is a method for rendering and processing a virtual scene online in a live broadcast room, comprising the following steps (a fusion sketch in code follows the list):
step S1: acquiring a three-dimensional simulation scene model on a virtual simulation live broadcast platform according to preset scene model data and modeling software;
step S2: importing the model in WRL file format to generate a corresponding VRML prototype;
step S3: carrying out lightweight conversion on the source model file;
step S4: simplifying the model with a simplification algorithm, which removes redundant scene content and makes the model easier to store;
step S5: rendering the simplified model as a three-dimensional image with a graphics engine, saving the rendered three-dimensional scene, and storing it in memory under a preset label so that the scene can conveniently be changed or modified later;
step S6: capturing live broadcast frames with a camera;
step S7: fusing the rendered three-dimensional scene, as the background, with the live broadcast picture;
step S8: rendering and displaying the fused video image. The fused scene allows live broadcasting in any setting, such as outer space, an animated world or the ocean, which improves the live broadcast effect of the live broadcast room and makes the anchor's room more engaging.
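As a rough illustration of step S7, the sketch below composites the human figure extracted from the live frame over the rendered three-dimensional background with an alpha mask. NumPy is used only for convenience; the person mask is assumed to come from a separate portrait extraction step that is not shown, and the function names are hypothetical.

```python
# Hypothetical fusion sketch: alpha-composite the masked live-frame pixels
# (the extracted person) over the rendered 3D background.
import numpy as np

def fuse_frame(rendered_bg, live_frame, person_mask):
    """rendered_bg, live_frame: H x W x 3 uint8 images of equal size;
    person_mask: H x W float array in [0, 1], 1 where the person is."""
    alpha = person_mask[..., None].astype(np.float32)            # H x W x 1, broadcast over channels
    fused = alpha * live_frame.astype(np.float32) + (1.0 - alpha) * rendered_bg.astype(np.float32)
    return fused.astype(np.uint8)

# usage sketch with synthetic data
if __name__ == "__main__":
    h, w = 720, 1280
    background = np.full((h, w, 3), 60, dtype=np.uint8)    # stand-in for the rendered scene
    frame = np.full((h, w, 3), 200, dtype=np.uint8)        # stand-in for the camera frame
    mask = np.zeros((h, w), dtype=np.float32)
    mask[200:600, 500:800] = 1.0                           # pretend this region is the person
    fused = fuse_frame(background, frame, mask)
    print(fused.shape, fused.dtype)                        # (720, 1280, 3) uint8
```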
In step S1, the three-dimensional simulation scene model includes a three-dimensional model drawn with computer assistance, a three-dimensional model obtained with a scanning device, a three-dimensional model obtained by image capture with an image capture device or by image capture plus post-editing, and a three-dimensional model obtained by image capture plus post-editing and synthesis.
In step S3, the lightweight conversion of the source model file comprises the following steps:
step S31: calling an interface provided by the DGN Direct component to export the geometric information of the first member in the source model file as triangular patch data;
step S32: calling the API (application programming interface) of the HOOPS Exchange component to create a model segment and store the member's geometric information in the segment;
step S33: calling an interface of the DGN Direct component to read the member's attribute data, and calling an interface of the HOOPS Exchange component to store the attribute data in the created segment;
step S34: calling the interface provided by the DGN Direct component to export the geometric information of the next member in the source model file as triangular patch data, and repeating steps S32 and S33;
step S35: grouping the segments according to application requirements.
In step S4, the simplification algorithm divides the mesh data into two classes: surfaces whose points all lie on the boundary line, and surfaces whose points do not all lie on the boundary line. Points on the boundary line are simplified directly; for points that are not all on the boundary line, the original mesh data is simplified layer by layer, from left to right and from bottom to top, starting from the boundary line.
In step S5, the graphics engine is deployed on the server, and the rendering and display of the model are completed on the server.
In step S7, after the rendered three-dimensional scene is fused, as the background, with the live broadcast picture, the fusion effect needs to be tested; the test uses a simulated DCS system. Before the test, a test case must be entered into the simulated DCS system in advance, the engineering configuration must be compiled, and the compiled configuration must be downloaded to the simulated DCS system, which then provides the test case for the scene under test.
The invention also relates to a device for rendering and processing a virtual scene online in a live broadcast room, comprising a file transmission module, a server rendering module, a camera, a result output module and an interaction module, wherein the file transmission module transmits files to the server rendering module;
the file transmission module is used for uploading the user's model or drawing to the server; the file transmission module is connected to the server rendering module through the format conversion module and the lightweight conversion module in sequence; the format conversion module converts the imported WRL file into a corresponding VRML prototype; the lightweight conversion module performs lightweight conversion on the source model file and simplifies the model with a simplification algorithm;
the server rendering module is used for rendering the simplified model as a three-dimensional image with a graphics engine;
the camera is used for acquiring the video image of the live broadcast room in real time; the camera is connected to the scene fusion module through the portrait extraction module; the portrait extraction module cuts the human figure out of the video image;
the scene fusion module is used for fusing the rendered three-dimensional scene with the extracted human figure;
the result output module is used for outputting the final scene rendering effect image of the live broadcast room to the display module; the display content of the display module includes three-dimensional models, two-dimensional drawings, two-dimensional pictures, text, animation, video and graphic images;
the interaction module is used for information interaction with the live broadcast room and includes a menu interaction unit and a graphic operation interaction unit; the menu interaction unit issues instructions through menus or buttons and determines the interaction content from the instructions; the graphic operation interaction unit implements interaction with operations performed on the model in the graphic area; the interaction modes in the interaction module include the following interactive operation controls: selection, hiding, cutting, moving, rotating, zooming, playing, text input, text editing, text deletion, symbol input, symbol editing, symbol deletion, mark input, mark editing, mark deletion, graphic capture, graphic drawing, graphic editing, graphic deletion, brightness adjustment, transparency adjustment, light and shadow effect adjustment, projection mode adjustment, definition adjustment, rendering mode adjustment, model color replacement, layout adjustment, view switching, and interference checking.
It should be noted that, in the above system embodiment, the included units are divided only according to functional logic, and the division is not limited thereto as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only for convenience of distinguishing them from each other and do not limit the protection scope of the present invention.
In addition, those skilled in the art will understand that all or part of the steps of the methods in the embodiments described above may be implemented by a program instructing the associated hardware, and the corresponding program may be stored in a computer-readable storage medium.
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.

Claims (10)

CN202111034846.3A | Priority date: 2021-09-04 | Filing date: 2021-09-04 | Method and device for rendering and processing virtual scene on line in live broadcast room | Withdrawn | CN113781660A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111034846.3A CN113781660A (en) | 2021-09-04 | 2021-09-04 | Method and device for rendering and processing virtual scene on line in live broadcast room

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111034846.3A CN113781660A (en) | 2021-09-04 | 2021-09-04 | Method and device for rendering and processing virtual scene on line in live broadcast room

Publications (1)

Publication Number | Publication Date
CN113781660A (en) | 2021-12-10

Family

ID=78841273

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111034846.3A CN113781660A (en), Withdrawn | Method and device for rendering and processing virtual scene on line in live broadcast room | 2021-09-04 | 2021-09-04

Country Status (1)

Country | Link
CN (1) | CN113781660A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114004939A (en)* | 2021-12-31 | 2022-02-01 | 深圳奥雅设计股份有限公司 | Three-dimensional model optimization method and system based on modeling software script
CN114173155A (en)* | 2022-02-09 | 2022-03-11 | 檀沐信息科技(深圳)有限公司 | Virtual live image processing method
WO2023174209A1 (en)* | 2022-03-18 | 2023-09-21 | 华为云计算技术有限公司 | Virtual filming method, apparatus and device
CN115190321A (en)* | 2022-05-13 | 2022-10-14 | 广州博冠信息科技有限公司 | Switching method and device of live broadcast room and electronic equipment
CN115190321B (en)* | 2022-05-13 | 2024-06-04 | 广州博冠信息科技有限公司 | Live broadcast room switching method and device and electronic equipment

Similar Documents

Publication | Publication Date | Title
CN113781660A (en)Method and device for rendering and processing virtual scene on line in live broadcast room
US6945869B2 (en)Apparatus and method for video based shooting game
CN108616731A (en) Real-time generation method for 360-degree VR panoramic images and video
CN113709543B (en)Video processing method and device based on virtual reality, electronic equipment and medium
CN113546410B (en)Terrain model rendering method, apparatus, electronic device and storage medium
KR101669897B1 (en)Method and system for generating virtual studio image by using 3-dimensional object modules
CN102834849A (en)Image drawing device for drawing stereoscopic image, image drawing method, and image drawing program
CN105635712A (en) Video real-time recording method and recording device based on augmented reality
CN113660528B (en)Video synthesis method and device, electronic equipment and storage medium
Zerman et al. User behaviour analysis of volumetric video in augmented reality
CN112019907A (en)Live broadcast picture distribution method, computer equipment and readable storage medium
CN112261433A (en)Virtual gift sending method, virtual gift display device, terminal and storage medium
CN117788689A (en)Interactive virtual cloud exhibition hall construction method and system based on three-dimensional modeling
CN112019906A (en)Live broadcast method, computer equipment and readable storage medium
US11961190B2 (en)Content distribution system, content distribution method, and content distribution program
US11792380B2 (en)Video transmission method, video processing device, and video generating system for virtual reality
CN117218266A (en)3D white-mode texture map generation method, device, equipment and medium
Wang et al. On the status quo and application of online virtual art exhibition technologies
JP7714779B2 (en) Data processing method, device, electronic device, and computer program
CN117579885A (en)Special effect display method and system for live broadcasting room
CN116152423A (en)Virtual reality live broadcasting room illumination processing method, device, equipment and storage medium
KR101399633B1 (en)Method and apparatus of composing videos
CN112423014A (en)Remote review method and device
KR102622709B1 (en)Method and Apparatus for generating 360 degree image including 3-dimensional virtual object based on 2-dimensional image
CN111372094B (en)Cloud live broadcast control system applicable to various scenes

Legal Events

Date | Code | Title | Description
- | PB01 | Publication | -
- | SE01 | Entry into force of request for substantive examination | -
- | WW01 | Invention patent application withdrawn after publication | Application publication date: 20211210
