Detailed Description
The application will be described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the application are shown. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The described embodiments are only some, rather than all, of the embodiments of the application; all other embodiments obtained by a person of ordinary skill in the art from the described embodiments without inventive effort fall within the scope of the application.
It should be noted that in the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying any particular order or sequence. The specific meaning of the above terms in the present application will be understood in specific cases by those of ordinary skill in the art. Furthermore, in the description of the present application, the term "plurality" means two or more, unless otherwise indicated. The term "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" covers three cases: A exists alone, A and B exist together, and B exists alone. The character "/" generally indicates an "or" relationship between the associated objects. The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, and may include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
In order to facilitate understanding of the technical solution of the present application, related concepts related to the present application will be described first.
A rendering engine is a software system in computer graphics for converting information such as three-dimensional models, textures, and lighting into two-dimensional images for display on a monitor or other device. Rendering engines are an indispensable tool in fields such as three-dimensional animation, game development, virtual reality, and architectural design. A rendering engine is responsible for handling all computational and graphics processing tasks from scene description to final image generation. Common rendering engines include Unreal Engine (UE), Unity, and the like.
Unreal Engine is one of the most widely used 3D rendering engines with publicly available source code. It provides a complete set of development tools and is suitable for any user needing real-time technology. From design visualization and cinematic experiences to high-quality games, it covers many areas across PC, console, mobile, VR, and AR platforms, and provides all the tools needed from project start to delivery. Unreal Engine has strong real-time rendering capability, supports high-quality lighting and shadows, offers rich material and shader systems, and is widely used for game development and virtual reality. Its design philosophy is to make content creation and program development easier: with less low-level programming involved, abstract building blocks can be used to freely create virtual environments, providing efficient modules and an extensible development framework for developers to create, test, and complete various types of software.
Light source Actor: in a rendering engine, a light source Actor is an entity that can be placed and edited directly in a level, such as a Directional Light Actor, a Point Light Actor, or a Spot Light Actor. These light source Actors themselves contain one or more light source components and provide a graphical user interface for setting the various properties of the light source.
Light source instance: in a rendering engine, a light source instance refers to one instance of a light source Actor that is concretely present in a level. When a light source Actor is placed in the level editor, a light source instance is actually being created. Each light source instance has a specific placement location and property settings in the level.
Light source component: in a rendering engine, the light source component is a component within a light source Actor that is responsible for the actual illumination calculation. Light source components can be attached to any Actor, not only to light source Actors; this means that a light source component can be attached to a static mesh Actor or another type of Actor to make it act as a light source. The light source component contains the specific attributes of the light source, such as color, intensity, and range.
In order to solve the problems, the application provides a light source component control method of a rendering engine, a light source component control device of the rendering engine corresponding to the method, an electronic device capable of implementing the light source component control method of the rendering engine and a computer readable storage medium. The following provides detailed descriptions of the above methods, apparatuses, electronic devices, and computer-readable storage media.
In order to make the purpose and the technical scheme of the application clearer and more intuitive, the method provided by the embodiments of the application is described in detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. The following embodiments may exist alone, and the embodiments and features of the embodiments described below may be combined with each other when there is no conflict between them; for the same or similar matters, descriptions are not repeated in different embodiments. In addition, the ordering of steps in the method embodiments described below is merely an example and is not strictly limited; in some cases, the steps may be performed in an order different from that shown or described.
The application provides a light source component control method and device of a rendering engine, an electronic device, and a computer readable storage medium. Specifically, the method for controlling the light source component of the rendering engine according to one embodiment of the present application may be performed by a computer device, where the computer device may be a terminal, a server, or the like. The terminal may be terminal equipment such as a smart phone, a tablet computer, a notebook computer, a touch-screen device, or a game console, and the terminal may also run a client, where the client may be a game application client, a browser client carrying a game program, an instant messaging client, or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, big data, and artificial intelligence platforms.
First embodiment
Next, referring to fig. 1, a light source component control method of a rendering engine according to a first embodiment of the present application is described. Fig. 1 is a flowchart of the light source component control method of the rendering engine according to the first embodiment of the present application.
As shown in fig. 1, the light source component control method of the rendering engine includes steps S101 to S104:
S101, acquiring light source information of a light source instance in a virtual scene. Wherein the virtual scene is created by the rendering engine, and the light source information includes light source attribute information and light source position information.
S102, acquiring a light source demand text input by a user, wherein the light source demand text is used for representing the light source effect demand on a light source component in a virtual scene created by a rendering engine.
S103, inputting the light source information and the light source demand text of the light source instance into a light source setting prediction model, and obtaining a light source setting instruction corresponding to the light source demand text. The light source setting prediction model is used for outputting a light source setting instruction which can be recognized by the rendering engine and realize a light source effect corresponding to the light source demand text according to the light source demand text, wherein the light source setting instruction comprises a light source attribute and a corresponding numerical value thereof.
S104, controlling a light source component of the light source instance in the virtual scene through the rendering engine according to the light source setting instruction.
Next, steps S101 to S104 will be described in detail.
In a rendering engine (e.g., Unreal Engine, Unity, or another 3D rendering engine), a light source Actor is a game object for simulating lighting effects. A light source Actor is a special Actor type that does not display any geometry in a virtual scene but affects other virtual objects in the virtual scene. The light source can be static or dynamic, is placed in the virtual scene editor, and can be adjusted in position, intensity, color, and so on as desired. Lighting visual effects include, but are not limited to, simulating natural lighting, emphasizing scene elements, creating atmosphere, and achieving shadow effects. The light source Actor is responsible for generating illumination in the virtual scene and helps to render a vivid lighting visual effect.
Common types of light sources include, but are not limited to, point light sources, spotlights, directional light sources, area light sources, and ambient light sources. A point light source (Point Light) uniformly emits light from one point in all directions. A spotlight (Spot Light), like a flashlight or stage spotlight, has a central axis and emits light over a range of angles. A directional light source (Directional Light) simulates the effect of sunlight and emits parallel light rays from an infinite distance. An area light source (Area Light) is a planar light source, generally rectangular or circular, that simulates a large light-emitting surface. Ambient light (Ambient Light) provides the base illumination for the scene, usually without an explicit direction.
In the embodiment of the application, before a user controls the light source components in the virtual scene through the rendering engine to realize an expected illumination effect, the light source information of all the light source instances currently existing in the virtual scene is acquired. The light source information includes light source attribute information and light source position information. Light source types include, but are not limited to, directional light, point light sources, spot light sources, and rectangular light sources. Different types of light source instances may have the same attributes or different attributes. In particular, the core properties of directional light include, but are not limited to, intensity, color temperature, and source angle. Core properties of point light sources include, but are not limited to, intensity, color, attenuation radius, color temperature, volumetric scattering intensity, and indirect illumination intensity. Core properties of spot light sources include, but are not limited to, intensity, color, attenuation radius, inner cone angle, outer cone angle, and color temperature. Core properties of rectangular light sources include, but are not limited to, intensity, color, attenuation radius, source width, source height, barn door angle, barn door length, and color temperature.
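As an illustration only, the per-type core properties listed above can be organized into a lookup table. The snake_case property names below are assumptions modeled on common engine conventions, not an official engine API:

```python
# Hypothetical core-property schema per light type, mirroring the lists above.
CORE_PROPERTIES = {
    "DirectionalLight": {"intensity", "temperature", "source_angle"},
    "PointLight": {"intensity", "light_color", "attenuation_radius", "temperature",
                   "volumetric_scattering_intensity", "indirect_lighting_intensity"},
    "SpotLight": {"intensity", "light_color", "attenuation_radius",
                  "inner_cone_angle", "outer_cone_angle", "temperature"},
    "RectLight": {"intensity", "light_color", "attenuation_radius",
                  "source_width", "source_height",
                  "barn_door_angle", "barn_door_length", "temperature"},
}

def known_properties(light_type):
    """Return the core properties tracked for a light type (empty set if unknown)."""
    return CORE_PROPERTIES.get(light_type, set())
```

Such a table lets later steps check which attributes a predicted instruction may legitimately set for each light type.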
Illustratively, taking Unreal Engine as the rendering engine, all light source instance information currently existing in a virtual scene created by Unreal Engine is obtained, and the light source instances include a main light DirectionalLight, a point light source PointLight, a spot light SpotLight, a rectangular light source RectLight, and a sky light SkyLight. It should be noted that only the first main light is retained when there are a plurality of main lights. Taking the main light DirectionalLight as an example, its light source attribute information is as follows:
DirectionalLight={
'intensity':1000,
'light_color':(1.0,1.0,1.0,1.0),
'cast_shadows':True,
'shadow_resolution':1024,
'direction':(0,0,1)}。
The light source position information of all the light source instances may be in the form: [DirectionalLight: (0,0,0), PointLight: (328,658,206), SpotLight: (500,245,150), RectLight: (280,468,306), SkyLight: (400,550,302)].
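A minimal, engine-agnostic sketch of how the light source information acquired in S101 might be organized. The `collect_light_info` helper and the sample records are hypothetical; in practice the attributes and positions would be read from the engine via its editor scripting interface:

```python
def collect_light_info(lights):
    """Split raw light records into attribute and position maps (S101)."""
    attrs = {name: dict(props) for name, props, _ in lights}
    positions = {name: pos for name, _, pos in lights}
    return attrs, positions

# Sample records mirroring the DirectionalLight example above (values illustrative).
scene_lights = [
    ("DirectionalLight",
     {"intensity": 1000,
      "light_color": (1.0, 1.0, 1.0, 1.0),
      "cast_shadows": True,
      "shadow_resolution": 1024,
      "direction": (0, 0, 1)},
     (0, 0, 0)),
    ("PointLight", {"intensity": 5000, "attenuation_radius": 1000}, (328, 658, 206)),
]

light_attrs, light_positions = collect_light_info(scene_lights)
```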
In the embodiment of the application, the light source demand text input by the user is obtained, and the light source demand text is used for representing the user's light source effect requirements for the light source components in the virtual scene created by the rendering engine. The light source demand text is natural language input by a user to describe the expected illumination effect and scene lighting atmosphere. Illustratively, the light source demand text entered by the user may be: "I want an outdoor landscape scene in the evening, with the sun about to set behind the mountains. I hope the sunlight presents a golden tone of moderate intensity that can illuminate the whole scene without being too strong. I want the sky color to convey a sense of sunset, with noticeable shadows on the ground and trees." As another example, the user inputs the light source demand text: "I need a forest exploration scene where sunlight filters through the tree crowns during the daytime. I want the sunlight to be white and moderate in intensity, but not too intense. I want to see some mottled shadow effects on the ground, simulating the effect of sunlight penetrating through the leaves."
In the embodiment of the application, a light source setting prediction model is obtained, the light source setting prediction model is used for outputting a light source setting instruction which can be identified by a rendering engine and realize a light source effect corresponding to a light source demand text according to the light source demand text, and the light source setting instruction comprises a light source attribute and a corresponding numerical value thereof. And then, inputting the light source information and the light source demand text of the light source instance into the light source setting prediction model to obtain a light source setting instruction corresponding to the light source demand text output by the light source setting prediction model.
In an optional implementation, one possible manner of inputting the light source information and the light source demand text of the light source instance into the light source setting prediction model in step S103 includes step S1031:
S1031, based on a preset prompt template, splicing the light source information of the light source instance and the light source demand text to obtain a target prompt, and inputting the target prompt into the light source setting prediction model.
The preset prompt template is a user-defined template for splicing the light source information of the light source instance and the light source demand text. The preset prompt template may be {all light source instance information in the virtual scene} + {the light source demand text input by the user (i.e., the natural language input by the user to describe the expected illumination effect and scene lighting atmosphere)}, or {atmosphere description text of the whole virtual scene} + {all light source instance information in the virtual scene} + {the light source demand text input by the user}. This is merely an example, and the embodiment of the application imposes no limitation on it.
In the embodiment of the application, based on the preset prompt template, the light source information of the light source instance and the light source demand text are spliced to obtain the target prompt. The target prompt is input into the light source setting prediction model; through the target prompt, the light source setting prediction model can more easily grasp the light source information of all the light source instances in the current virtual scene and understand the user's light source effect requirement, prompting the model to output a light source setting instruction that meets the user's light source effect requirement.
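The splicing in S1031 can be sketched as plain string formatting. The template text and function below are illustrative assumptions, not the exact template used by the application:

```python
# Hypothetical prompt template: {scene info} + {demand text}, per the scheme above.
PROMPT_TEMPLATE = (
    "{scene_atmosphere}"
    "All light source instance information in the virtual scene: {light_info}. "
    "You now need to adjust the light source information: {demand_text}"
)

def build_target_prompt(light_attrs, light_positions, demand_text, scene_atmosphere=""):
    """Splice light source info and the user's demand text into a target prompt (S1031)."""
    light_info = {"attributes": light_attrs, "positions": light_positions}
    return PROMPT_TEMPLATE.format(
        scene_atmosphere=scene_atmosphere,
        light_info=light_info,
        demand_text=demand_text,
    )
```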
Illustratively, the target prompt may be: "This is a hazy forest scene in mist and rain. All light source instance information, i.e., light source attribute information and light source position information, is: [DirectionalLight={
'intensity':1000,
'light_color':(1.0,1.0,1.0,1.0),
'cast_shadows':True,
'shadow_resolution':1024,
'direction':(0,0,1)},
PointLight={......},
SpotLight={......},
RectLight={......},
SkyLight={......},
DirectionalLight:(0,0,0),
PointLight:(328,658,206),
SpotLight:(500,245,150),
RectLight:(280,468,306),
SkyLight: (400,550,302)]. You now need to adjust the light source information: I need a forest exploration scene where sunlight filters through the tree crowns during the daytime. I want the sunlight to be white and moderate in intensity, but not too intense. I want to see some mottled shadow effects on the ground, simulating the effect of sunlight penetrating through the leaves."
The target prompt is input into the light source setting prediction model to obtain the light source setting instruction output by the model. Assuming that N light sources need to be adjusted, the corresponding light source setting instruction may be as follows:
[data1={
'intensity':2500,
'light_color':(180,190,200),
'source_angle':8,
'source_soft_angle':16,
'use_temperature':True,
'temperature':7000,
'cast_shadows':True,
'indirect_lighting_intensity':1.3,
'volumetric_scattering_intensity':2.5
},data2={...},
......
,dataN={......}]。
It should be noted that the light source setting prediction model may output light source setting instructions for a plurality of light sources in one batch, or may output the instruction for one light source instance at a time over multiple calls, so as to avoid the output being truncated for exceeding the token limit (maximum output length). The maximum output length may be specified in the light source demand text within the target prompt.
In the embodiment of the application, the light source setting instruction corresponding to the light source demand text is obtained based on the above steps, and the light source component of the light source instance in the virtual scene is controlled through the rendering engine, so as to realize the lighting effect corresponding to the light source demand text in the virtual scene.
According to the light source component control method of the rendering engine, light source information of all light source instances in a virtual scene is obtained, wherein the virtual scene is created by the rendering engine, and the light source information includes light source attribute information and light source position information. The light source demand text input by the user is obtained, wherein the light source demand text is used for representing the light source effect requirements for the light source components in the virtual scene created by the rendering engine. The light source information and the light source demand text of the light source instance are input into a light source setting prediction model to obtain a light source setting instruction corresponding to the light source demand text. The light source setting prediction model is used for outputting, according to the light source demand text, a light source setting instruction that can be recognized by the rendering engine and realizes the light source effect corresponding to the light source demand text, wherein the light source setting instruction includes light source attributes and their corresponding values. The light source component of the light source instance in the virtual scene is then controlled through the rendering engine according to the light source setting instruction. According to the application, a user does not need to master the operation of the light source components of the rendering engine, and can quickly adjust the light source components in the virtual scene by only inputting the light source demand text, so that the virtual scene presents the illumination effect reflected by the light source demand text. This greatly lowers the operation threshold for the user and greatly improves the efficiency of controlling the light source components through the rendering engine.
On the basis of the first embodiment, the light source component control method of the rendering engine according to the first embodiment is further described below.
In an optional implementation, the light source component control method of the rendering engine provided in the first embodiment of the present application further includes step S105:
S105, adjusting the light source setting instruction corresponding to the light source demand text according to the illumination parameter range of each light source component in the rendering engine, so that the value corresponding to each light source attribute in the adjusted light source setting instruction falls within the value range of that light source attribute in the rendering engine.
By adjusting the light source setting instruction, the values of the light source attributes are ensured to be within reasonable ranges, avoiding illumination effects that are unreasonable or inconsistent with physical rules. For example, too high an intensity value may result in overexposure, while too low an intensity value may result in a scene that is too dark.
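A sketch of the range adjustment in S105, clamping each predicted value into an assumed valid range. The `PARAM_RANGES` values are placeholders, since the real limits depend on the engine and light type:

```python
# Assumed parameter ranges; real engine limits differ per light type.
PARAM_RANGES = {
    "intensity": (0.0, 100000.0),
    "temperature": (1700.0, 12000.0),
    "indirect_lighting_intensity": (0.0, 6.0),
    "volumetric_scattering_intensity": (0.0, 4.0),
}

def clamp_instruction(instruction, ranges=PARAM_RANGES):
    """Clamp each predicted attribute value into the engine's valid range (S105)."""
    adjusted = {}
    for attr, value in instruction.items():
        # Only clamp numeric attributes with a known range; bools pass through.
        if attr in ranges and isinstance(value, (int, float)) and not isinstance(value, bool):
            lo, hi = ranges[attr]
            value = min(max(value, lo), hi)
        adjusted[attr] = value
    return adjusted
```

Attributes without a registered range (e.g., colors or flags) are passed through unchanged, so the adjustment never drops information from the model's output.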
Second embodiment
Next, referring to fig. 2, a light source component control method of a rendering engine according to a second embodiment of the present application is described. Fig. 2 is a flowchart of the light source component control method of the rendering engine according to the second embodiment of the present application.
As shown in fig. 2, the light source component control method of the rendering engine includes steps S201 to S210:
S201, creating a custom plug-in, where the custom plug-in provides a graphical user interface.
S202, integrating the custom plug-in into a rendering engine, wherein the integrated custom plug-in comprises a control interface of a light source Actor, and the light source instance is an instance of the light source Actor.
S203, the custom plug-in acquires light source information of all light source instances in the virtual scene from the rendering engine through a control interface of the light source Actor.
S204, the custom plug-in receives the input light source demand text through the graphical user interface.
Through the graphical user interface, the light source demand text input by the user manually or by other plug-ins can be received.
S205, the custom plug-in splices the light source information of the light source instance and the light source demand text based on the preset prompt template to obtain a target prompt, and sends the target prompt to a Python server.
S206, integrating the light source setting prediction model into the Python server.
S207, the Python server inputs the target prompt into the light source setting prediction model to obtain a light source setting instruction corresponding to the light source demand text.
S208, the Python server sends a light source setting instruction corresponding to the light source demand text to the custom plug-in.
S209, the custom plug-in adjusts the light source setting instruction corresponding to the light source demand text according to the illumination parameter range of each light source component in the rendering engine, so that the value corresponding to each light source attribute in the adjusted light source setting instruction falls within the value range of that light source attribute in the rendering engine.
S210, the custom plug-in controls the light source component of the light source instance in the virtual scene through the rendering engine according to the light source setting instruction.
The custom plug-in developed for the rendering engine is a custom module that can be integrated into the rendering engine, and that can contain code, resource files (e.g., textures, models, etc.), scripts, and other necessary files. Custom plug-ins are designed to enhance or supplement the default functionality of the rendering engine so that a developer can more easily achieve a particular goal or function.
In the embodiment of the application, before the light source information of all the light source instances in the virtual scene is acquired, the custom plug-in is created, and the custom plug-in provides the graphical user interface. Receiving the light source demand text input by the user through the graphical user interface simplifies the user's input process: the user does not need to memorize complex commands or parameter formats, and only needs to input, on the graphical user interface, the light source demand text representing the user's light source effect requirements for the light source components in the virtual scene created by the rendering engine.
The custom plug-in is integrated into the rendering engine, and the integrated custom plug-in includes a control interface of the light source Actor, so that the custom plug-in can directly acquire the light source information of all the light source instances in the virtual scene from the rendering engine through the control interface of the light source Actor, without manually writing code or digging deep into engine settings. This lowers the operation threshold for the user, so that even a non-professional technician can easily acquire the light source information of all the light source instances in the virtual scene.
The custom plug-in splices the light source information of the light source instance and the light source demand text based on the preset prompt template to obtain a target prompt. Through the target prompt, the light source setting prediction model can more easily grasp the light source information of all the light source instances in the current virtual scene and understand the user's light source effect requirement, prompting the model to output a light source setting instruction that better meets the user's light source effect requirement. The custom plug-in then sends the target prompt to the Python server.
The Python server inputs the target prompt into the light source setting prediction model to obtain the light source setting instruction corresponding to the light source demand text, and then sends the light source setting instruction corresponding to the light source demand text to the custom plug-in. The custom plug-in adjusts the light source setting instruction according to the illumination parameter range of each light source component in the rendering engine, so that the value corresponding to each light source attribute in the adjusted light source setting instruction falls within the value range of that light source attribute in the rendering engine.
The custom plug-in controls the light source component of the light source instance in the virtual scene through the rendering engine according to the light source setting instruction corresponding to the light source demand text. Throughout the whole light source component control method of the rendering engine, the user only needs to input the light source demand text, without manually calculating the setting parameters of the light source components and without manually controlling the light source components in the virtual scene one by one through the rendering engine, which greatly reduces the user's operation threshold and workload and improves light source control efficiency.
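The plug-in/server exchange in S205 to S208 can be reduced to the following sketch, with the prediction model stubbed out by a `fake_model` callable. All names are hypothetical; a real deployment would wrap `serve_prompt` in an HTTP endpoint:

```python
import json

def serve_prompt(target_prompt, model):
    """Server side (S207/S208): run the model on the target prompt, return JSON."""
    instructions = model(target_prompt)   # list of per-light attribute dicts
    return json.dumps(instructions)

def plugin_receive(payload):
    """Plug-in side (S208): decode the server response back into instructions."""
    return json.loads(payload)

def fake_model(prompt):
    """Stub standing in for the light source setting prediction model."""
    return [{"intensity": 2500, "use_temperature": True, "temperature": 7000}]
```

Serializing the instruction list as JSON keeps the plug-in and the Python server decoupled: either side can be replaced as long as the attribute-to-value format is preserved.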
Third embodiment
Next, a training method of the light source setting prediction model according to a third embodiment of the present application will be described with reference to fig. 3, and fig. 3 is a schematic diagram of a training flow of the light source setting prediction model according to the third embodiment of the present application.
As shown in fig. 3, the training method of the light source setting prediction model includes steps S301 to S304:
S301, acquiring a training data set, wherein the training data set comprises a plurality of groups of sample pairs, and each sample pair consists of a light source setting description, a light source attribute and an attribute value.
S302, acquiring a preset multi-modal model, where the preset multi-modal model has universal text understanding capability and visual perception capability.
The preset multi-modal models refer to machine learning models capable of processing and fusing data from different modalities. Modality (Modality) refers to different manifestations of data, such as text, images, audio, video, etc. The goal of the multimodal model is to be able to extract complementary information from these different modalities of data as they are processed, and to use this information to accomplish specific tasks such as image description generation, emotion analysis, question-answering systems, etc. The predetermined multimodal model may be LLaVA-3 model, by way of example only, and embodiments of the application are not limited in this regard.
S303, loading the preset multi-modal model, freezing its weight matrix, and introducing a first low-rank matrix and a second low-rank matrix to obtain a light source setting prediction model to be trained. The matrix obtained by multiplying the first low-rank matrix by the transpose of the second low-rank matrix has the same dimensions as the weight matrix.
In the embodiment of the application, the preset multi-modal model is loaded and its weight matrix is frozen; a first low-rank matrix and a second low-rank matrix are introduced to realize a low-rank decomposition, thereby reducing computational complexity and storage requirements. The ranks of both the first and second low-rank matrices are very low, and the dimensions of the two low-rank matrices are typically much smaller than those of the weight matrix. Mathematically, the rank of a matrix is the maximum number of linearly independent rows (or columns); low-rank matrices are those whose rank is relatively small. In machine learning, low-rank matrices are often used to approximate high-dimensional matrices in order to reduce computational complexity and memory requirements. In LoRA (Low-Rank Adaptation), low-rank matrices are used to fine-tune a pre-trained large model to adapt it to new tasks or datasets.
For example, suppose the dimension of the weight matrix of the preset multimodal model is n×m, the dimension of the first low-rank matrix is n×k, and the dimension of the second low-rank matrix is m×k, where k is much smaller than n and m. The matrix obtained by multiplying the first low-rank matrix by the transpose of the second low-rank matrix then has the same dimensions as the weight matrix, namely n×m.
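The dimension relationship above can be illustrated with a short sketch (the values of n, m, and k here are arbitrary assumptions, not values from the application): the product of the first low-rank matrix and the transpose of the second low-rank matrix matches the frozen weight matrix in size, while holding far fewer trainable entries.

```python
import numpy as np

n, m, k = 512, 256, 8            # weight matrix is n x m; rank k << n, m

W = np.random.randn(n, m)        # frozen weight matrix of the preset model
A = np.random.randn(n, k)        # first low-rank matrix, n x k
B = np.random.randn(m, k)        # second low-rank matrix, m x k

update = A @ B.T                 # n x k times k x m gives n x m
assert update.shape == W.shape   # same dimensions as the weight matrix
assert A.size + B.size < W.size  # only n*k + m*k trainable parameters
```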
S304, inputting the training data set into a light source setting prediction model to be trained, updating the first low-rank matrix and the second low-rank matrix, and obtaining the light source setting prediction model after training is completed.
According to the training method for the light source setting prediction model, a training data set is obtained, wherein the training data set comprises a plurality of groups of sample pairs, and each sample pair consists of a light source setting description, a light source attribute, and an attribute value. A preset multimodal model is acquired, wherein the preset multimodal model has universal text understanding capability and visual perception capability. The preset multimodal model is loaded, the weight matrix of the model is frozen, and a first low-rank matrix and a second low-rank matrix are introduced to obtain a light source setting prediction model to be trained. The matrix obtained by multiplying the first low-rank matrix by the transpose of the second low-rank matrix has the same dimensions as the weight matrix. The training data set is input into the light source setting prediction model to be trained, the first low-rank matrix and the second low-rank matrix are updated, and the light source setting prediction model is obtained after training is completed.
In the embodiment of the application, in the process of fine-tuning the preset multimodal model with the training data set, the weight matrix of the preset multimodal model is frozen and kept unchanged, and the first low-rank matrix and the second low-rank matrix are updated using a standard optimization algorithm (such as gradient descent), so that the fine-tuned preset multimodal model gradually gains the capability of outputting corresponding light source attributes and attribute values according to the light source setting description, on top of its general text understanding capability and visual perception capability.
In the embodiment of the application, the preset multimodal model with the first low-rank matrix and the second low-rank matrix introduced is called the light source setting prediction model to be trained. LoRA fine-tuning is performed on the light source setting prediction model to be trained using the training data set, the first low-rank matrix and the second low-rank matrix are updated, and the light source setting prediction model is obtained after training is completed. This allows the preset multimodal model to maintain good generalization ability while gaining the ability to output corresponding light source attributes and attribute values from the light source setting description, and significantly reduces the demand for computing resources.
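The update described above can be sketched on a toy problem. This is an illustrative sketch only: a small linear layer stands in for the preset multimodal model, and a synthetic regression task stands in for the training data set. The weight matrix W stays frozen throughout; only the two low-rank matrices receive gradient-descent updates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, k, N = 16, 8, 2, 64

W = rng.normal(size=(n_in, n_out))      # frozen pre-trained weight matrix
A = rng.normal(size=(n_in, k)) * 0.1    # first low-rank matrix (trainable)
B = np.zeros((n_out, k))                # second low-rank matrix (trainable)

X = rng.normal(size=(N, n_in))          # stand-in training inputs
delta = rng.normal(size=(n_in, k)) @ rng.normal(size=(k, n_out)) * 0.1
Y = X @ (W + delta)                     # stand-in targets for the new task

def loss(A, B):
    E = X @ (W + A @ B.T) - Y           # effective weight is W plus A @ B.T
    return float(np.mean(E ** 2))

loss_before = loss(A, B)
lr = 0.01
for _ in range(1000):
    E = X @ (W + A @ B.T) - Y           # W is never modified
    A -= lr * (2 / N) * X.T @ E @ B     # gradient step on first low-rank matrix
    B -= lr * (2 / N) * E.T @ X @ A     # gradient step on second low-rank matrix
loss_after = loss(A, B)
assert loss_after < loss_before         # fine-tuning reduced the task loss
```

Only n_in*k + n_out*k parameters are trained here instead of n_in*n_out, which is the source of the memory and compute savings described above.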
On the basis of the training method of the light source setting prediction model, optionally, step S3031 is further included after step S303, and step S3041 is further included after step S304. Steps S3031 and S3041 are as follows:
S3031, the weight matrix, the first low-rank matrix, and the second low-rank matrix are stored in a partitioned manner.
S3041, carrying out weight combination on the weight matrix, the first low-rank matrix and the second low-rank matrix to obtain a target weight matrix.
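Assuming the standard LoRA merge formulation (with the same shapes as in the earlier n×m example), the weight combination of step S3041 can be sketched as follows: the intermediate matrix is the product of the first low-rank matrix and the transpose of the second, and it is added to the frozen weight matrix.

```python
import numpy as np

def merge_weights(W, A, B):
    """Fold the trained low-rank update back into the original weights."""
    intermediate = A @ B.T       # n x m intermediate matrix
    return W + intermediate      # target weight matrix

n, m, k = 6, 4, 2
W = np.random.randn(n, m)        # frozen weight matrix
A = np.random.randn(n, k)        # trained first low-rank matrix
B = np.random.randn(m, k)        # trained second low-rank matrix

W_target = merge_weights(W, A, B)
assert W_target.shape == W.shape
assert np.allclose(W_target - W, A @ B.T)
```

After merging, inference uses a single weight matrix, so the low-rank matrices add no runtime cost.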
In the embodiment of the application, in the process of carrying out LoRA fine-tuning on the light source setting prediction model to be trained using the training data set, the weight matrix, the first low-rank matrix, and the second low-rank matrix are stored in a partitioned manner using DeepSpeed ZeRO, which means that each training node stores only a part of the state parameters of the model, thereby reducing memory occupation.
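The partitioned storage of step S3031 can be illustrated with a toy sketch. This is not the DeepSpeed API (DeepSpeed ZeRO handles sharding automatically); it only illustrates the idea that each node holds one shard of the flattened parameters rather than a full copy.

```python
import numpy as np

def partition_parameters(matrices, num_nodes):
    """Flatten all parameter matrices and split them across nodes."""
    flat = np.concatenate([m.ravel() for m in matrices])
    return np.array_split(flat, num_nodes)

W = np.random.randn(8, 4)   # frozen weight matrix
A = np.random.randn(8, 2)   # first low-rank matrix
B = np.random.randn(4, 2)   # second low-rank matrix

shards = partition_parameters([W, A, B], num_nodes=4)
total = W.size + A.size + B.size
assert sum(s.size for s in shards) == total   # nothing lost in the split
assert max(s.size for s in shards) < total    # each node stores only part
```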
In the embodiment of the present application, in step S304, the training data set is input into the light source setting prediction model to be trained, and the first low-rank matrix and the second low-rank matrix are updated to obtain the light source setting prediction model after training is completed. Then, the weight matrix, the first low-rank matrix, and the second low-rank matrix are combined to obtain a target weight matrix. Specifically, the first low-rank matrix is first multiplied by the transpose of the second low-rank matrix to obtain an intermediate matrix, and the sum of the intermediate matrix and the weight matrix of the original preset multimodal model is determined as the target weight matrix. The light source component control device of the rendering engine provided by the application is described below; the light source component control device of the rendering engine described below and the light source component control method of the rendering engine described above may be referred to correspondingly.
Fig. 4 is a schematic structural diagram of a light source component control device of an Unreal Engine according to a fourth embodiment of the present application. As shown in Fig. 4, the light source component control device 400 of the rendering engine includes a first acquisition module 401, a second acquisition module 402, a prediction module 403, and a control module 404.
The first acquisition module is used for acquiring light source information of all light source instances in a virtual scene, wherein the virtual scene is created by a rendering engine, and the light source information comprises light source attribute information and light source position information;
the second acquisition module is used for acquiring a light source demand text input by a user in the graphical user interface, wherein the light source demand text is used for representing the light source effect demand of the user on a light source component in a virtual scene created by a rendering engine;
the prediction module is used for inputting the light source information of the light source instances and the light source demand text into a light source setting prediction model to obtain a light source setting instruction corresponding to the light source demand text;
the control module is used for controlling the light source assembly of the light source instance in the virtual scene through the rendering engine according to the light source setting instruction corresponding to the light source demand text.
In an alternative embodiment, the apparatus further comprises a correction module for:
adjusting the light source setting instruction corresponding to the light source demand text according to the illumination parameter range of each illumination component in the rendering engine, so that the value corresponding to each light source attribute in the adjusted light source setting instruction falls within the value range of that light source attribute in the rendering engine.
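The correction module's adjustment can be sketched as a range clamp. The attribute names and numeric ranges below are illustrative assumptions, not values prescribed by the application or the rendering engine.

```python
def clamp_light_instruction(instruction, ranges):
    """Clamp every attribute value in the instruction into its valid range."""
    adjusted = {}
    for attr, value in instruction.items():
        lo, hi = ranges.get(attr, (float("-inf"), float("inf")))
        adjusted[attr] = min(max(value, lo), hi)  # pull value into [lo, hi]
    return adjusted

# Hypothetical engine-side ranges and a predicted instruction to adjust.
ranges = {"Intensity": (0.0, 100000.0), "AttenuationRadius": (0.0, 10000.0)}
instruction = {"Intensity": 150000.0, "AttenuationRadius": 500.0}

adjusted = clamp_light_instruction(instruction, ranges)
assert adjusted["Intensity"] == 100000.0        # out-of-range value clamped
assert adjusted["AttenuationRadius"] == 500.0   # in-range value unchanged
```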
In an alternative embodiment, the prediction module is further configured to:
based on a preset prompt template, splicing the light source information of the light source instance and the light source demand text to obtain a target prompt, and inputting the target prompt into the light source setting prediction model.
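The splicing step can be sketched as simple template filling; the template wording and field names here are illustrative assumptions only.

```python
# A hypothetical preset prompt template with slots for the two inputs.
PROMPT_TEMPLATE = (
    "Current light sources in the scene:\n{light_info}\n"
    "User requirement: {demand}\n"
    "Output the light source attributes and attribute values to set."
)

def build_target_prompt(light_info, demand):
    """Splice light source information and the demand text into one prompt."""
    return PROMPT_TEMPLATE.format(light_info=light_info, demand=demand)

prompt = build_target_prompt(
    light_info="PointLight_1: Intensity=5000, Color=(255,255,255)",
    demand="Make the room feel like warm dusk lighting.",
)
assert "PointLight_1" in prompt and "warm dusk" in prompt
```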
In an alternative embodiment, the second obtaining module is further configured to:
creating a custom plug-in, wherein the custom plug-in provides a graphical user interface;
Integrating the custom plug-in into the rendering engine, wherein the integrated custom plug-in comprises a control interface of a light source Actor, and the light source instance is an instance of the light source Actor;
the obtaining the light source information of all the light source instances in the virtual scene includes:
The custom plug-in obtains light source information of all light source instances in the virtual scene from the rendering engine through a control interface of the light source Actor;
the obtaining the input light source demand text includes:
The custom plug-in receives an input light source demand text through a graphical user interface;
And the custom plug-in receives a light source setting instruction corresponding to the light source demand text.
In an alternative embodiment, the prediction module is configured to:
Constructing a Python server, and integrating the light source setting prediction model into the Python server;
The step of inputting the light source information of the light source instance and the light source demand text into a light source setting prediction model to obtain a light source setting instruction corresponding to the light source demand text, includes:
The Python server inputs the light source information of the light source instance and the light source demand text into a light source setting prediction model to obtain a light source setting instruction corresponding to the light source demand text;
and the Python server sends a light source setting instruction corresponding to the light source demand text to the custom plug-in.
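The Python server's role can be sketched as a single request handler. The JSON payload fields are assumptions, and a stub stands in for the real light source setting prediction model; an actual deployment would wrap this handler in an HTTP server reachable by the custom plug-in.

```python
import json

def predict_light_settings(light_info, demand_text):
    """Stub standing in for the fine-tuned multimodal prediction model."""
    return {"Intensity": 3000.0, "Color": [255, 200, 150]}

def handle_request(raw_body):
    """Decode a plug-in request and encode the model's instruction as JSON."""
    payload = json.loads(raw_body)
    instruction = predict_light_settings(payload["light_info"],
                                         payload["demand_text"])
    return json.dumps({"instruction": instruction})

body = json.dumps({"light_info": "PointLight_1: Intensity=5000",
                   "demand_text": "warm dusk lighting"})
response = json.loads(handle_request(body))
assert "Intensity" in response["instruction"]
```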
In an alternative embodiment, the device further comprises a training module, and the training module is specifically configured to:
Acquiring a training data set, wherein the training data set comprises a plurality of groups of sample pairs, and the sample pairs are composed of light source setting descriptions, light source attributes and attribute values;
Acquiring a preset multi-modal model, wherein the preset multi-modal model has universal text understanding capability and visual perception capability;
loading the preset multimodal model, freezing the weight matrix of the model, and introducing a first low-rank matrix and a second low-rank matrix to obtain a light source setting prediction model to be trained, wherein a matrix obtained by multiplying the first low-rank matrix by the transpose of the second low-rank matrix has the same dimensions as the weight matrix;
And inputting the training data set into the light source setting prediction model to be trained, updating the first low-rank matrix and the second low-rank matrix, and obtaining the light source setting prediction model after training is completed.
In an alternative embodiment, the training module is further configured to:
Carrying out partition storage on the weight matrix, the first low-rank matrix and the second low-rank matrix;
after the training data set is input into the light source setting prediction model to be trained and the first low-rank matrix and the second low-rank matrix are updated, the method further comprises:
and carrying out weight combination on the weight matrix, the first low-rank matrix and the second low-rank matrix to obtain a target weight matrix.
The light source component control device of the rendering engine provided in this embodiment may be used to execute the technical scheme of the light source component control method embodiment of the rendering engine, and its implementation principle and technical effect are similar, and this embodiment will not be described here again.
Fig. 5 is a schematic diagram of the hardware structure of an electronic device according to a fifth embodiment of the present application. As shown in Fig. 5, the electronic device 500 of this embodiment includes a processor 501 and a memory 502, wherein:
the memory 502 is configured to store computer-executable instructions;
the processor 501 is configured to execute computer-executable instructions stored in the memory to implement the steps executed by the light source component control method of the rendering engine in the above embodiment. Reference may be made in particular to the relevant description of the embodiments of the method described above.
Alternatively, the memory 502 may be separate or integrated with the processor 501.
When the memory 502 is provided separately, the electronic device further comprises a bus 503 for connecting said memory 502 and the processor 501.
The sixth embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions. When a processor executes the computer-executable instructions, the technical solution corresponding to the light source component control method of the rendering engine in any one of the foregoing embodiments, as executed by the electronic device, is implemented.
The seventh embodiment of the application further provides a computer program product comprising a computer program, the computer program being stored in a readable storage medium. At least one processor of the electronic device can read the computer program from the readable storage medium, and the at least one processor executes the computer program so that the electronic device executes the technical solution corresponding to the light source component control method of the rendering engine in any of the above embodiments.
While the application has been described in terms of preferred embodiments, it is not intended to be limiting, but rather, it will be apparent to those skilled in the art that various changes and modifications can be made herein without departing from the spirit and scope of the application as defined by the appended claims.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The integrated modules, when implemented in the form of software functional modules, may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, a network device, etc.) or a processor to perform some of the steps of the methods according to the embodiments of the application.
It should be understood that the above processor may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the present application may be embodied directly in a hardware processor for execution, or executed by a combination of hardware and software modules in a processor.
The memory may comprise a high-speed RAM memory, and may further comprise a non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk, an optical disk, etc.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, or an Extended Industry Standard Architecture (EISA) bus, among others. Buses may be divided into address buses, data buses, control buses, etc. For ease of illustration, the buses in the drawings of the present application are not limited to only one bus or one type of bus.
The storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of implementing the various method embodiments described above may be implemented by hardware associated with program instructions. The foregoing program may be stored in a computer readable storage medium. The program, when executed, performs the steps comprising the method embodiments described above, and the storage medium described above includes various media capable of storing program code, such as ROM, RAM, magnetic or optical disk.
It should be noted that the above embodiments are merely for illustrating the technical solution of the present application and not for limiting the same, and although the present application has been described in detail with reference to the above embodiments, it should be understood by those skilled in the art that the technical solution described in the above embodiments may be modified or some or all of the technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the scope of the technical solution of the embodiments of the present application.