Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In current game scenarios, enabling the "multiplayer team voice" function greatly enhances the interactivity of the game and the player experience. However, while running, the game voice function competes for system resources, especially CPU resources, with numerous modules such as image rendering, network communication, and physics simulation. Once the multiplayer team voice function occupies excessive CPU resources, other core game functions such as image rendering and data loading may not be processed in time, causing stuttering and delay and seriously degrading the player experience.
To address such issues, optimization usually begins with the game's resource usage at run time, for example by improving source-code compilation efficiency, managing CPU multi-core allocation, optimizing network bandwidth usage, and the like. The conventional approach is "static" optimization according to a pre-evaluated strategy, such as compiling at a fixed time or executing tasks on a fixed CPU core. However, when the game enters multiplayer team voice mode, the resource demand changes rapidly and unpredictably, so a static strategy struggles to cope with transient load spikes.
In the prior art, the following means are generally adopted to reduce stuttering while a game is running:
Bytecode compilation, in which the source code is compiled before the game runs, reducing the real-time compilation pressure on the CPU after the game starts.
CPU core matching, in which tasks with high computational demands are assigned to the most suitable cores according to the characteristics of the different CPU cores, so as to improve overall efficiency.
Resource preloading, in which commonly used resources are loaded into memory before the game starts or before a scene switch, reducing the stuttering caused by real-time loading.
However, when the multiplayer team voice function is enabled, a fixed "bytecode compilation + CPU core allocation" strategy is still not flexible enough; lacking dynamic scheduling capability, it is difficult to guarantee a balanced allocation of resources between voice processing and other core functions in a continuously changing real-time game scene.
Based on this background, the key problem to be solved by the present application is how to reduce the stuttering caused by excessive CPU occupation when multiplayer team voice is enabled, thereby ensuring the fluency of the game. In other words, a mechanism is needed that can dynamically schedule and flexibly allocate tasks and resources according to real-time states (CPU load, memory usage, network conditions, etc.) while the game runs, so as to avoid conflicts or bottlenecks in CPU usage between the multiplayer voice function and other game functions.
The following describes the technical scheme of the present application in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of a game performance optimization method based on dynamic scheduling according to an embodiment of the present application. As shown in fig. 1, the game performance optimization method based on dynamic scheduling specifically may include:
S101, pre-compiling the source code of the game voice tool and converting it into bytecode;
S102, preloading the resource files of the game voice tool, caching the preloaded resource files in memory, and collecting operation data in real time while the game is running;
S103, dynamically adjusting the bytecode compilation strategy based on the operation data, and reducing the execution priority of the bytecode compilation task when it is monitored that the multiplayer team voice function is on and the CPU load reaches a preset condition;
S104, dynamically scheduling the allocation of CPU cores based on the operation data, assigning high-priority tasks to CPU cores whose load is below a preset load threshold and assigning other tasks to the remaining CPU cores;
S105, scheduling the resource files and performing intelligent cache management according to the operation data, caching frequently accessed resource files preferentially or loading them in the background, and giving priority to the resource files required by voice processing in a multiplayer team voice scene;
S106, when it is monitored that the network bandwidth or network delay reaches a preset condition, adjusting the multiplayer voice data transmission strategy based on the operation data so as to reduce voice data quality or increase the audio compression rate.
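By way of illustration only, the decision loop of steps S101–S106 may be sketched in Python as follows; all names, fields, and threshold values are hypothetical assumptions for the sketch rather than limitations of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class RuntimeMetrics:
    # Snapshot of the operation data collected in S102 (illustrative fields).
    cpu_load: float        # aggregate CPU utilisation, 0.0-1.0
    voice_party_on: bool   # state of the multiplayer team voice function
    bandwidth_kbps: float  # available network bandwidth
    latency_ms: float      # measured network delay

# Illustrative thresholds; a real implementation would tune these per device.
CPU_HIGH = 0.85
BANDWIDTH_LOW_KBPS = 64.0
LATENCY_HIGH_MS = 150.0

def schedule_tick(m: RuntimeMetrics) -> list:
    """One pass of the dynamic-scheduling loop covering S103-S106."""
    actions = []
    if m.voice_party_on and m.cpu_load >= CPU_HIGH:
        actions.append("lower_bytecode_compile_priority")        # S103
    actions.append("rebalance_cpu_cores")                        # S104
    actions.append("refresh_resource_cache")                     # S105
    if m.bandwidth_kbps < BANDWIDTH_LOW_KBPS or m.latency_ms > LATENCY_HIGH_MS:
        actions.append("reduce_voice_quality_or_raise_compression")  # S106
    return actions
```

The sketch deliberately returns action names rather than performing them, since the concrete scheduling operations are described in the embodiments below.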
In some embodiments, pre-compiling source code of a game speech tool to convert the source code to bytecode includes:
Reading the source code of the game voice tool in the installation or starting stage of the game voice tool, and analyzing the source code to generate an intermediate representation required by compiling;
based on the intermediate representation, performing compiling operation in a preset compiling environment to generate a corresponding byte code file;
and storing the byte code file in a storage position corresponding to the game voice tool, and executing a corresponding functional module by calling the byte code file when the game voice tool runs.
Specifically, in this embodiment, the source code of the game voice tool may be pre-compiled once in the installation stage or the start stage of the game voice tool, and the whole pre-compiling process includes the following steps:
First, the system automatically starts the pre-compilation module during the installation phase of the game voice tool, for example when the game voice tool is first installed on a user terminal device (such as a PC, a game console, or a smart mobile terminal), or during its first start-up. The pre-compilation module first accesses the source code files of the game voice tool (such as script files and logic function module files) in the tool's installation directory or software package through the file reading unit, and parses them. Specifically, the file reading unit obtains the source code content through progressive scanning or whole-file reading, and the code analysis unit parses the grammatical structure of the source code to generate an Intermediate Representation (IR) in a unified format convenient for subsequent compilation. The intermediate representation serves as a unified, language-independent code form, is suitable for various game compilation environments, and facilitates subsequent compilation processing.
Further, the generated intermediate representation is transmitted to a preset compilation environment for unified compilation processing; the preset compilation environment may be integrated in the installation program of the game client, or distributed as an independent module along with the game installation package. In this embodiment, the compilation environment may be a Just-in-Time (JIT) or Ahead-of-Time (AOT) compilation framework, so as to adapt to different terminal execution environments.
In a compilation environment, a compilation unit optimizes and converts intermediate representations, including but not limited to optimization steps such as code compaction, removing redundant instructions, instruction reordering, etc., to generate a bytecode file suitable for direct execution. The bytecode file is used as an intermediate code form which is closer to a machine language, has higher execution efficiency compared with the original source code, and has significantly reduced CPU resource occupation required when the game voice tool runs.
The compiled bytecode file is then stored in a specific storage location corresponding to the game voice tool client, for example a dedicated cache folder under the installation directory or a designated area of the user terminal's storage system. Specifically, in this embodiment, a dedicated bytecode cache directory may be established under the game voice tool's installation directory, and the compiled bytecode files may be stored there in a unified manner, so that they can be quickly retrieved and invoked when the game voice tool client starts.
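As one concrete illustration of the scheme above, Python's standard `py_compile` module can perform exactly this kind of ahead-of-time compilation of script sources into bytecode files kept in a dedicated cache directory. The directory layout and function name here are hypothetical:

```python
import pathlib
import py_compile

def precompile_tool_sources(source_dir: str, cache_dir: str) -> list:
    """Compile every .py source under source_dir into a bytecode (.pyc) file
    stored in cache_dir, and return the paths of the compiled files."""
    cache = pathlib.Path(cache_dir)
    cache.mkdir(parents=True, exist_ok=True)
    compiled = []
    for src in sorted(pathlib.Path(source_dir).rglob("*.py")):
        target = cache / (src.stem + ".pyc")
        # doraise=True surfaces syntax errors instead of silently skipping them.
        py_compile.compile(str(src), cfile=str(target), doraise=True)
        compiled.append(str(target))
    return compiled
```

At each subsequent start-up, the tool would load these `.pyc` files directly instead of recompiling the sources, which is the behaviour described in the next paragraph.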
In each subsequent running stage of the game voice tool, the game voice tool client does not need to repeat the real-time compilation from source code to bytecode; it directly calls the pre-compiled bytecode file from the cache location. Specifically, when the game voice tool starts, the bytecode loading unit automatically retrieves the bytecode cache directory, quickly loads the bytecode file, and executes it in the corresponding virtual machine or execution engine, thereby greatly reducing the CPU load while the game voice tool runs.
Further, in some examples, the game speech tool client may also set a bytecode version management mechanism. For example, when the game voice tool is updated or the code is changed, the client can judge whether the corresponding byte code file is updated or not through a version comparison mechanism, and if the source code is changed, the client only re-executes the pre-compiling process on the changed source code file to update the corresponding byte code file without re-compiling all the source codes, thereby further optimizing the compiling efficiency.
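The version-management mechanism described above can be illustrated with content hashes: a manifest records a digest per source file, and only files whose digest changed are returned for recompilation. The manifest format and function name are illustrative assumptions:

```python
import hashlib
import json
import pathlib

def sources_needing_recompile(source_dir: str, manifest_path: str) -> list:
    """Compare each source file's SHA-256 digest against the stored manifest
    and return only the files that changed since the last build."""
    manifest_file = pathlib.Path(manifest_path)
    old = json.loads(manifest_file.read_text()) if manifest_file.exists() else {}
    new, changed = {}, []
    for src in sorted(pathlib.Path(source_dir).rglob("*.py")):
        digest = hashlib.sha256(src.read_bytes()).hexdigest()
        new[str(src)] = digest
        if old.get(str(src)) != digest:
            changed.append(str(src))
    # Persist the fresh digests so the next run only sees new changes.
    manifest_file.write_text(json.dumps(new))
    return changed
```

An unchanged tree therefore yields an empty list, and a single edited file triggers recompilation of that file alone, matching the incremental behaviour described above.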
Through the above pre-compilation scheme, this embodiment performs a one-time, ahead-of-time compilation of the game voice tool's source code, avoiding the excessive CPU occupation caused by repeatedly compiling source code at run time, and significantly improving game performance and the player experience.
In some embodiments, preloading a resource file of a game voice tool, caching the preloaded resource file in a memory, and collecting operation data in real time in a game operation process, including:
during the installation or starting stage of the game voice tool, identifying the resource file needing to be loaded preferentially according to the resource configuration information, and loading the related data of the resource file into the memory;
the method comprises the steps of performing cache registration on a resource file which is completely preloaded by utilizing a cache management module, and storing the resource file in a preset memory buffer area;
After the game enters an operation state, collecting operation data comprising CPU load, memory occupation, network bandwidth and network delay by using a monitoring component;
and uploading the operation data to a scheduling control unit for dynamically adjusting the byte code compiling strategy, the CPU core allocation and the subsequent resource scheduling.
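The collect-and-upload loop in the steps above may be sketched as a small monitoring component that samples a probe on each tick and forwards the sample to a sink standing in for the scheduling control unit. Class and parameter names are hypothetical:

```python
import time
from collections import deque

class Monitor:
    """Samples operation data periodically and forwards each sample to a sink
    (standing in for the scheduling control unit of this embodiment)."""

    def __init__(self, probe, sink, period_s=1.0, history=60):
        self.probe = probe          # callable returning a metrics dict
        self.sink = sink            # callable receiving each sample
        self.period_s = period_s    # intended sampling period
        self.samples = deque(maxlen=history)  # rolling history window

    def tick(self):
        """Take one sample, retain it, and push it to the scheduler."""
        sample = {"t": time.monotonic(), **self.probe()}
        self.samples.append(sample)
        self.sink(sample)
        return sample
```

In a real client the probe would read CPU load, memory occupation, bandwidth, and delay from the operating system; here it is injected so the scheduling path stays testable.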
Specifically, in this embodiment, in order to reduce audio loading delay and resource scheduling conflict of the game voice tool in a multi-player voice scenario, the resource file is pre-loaded in the installation or starting stage of the game voice tool, and the resource file is cached in the memory, and in combination with real-time acquisition of game operation data, dynamic optimization of voice functions is achieved.
In the installation or starting stage of the game voice tool, the system firstly reads the pre-configured resource configuration information, and the priority and the use frequency of resources such as an audio codec, a voice processing script, a common sound effect file and the like which are commonly used by the game voice tool are recorded in the configuration information.
The resource loading module automatically identifies target resource files needing to be loaded preferentially according to the resource configuration information, and loads relevant data of the target resource files into the local memory.
In some examples, the audio processing script and the voice data model for high priority or high frequency use may be preloaded sequentially according to the access frequency or priority order of the resources, ensuring that voice initialization and processing can be completed at the fastest speed when the game enters the voice function.
In order to ensure the subsequent quick positioning and calling of the preloaded resource file, the embodiment provides a cache management module. After the resource preloading is identified and completed, the cache management module generates a corresponding cache record for each loaded resource file, wherein the cache record comprises a resource identifier, a cache address, loading time, a resource priority and the like.
The cache records are stored in a special cache registry and correspond to the directory structure or file index of the game speech tool. When the related resources of voice processing are required to be called later, the system can quickly position the resource files in the memory through the registry without reading from the disk again, so that the resource loading delay is reduced.
When the game formally enters the running state, the embodiment continuously runs a monitoring component in the game background. The monitoring component collects operation data including CPU load, memory occupation, network bandwidth and network delay in real time in a preset sampling period or event triggering mode.
When the multiplayer voice function is active, the monitoring component may additionally collect the transmission rate of voice data packets, the audio decoding time, and the like, and record them together with the general system resource usage information.
In this embodiment, the operation data collected by the monitoring component is transmitted to the scheduling control unit through a network or an inter-process communication manner. The scheduling control unit is responsible for analyzing and comparing the received operation data and judging whether the current system has the conditions of too high CPU load, tension of memory, insufficient network bandwidth or too large network delay.
Once the operation data deviates significantly from a preset threshold, the scheduling control unit can dynamically adjust the bytecode compilation strategy, the CPU core allocation, and subsequent resource scheduling in real time or in stages according to the operation data. For example, when it detects that the memory occupied by voice processing is about to exceed the set memory threshold, the scheduling control unit may instruct the cache management module to perform cache cleaning or resource degradation in the background, releasing part of the memory to other critical tasks.
In some examples, if the game speech tool detects in the background that most of the speech processing scripts are not actually invoked, the cache management module may remove the unused resources with too low frequency from the memory without affecting the smoothness of the speech functions, so as to save system resources.
If the player switches to a scene unrelated to the voice tool during play, the system can record, via the monitoring component, the access frequency of voice-related resources in that scene and dynamically lower their priority in the scheduling control unit, so that they become candidate resources to be cleared or swapped out when memory runs short, ensuring the performance of other key game functions.
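The eviction behaviour of the two preceding examples can be sketched as a frequency-based candidate selection, where voice resources are protected while a team-voice scene is active. The record layout is an illustrative assumption:

```python
def eviction_candidates(cache_records, min_hits, protect_voice=True):
    """Select cached resources whose access count fell below min_hits.

    cache_records: {resource_id: {"hits": int, "is_voice": bool}}
    protect_voice: while team voice is active, voice resources stay resident.
    """
    victims = []
    for rid, rec in cache_records.items():
        if protect_voice and rec["is_voice"]:
            continue  # pinned: voice resources are not evicted in voice scenes
        if rec["hits"] < min_hits:
            victims.append(rid)
    return sorted(victims)
```

When the player leaves the voice scene, calling the same function with `protect_voice=False` lets the rarely used voice resources join the eviction candidates, mirroring the priority downgrade described above.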
In summary, by preferentially preloading and caching and registering the game voice tool resource files and dynamically adjusting the system resource allocation by combining the game running data collected in real time, the embodiment effectively reduces the resource calling delay and the system load risk under the multi-person voice, and provides smoother game voice communication experience for players.
In some embodiments, dynamically adjusting a bytecode compilation strategy based on operational data, when it is monitored that a multi-person speech team function is on and a CPU load reaches a preset condition, reducing execution priority of a bytecode compilation task, including:
Acquiring operation data by using a monitoring component, wherein the operation data comprises monitoring information for indicating the starting state of a multi-person voice team function and the load level of a CPU;
transmitting the monitoring information to a compiling management module to determine the current execution priority of the byte code compiling task;
when the compilation management module recognizes that the multiplayer team voice function is in an on state and the CPU load meets the preset condition, sending a compilation-priority adjustment instruction to the scheduling control unit;
And updating and adjusting the byte code compiling strategy according to the compiling priority adjusting instruction so as to reduce the executing priority of the byte code compiling task.
Specifically, first, the game client starts a lightweight background monitoring component in the running process, and the lightweight background monitoring component is used for collecting and recording system running data in real time. The operation data at least comprises the current load level of the CPU, the memory occupancy rate, the network bandwidth use condition, the network delay and the like. In addition, the monitoring component also monitors the state information of the game client in real time, for example, whether the multi-person voice team forming function is started or not, packages the information into a data message in a unified format, and uploads the data message to the compiling management module periodically or in real time.
Further, after the compiling management module receives the operation data, CPU load information and multi-user voice team function state information in the data message are analyzed and compared. For example, the compiling management module is preset with a plurality of CPU load thresholds, and the thresholds can be dynamically defined according to different game running scenes (such as the number of online players, the game progress state or the voice communication flow size). When the compiling management module recognizes that the current CPU load level exceeds a preset load threshold corresponding to the multi-person voice team forming function, the current system is judged to be in a high load state and the multi-person voice function is judged to be in an on state.
After judging that the conditions are met, the compilation management module automatically sends a compilation-priority adjustment instruction to the scheduling control unit. Specifically, the instruction may include an explicit priority setting for the compilation task, for example lowering the bytecode compilation task from its original higher level to a preset lower level, so that under the current high-load scenario more CPU resources can be released for the multiplayer voice processing task and other key game tasks, avoiding performance bottlenecks caused by resource competition.
After receiving the compilation-priority adjustment instruction from the compilation management module, the scheduling control unit updates, in real time, the bytecode compilation strategy currently executed by the game client based on the instruction. For example, the scheduling control unit in this embodiment adopts a tiered priority model to cope with different load scenarios while the game runs. When the system load is low (for example, few players are online), the priority of the bytecode compilation task can be raised automatically so that compilation completes quickly and the overall responsiveness of the game improves. When multiplayer team voice is on and the system is under higher load, the scheduling control unit automatically lowers the priority of the bytecode compilation task and defers the frequency and order of compilation operations, ensuring a sufficient supply of the resources required by the voice processing task and avoiding in-game stuttering.
Further, to achieve finer management of different scene loads, the present embodiment may also set a dynamically adjusted priority range. For example, the priorities of the compiling tasks can be divided into a plurality of levels (such as three levels of high, medium and low or more), and the scheduling control unit can analyze the respective resource requirements in real time according to the operation data, so as to flexibly switch between different priorities. When the running data indicates that the CPU load level is lower than a preset threshold value again or the multi-person voice team forming function is closed, the scheduling control unit can restore or improve the execution priority of the compiling task in real time so as to efficiently complete the residual compiling operation.
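The tiered priority model with recovery described above can be sketched as a small decision function. The use of two watermarks (a hysteresis band) is an illustrative design choice to keep the priority from flapping around a single threshold; all names and values are hypothetical:

```python
from enum import IntEnum

class Priority(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def compile_task_priority(voice_party_on: bool, cpu_load: float,
                          high_water: float = 0.80,
                          low_water: float = 0.50) -> Priority:
    """Pick the bytecode-compilation task's priority from the run-time state.

    Two watermarks form a hysteresis band: between them the priority stays
    at MEDIUM, so small load fluctuations do not cause constant switching.
    """
    if voice_party_on and cpu_load >= high_water:
        return Priority.LOW    # yield the CPU to multiplayer voice processing
    if cpu_load <= low_water:
        return Priority.HIGH   # system idle: finish remaining compilation fast
    return Priority.MEDIUM
```

When team voice is turned off or the load drops below the low watermark again, the function naturally restores a higher priority, matching the recovery behaviour described above.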
By monitoring the game's running state and dynamically adjusting the priority of the bytecode compilation task in real time, this embodiment manages CPU resources more finely and dynamically in a multiplayer team voice scenario. Compared with the static configuration of the prior art it is more flexible, significantly reducing the probability of stuttering under high load and improving the overall player experience.
In some embodiments, the allocation of CPU cores based on the operational data is dynamically scheduled, high priority tasks are allocated to CPU cores with loads below a preset load threshold for execution, and other tasks are allocated to the remaining CPU cores, including:
Comparing and analyzing the load level of the CPU cores by utilizing a scheduling control unit, and determining the CPU cores with the load lower than a preset load threshold;
and distributing the tasks with high priority to CPU cores with loads lower than a preset load threshold for execution, and distributing other tasks except the tasks with high priority to the rest CPU cores.
Specifically, first, the game client monitors and collects the running state information of each CPU core, such as the load level, task queuing, and idle degree of each CPU core, in real time by using the monitoring component, and periodically uploads the real-time running data to the scheduling control unit. The real-time running data can be obtained by real-time statistics of indexes such as the number of tasks running on each CPU core, the thread occupancy rate, the CPU clock period occupancy proportion and the like.
Then, the scheduling control unit performs comparative analysis on the load condition of each CPU core based on the received real-time operation data. For example, a CPU core load threshold is preset in the scheduling control unit, and the threshold can be determined in advance based on experience data or test data and can be flexibly adjusted according to the actual situation when the game runs. At each load comparison analysis, the scheduling control unit identifies and determines a set of CPU cores with load levels below the preset threshold, which are considered as currently available CPU cores capable of preferentially assigning critical tasks.
After determining the set of lightly loaded CPU cores, the scheduling control unit executes a dynamic task allocation strategy. For example, the scheduling control unit divides tasks into high-priority tasks and other common tasks according to their importance to the game and their real-time requirements. High-priority tasks include delay-sensitive, resource-intensive tasks such as the multiplayer voice processing task and real-time network data processing tasks. The scheduling control unit preferentially assigns the high-priority tasks to the identified lightly loaded CPU cores for execution, so that key tasks obtain sufficient computing resources and task execution delay caused by resource shortage or competition is avoided.
Accordingly, the scheduling control unit distributes other tasks in the game, such as an image rendering task, a physical simulation task, a background data processing task and other common tasks which are relatively insensitive to delay or relatively stable in resource demand, to other CPU cores with relatively high loads or close to a threshold value, so that the balanced distribution of resources is ensured, and the single CPU core is prevented from being in a high-load running state for a long time.
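The allocation rule of the two preceding paragraphs may be sketched as follows: high-priority tasks go round-robin onto cores below the load threshold, and the remaining tasks onto the other cores. The data shapes are illustrative assumptions:

```python
def assign_tasks(core_loads, tasks, load_threshold=0.70):
    """Map tasks to CPU core indices based on per-core load.

    core_loads: list of per-core utilisation values in 0.0-1.0.
    tasks: list of (task_name, is_high_priority) tuples.
    Returns {task_name: core_index}.
    """
    light = [i for i, load in enumerate(core_loads) if load < load_threshold]
    heavy = [i for i in range(len(core_loads)) if i not in light]
    heavy = heavy or light  # if every core is light, reuse them for all tasks
    plan, li, hi = {}, 0, 0
    for name, is_high in tasks:
        if is_high and light:
            plan[name] = light[li % len(light)]  # round-robin on light cores
            li += 1
        else:
            plan[name] = heavy[hi % len(heavy)]  # remaining cores get the rest
            hi += 1
    return plan
```

If no core is below the threshold, high-priority tasks simply share whatever cores exist; a production scheduler would additionally trigger the rebalancing described next.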
Further, in order to more effectively implement dynamic load balancing and task scheduling of the CPU core, the scheduling control unit in this embodiment may employ an adaptive load balancing algorithm. Specifically, the algorithm continuously tracks the real-time load change of each CPU core, when the system monitors that the load of one CPU core rises above a preset threshold or is obviously higher than that of other cores, the scheduling control unit automatically triggers a task redistribution mechanism to migrate part of tasks on the core to other CPU cores with lower loads for execution, so that the load balance among the CPU cores is maintained in real time.
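A minimal sketch of this task-migration mechanism, under the simplifying assumption that the most recently assigned task on an overloaded core is the one migrated to the currently least-loaded core:

```python
def rebalance(core_tasks, core_loads, threshold=0.85):
    """Migrate one task off each overloaded core to the least-loaded core.

    core_tasks: {core_index: [task names]} - mutated in place.
    core_loads: list of per-core utilisation values in 0.0-1.0.
    Returns the list of moves performed as (task, src_core, dst_core).
    """
    moves = []
    for core, load in enumerate(core_loads):
        # Only migrate when the core is overloaded and has a task to spare.
        if load > threshold and len(core_tasks.get(core, [])) > 1:
            dest = min(range(len(core_loads)), key=lambda c: core_loads[c])
            if dest != core:
                task = core_tasks[core].pop()
                core_tasks.setdefault(dest, []).append(task)
                moves.append((task, core, dest))
    return moves
```

A real adaptive balancer would also estimate each task's cost and migration overhead before moving it; the sketch only shows the trigger-and-migrate structure.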
In addition, the present embodiment may also configure CPU core preferences of different types of tasks, for example, multi-person voice and real-time network communication tasks may be preferentially allocated to a specific low-load core or a specific CPU core with better performance (such as a core with a higher main frequency), so as to further improve the execution efficiency and response speed of a high-priority task. Meanwhile, for computationally intensive tasks such as image rendering or physical simulation, the tasks can be dynamically and evenly distributed to a plurality of CPU cores, so that the multi-core parallel processing capability of the CPU is fully utilized, and the overall resource utilization efficiency of the system is maximized.
By collecting CPU core load information in real time and dynamically scheduling and optimizing the assignment of tasks to CPU cores based on a load threshold, this embodiment significantly improves the response speed and running efficiency of key tasks in a multiplayer team voice scenario, effectively prevents game stuttering caused by single-core overload, and greatly improves the stability of game operation and the player experience.
In some embodiments, scheduling the resource files and performing intelligent cache management according to the operation data, caching frequently accessed resource files preferentially or loading them in the background, and giving priority to the resource files required by voice processing in a multiplayer team voice scene, includes:
analyzing the resource access frequency, identifying target resources exceeding a preset access threshold and marking the target resources as high-priority resources;
Loading the high-priority resource into a preset memory cache area, or preferentially executing the loading operation of the high-priority resource in a background process;
when it is monitored that the multiplayer team voice function is in an on state, raising the scheduling priority of the resources required by voice processing;
And continuously monitoring the operation data, and when the resource access frequency or the multi-user voice team status is detected to change, updating the marked loading and caching strategy of the high-priority resource or the voice processing resource in real time.
Specifically, firstly, the system collects access frequency information of each resource file in the game running process in real time through the monitoring component so as to determine the actual use condition of each resource in the game process. The monitoring component continuously tracks, counts and records the access frequency information of the resources, and periodically or in real time uploads the data to the resource scheduling module for further analysis and processing.
After the resource scheduling module obtains the access frequency data, the resource scheduling module identifies target resources with the access frequency exceeding a preset threshold value by comparing the access frequency data with the preset access frequency threshold value. These identified target resources are typically frequently invoked resources during the game, such as image files for high frequency use, background music files, core game model files, and other key sound effects. The resource scheduling module then marks these resources as high priority resources, to distinguish them from other common resources that are accessed less frequently.
After determining the high priority resources, the resource scheduling module immediately executes the corresponding cache or background loading policy. Specifically, the high-priority resources can be loaded into a predefined memory cache area in advance to ensure that the resources can be accessed quickly and directly in the subsequent running stage of the game, or a special background loading process is started to preferentially execute the loading and caching operation of the high-priority resources, so that the real-time calling delay of the resources in the running process of the game is obviously shortened.
Further, this embodiment focuses on the resource allocation requirements after the multiplayer team voice function is turned on. When the monitoring component detects in real time that the multiplayer team voice function of the game client is on, the resource scheduling module correspondingly raises the scheduling priority of the related voice processing resources, preferentially allocating key resources such as memory and bandwidth to the voice processing task and ensuring it is supported by sufficient resources. In particular, the resources required by voice processing (such as the audio codec module and the voice data buffer) are dynamically set to the highest priority in the cache registry, and are preferentially allocated, preferentially loaded, and preferentially invoked under resource-competition scenarios, ensuring the real-time communication quality and smoothness of multiplayer team voice.
Meanwhile, the resource scheduling module of this embodiment continuously monitors and analyzes real-time operation data, including changes in resource access frequency and in the state of the multiplayer team voice function. When the system observes that the access frequency of certain resources rises or falls significantly, or that the multiplayer team voice state switches from on to off or vice versa, the resource scheduling module immediately adjusts the caching and loading strategy of the marked high-priority resources or voice processing resources in real time. For example, when the multiplayer team voice function is turned off, the voice processing resources originally marked as high priority are automatically lowered to normal or lower priority, freeing the occupied cache space and bandwidth for other tasks.
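The priority update described above can be sketched as a pure function over access counts and the team-voice state; the three-level priority encoding and the threshold are illustrative assumptions:

```python
def update_resource_priorities(access_counts, voice_resources, voice_party_on,
                               hot_threshold=100):
    """Recompute cache priorities from access frequency and voice state.

    access_counts: {resource_id: access count in the current window}
    voice_resources: ids of resources needed by voice processing
    Returns {resource_id: priority} with 1=normal, 2=high, 3=pinned-for-voice.
    """
    prio = {}
    for rid, hits in access_counts.items():
        prio[rid] = 2 if hits >= hot_threshold else 1  # promote hot resources
    if voice_party_on:
        for rid in voice_resources:
            prio[rid] = 3  # pin voice resources while team voice is active
    return prio
```

Because the function is recomputed on each monitoring update, turning team voice off automatically drops voice resources back to a frequency-based priority, matching the downgrade example above.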
Furthermore, in practical application scenarios, this embodiment can also cooperate with an adaptive cache management algorithm to achieve more flexible resource management. For example, the resource scheduling module can adopt an adaptive priority management strategy that continuously monitors the real-time invocation of each resource and periodically or dynamically adjusts resource priorities and the caching strategy, so that the resources most frequently invoked in the current stage of the game are always kept in the cache, thereby improving the effective utilization of memory and the overall performance of the system.
In this embodiment, by analyzing the game resource access frequency and the state of the multi-person voice team scenario in real time, the loading and caching priority of resources is dynamically adjusted. This realizes fine-grained control of resource allocation and memory management, effectively reduces the stuttering caused by resource contention in multi-person voice scenarios, and significantly improves the smoothness of game operation and the quality of the user experience.
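The access-frequency marking and voice-state-linked priority adjustment described above can be illustrated with a minimal Python sketch. The class and method names (ResourceCache, record_access, the "voice/" prefix) are illustrative assumptions for this sketch, not identifiers from the described system:

```python
# Minimal sketch of the priority-based cache policy described above.
# All names and the "voice/" path convention are illustrative assumptions.

HIGH, NORMAL = 2, 1

class ResourceCache:
    def __init__(self, access_threshold):
        self.access_threshold = access_threshold
        self.access_counts = {}
        self.priority = {}
        self.voice_mode_on = False

    def record_access(self, resource):
        # Count each access and re-evaluate the resource's priority.
        self.access_counts[resource] = self.access_counts.get(resource, 0) + 1
        self._update_priority(resource)

    def set_voice_mode(self, on):
        # When multi-person voice teaming toggles, voice resources are
        # promoted to highest priority or demoted back to normal.
        self.voice_mode_on = on
        for res in self.priority:
            if res.startswith("voice/"):
                self.priority[res] = HIGH if on else NORMAL

    def _update_priority(self, resource):
        frequent = self.access_counts[resource] >= self.access_threshold
        is_voice = resource.startswith("voice/") and self.voice_mode_on
        self.priority[resource] = HIGH if (frequent or is_voice) else NORMAL
```

A frequently accessed texture and an active-voice codec resource both end up marked high priority; turning voice off demotes the voice resource again, mirroring the release of cache space described above.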
In some embodiments, when it is monitored that the network bandwidth or the network delay reaches a preset condition, adjusting the data transmission policy of the multi-person voice based on the operation data to reduce the voice data quality or increase the audio compression rate includes:
The monitoring component is used for collecting network bandwidth and network delay information, and the network bandwidth and the network delay information are compared and analyzed in the scheduling control unit to judge whether a preset condition is met;
When the network bandwidth or the network delay is judged to meet the preset condition, a data transmission strategy updating instruction is sent to the voice transmission management module;
according to the data transmission strategy updating instruction, adjusting parameter configuration of multi-user voice transmission, setting voice data quality to a predefined low level, or improving audio compression rate;
After the data transmission strategy is updated, the network bandwidth and the network delay information are continuously monitored, and when the states of the network bandwidth and the network delay information change, the dynamic adjustment of the data transmission strategy is triggered.
Specifically, the game client continuously collects data on the current network state through a monitoring component running in the background, including real-time network performance indicators such as network bandwidth utilization, network delay, and packet loss rate. The monitoring component acquires and records these performance data periodically or in real time through interaction with the network interface, and uploads the operation data to the scheduling control unit.
The scheduling control unit receives the network performance data uploaded by the monitoring component in real time, and compares the network performance data with a preset network performance threshold value in the system in real time for analysis. For example, the network performance threshold includes a lowest bandwidth threshold and a highest network delay threshold, and when the network bandwidth falls below the lowest threshold or the network delay rises above the highest threshold, the scheduling control unit determines that the current network performance reaches a condition that triggers an adjustment of the voice transmission policy.
When the network performance is judged to meet the preset condition, the scheduling control unit automatically sends a data transmission policy update instruction to the voice transmission management module. The update instruction contains explicit policy adjustment information, for example instructing the voice transmission management module to reduce the audio quality level of the current voice data transmission to a predefined lower level, or to increase the compression rate of the audio data to a predefined higher level, so as to reduce the voice data's demand on network bandwidth and system CPU resources.
After receiving the update instruction, the voice transmission management module adjusts the parameter configuration of voice data transmission in real time as instructed. For example, the voice data sampling rate is automatically lowered from the original higher rate (such as 48 kHz or 44.1 kHz) to a lower rate (such as 24 kHz or 16 kHz), or the audio coding mode is switched to a codec format with higher compression efficiency. This significantly reduces the network bandwidth and CPU computation required for multi-person voice transmission, ensuring that voice communication maintains basic fluency even when network conditions deteriorate.
Further, after the voice data transmission policy is updated, the monitoring component continues to monitor real-time performance indicators such as network bandwidth and network delay. When the system detects that network performance has recovered above the preset threshold, or that other significant changes have occurred, the scheduling control unit again triggers a dynamic policy adjustment, for example restoring the voice data to its original higher quality level or lowering the audio compression rate, so as to improve the user's voice communication experience.
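The threshold-driven policy switch described above can be sketched as a pure function from a network snapshot to transmission parameters. The thresholds (256 kbps, 150 ms) and sample rates here are illustrative assumptions only, not values specified by this embodiment:

```python
# Hedged sketch of the threshold-driven transmission-policy adjustment.
# Thresholds and parameter values are illustrative assumptions.

LOW_QUALITY = {"sample_rate_hz": 16000, "compression": "high"}
HIGH_QUALITY = {"sample_rate_hz": 48000, "compression": "normal"}

def choose_voice_policy(bandwidth_kbps, latency_ms,
                        min_bandwidth_kbps=256, max_latency_ms=150):
    """Return the transmission parameters for the current network state.

    Falling below the bandwidth floor or exceeding the latency ceiling
    triggers the degraded (low-quality, high-compression) policy;
    otherwise the original high-quality policy is restored.
    """
    if bandwidth_kbps < min_bandwidth_kbps or latency_ms > max_latency_ms:
        return LOW_QUALITY
    return HIGH_QUALITY
```

Because the function is re-evaluated on every monitoring snapshot, recovery of the network automatically restores the higher quality level, matching the dynamic re-adjustment described above.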
In an actual application scenario, the dynamic adjustment mechanism of the embodiment may be further integrated into a wider resource adaptive scheduling framework. Specifically, the game client continuously runs a resource monitoring module after being started, and the module is responsible for comprehensively monitoring multi-dimensional performance data such as CPU load, memory occupation, network state and the like and transmitting the multi-dimensional performance data to the scheduling decision module in real time. The scheduling decision module performs intelligent analysis and decision according to the real-time performance data and a predefined strategy threshold value, and determines various resource scheduling strategies including a voice data transmission strategy, a byte code compiling strategy, a CPU core allocation strategy and a resource caching strategy.
The dynamic adjustment module then dynamically implements resource scheduling and policy adjustment in real time based on the decision results of the scheduling decision module. For example, when the multi-user voice team function is started, if the network bandwidth is monitored to be reduced or the network delay is monitored to be increased, the dynamic adjustment module rapidly reduces the voice transmission quality to release the bandwidth and reduce the load of the CPU, and meanwhile, the CPU core allocation and the byte code compiling task priority are correspondingly linked and optimized to ensure the fluency of the whole game operation.
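The linked optimization described above, in which one monitoring snapshot drives the voice, compilation, and CPU-core policies together, might be sketched as follows. All field names and thresholds are assumptions for illustration, not part of the described system:

```python
# Illustrative sketch of the coordinated scheduling decision:
# one snapshot of operation data yields linked policy choices.
# Field names and thresholds are illustrative assumptions.

def schedule_decisions(snapshot):
    """snapshot: dict with 'voice_on', 'bandwidth_kbps', 'latency_ms',
    and 'cpu_load' (0.0-1.0). Returns coordinated policy choices."""
    network_poor = snapshot["bandwidth_kbps"] < 256 or snapshot["latency_ms"] > 150
    cpu_busy = snapshot["cpu_load"] > 0.8
    voice_active = snapshot["voice_on"]
    return {
        # Degrade voice quality only when voice is active and the network is poor.
        "voice_quality": "low" if (voice_active and network_poor) else "high",
        # Bytecode compilation yields when voice is active and the CPU is busy.
        "compile_priority": "low" if (voice_active and cpu_busy) else "normal",
        # Voice processing gets first pick of lightly loaded cores.
        "core_policy": "reserve_light_cores_for_voice" if voice_active else "default",
    }
```

The point of the sketch is the linkage: a single degraded snapshot simultaneously lowers voice quality, demotes compilation, and reserves light cores, rather than each subsystem reacting in isolation.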
In this embodiment, by monitoring network performance in real time, dynamically adjusting the multi-person voice data transmission policy, and integrating that policy into the resource adaptive scheduling framework, fine-grained and dynamic management of resource allocation under changing network conditions is realized, and game performance and user experience in multi-person voice team scenarios are significantly improved.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Fig. 2 is a schematic structural diagram of a game performance optimizing device based on dynamic scheduling according to an embodiment of the present application. As shown in fig. 2, the game performance optimizing apparatus based on dynamic scheduling includes:
a precompilation module 201, configured to precompile the source code of the game voice tool and convert the source code into bytecode;
a preloading module 202, configured to preload the resource files of the game voice tool, cache the preloaded resource files in memory, and collect operation data in real time while the game is running;
an adjusting module 203, configured to dynamically adjust the bytecode compilation policy based on the operation data, and reduce the execution priority of bytecode compilation tasks when it is monitored that the multi-person voice team function is turned on and the CPU load reaches a preset condition;
a dynamic scheduling module 204, configured to dynamically schedule the allocation of CPU cores based on the operation data, assign high-priority tasks to CPU cores whose load is below a preset load threshold, and assign other tasks to the remaining CPU cores;
a caching module 205, configured to schedule and intelligently cache resource files according to the operation data, preferentially cache or background-load frequently accessed resource files, and preferentially guarantee the resource files required for voice processing in a multi-person voice team scenario;
a data transmission module 206, configured to adjust the data transmission policy of the multi-person voice based on the operation data when it is monitored that the network bandwidth or network delay reaches a preset condition, so as to reduce the voice data quality or increase the audio compression rate.
In some embodiments, the pre-compiling module 201 of fig. 2 reads the source code of the game voice tool and parses the source code to generate an intermediate representation required for compiling during an installation or start-up phase of the game voice tool, performs a compiling operation in a preset compiling environment based on the intermediate representation to generate a corresponding bytecode file, stores the bytecode file in a storage location corresponding to the game voice tool, and executes a corresponding function module by calling the bytecode file when the game voice tool is running.
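The embodiment does not specify a language or toolchain for this precompilation. Purely as an analogy, Python's standard `py_compile` module performs the same install-time source-to-bytecode step: it parses the source, compiles it, and stores a bytecode file that later runs load instead of re-parsing the source. The file names below are stand-ins:

```python
# Analogy only: Python's own source-to-bytecode precompilation step.
# "voice_tool.py" is a stand-in for the game voice tool's source code.
import os
import py_compile
import tempfile

# Write a stand-in "game voice tool" source file.
src_dir = tempfile.mkdtemp()
src_path = os.path.join(src_dir, "voice_tool.py")
with open(src_path, "w") as f:
    f.write("def mix(a, b):\n    return a + b\n")

# Precompile at "install time": parse, compile, and store the bytecode
# at a storage location associated with the tool.
pyc_path = py_compile.compile(src_path, cfile=src_path + "c")
```

At run time the interpreter loads `voice_tool.pyc` directly, skipping parsing and compilation, which is the latency saving the precompilation module 201 targets.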
In some embodiments, the preloading module 202 of fig. 2 identifies resource files to be loaded preferentially according to resource configuration information and loads relevant data of the resource files into a memory in an installation or starting stage of the game voice tool, registers the preloaded resource files in a cache by using a cache management module and stores the resource files in a preset memory buffer area, collects operation data including CPU load, memory occupation, network bandwidth and network delay by using a monitoring component after the game enters an operation state, and uploads the operation data to a scheduling control unit for dynamically adjusting a byte code compiling strategy, CPU core allocation and subsequent resource scheduling.
In some embodiments, the adjustment module 203 of fig. 2 obtains operation data by using a monitoring component, where the operation data includes monitoring information indicating the on state of the multi-person voice team function and the CPU load level; transmits the monitoring information to the compilation management module to determine the current execution priority of the bytecode compilation task; sends a compilation priority adjustment instruction to the scheduling control unit when the compilation management module recognizes that the multi-person voice team function is on and the CPU load has met the preset condition; and updates the bytecode compilation strategy according to the compilation priority adjustment instruction to reduce the execution priority of the bytecode compilation task.
In some embodiments, the dynamic scheduling module 204 of FIG. 2 utilizes a scheduling control unit to compare and analyze the load levels of the CPU cores to determine CPU cores with loads below a preset load threshold, and assigns high priority tasks to CPU cores with loads below the preset load threshold for execution, and assigns other tasks than the high priority tasks to the remaining CPU cores.
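A minimal sketch of this core-assignment rule follows; the load values, the 0.5 threshold, and the round-robin fallback are illustrative assumptions, not details from the embodiment:

```python
# Sketch of the core-assignment rule: high-priority tasks go to cores
# whose load is below the threshold, other tasks to the remaining cores.
# Load values and the threshold are illustrative assumptions.

def assign_tasks(core_loads, high_priority_tasks, other_tasks, load_threshold=0.5):
    """Map task names to core indices based on per-core load."""
    light = [i for i, load in enumerate(core_loads) if load < load_threshold]
    heavy = [i for i in range(len(core_loads)) if i not in light]
    assignment = {}
    # Round-robin high-priority tasks over the lightly loaded cores.
    for n, task in enumerate(high_priority_tasks):
        assignment[task] = light[n % len(light)]
    # Other tasks share the remaining cores (fall back to light cores
    # if every core is below the threshold).
    for n, task in enumerate(other_tasks):
        assignment[task] = heavy[n % len(heavy)] if heavy else light[n % len(light)]
    return assignment
```

In a real client the per-core loads would come from the operating system and the mapping would be applied via CPU-affinity APIs; the sketch only shows the selection logic.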
In some embodiments, the caching module 205 of fig. 2 analyzes the resource access frequency, identifies target resources exceeding a preset access threshold and marks them as high-priority resources, and loads the high-priority resources into a preset memory cache area or preferentially executes their loading in a background process; when the multi-person voice team function is monitored to be in an on state, it raises the scheduling priority of the resources required for voice processing; and it continuously monitors the operation data and, when a change in the resource access frequency or the multi-person voice team state is detected, updates the loading and caching strategy of the marked high-priority resources or voice processing resources in real time.
In some embodiments, the data transmission module 206 of fig. 2 collects network bandwidth and network delay information by using a monitoring component, compares the network bandwidth and the network delay information in a scheduling control unit, determines whether a preset condition is reached, sends a data transmission policy update instruction to a voice transmission management module when it is determined that the network bandwidth or the network delay meets the preset condition, adjusts parameter configuration of multi-user voice transmission according to the data transmission policy update instruction, sets voice data quality to a predefined low level or increases audio compression rate, continues to monitor the network bandwidth and the network delay information after the data transmission policy update is completed, and triggers dynamic adjustment of the data transmission policy when the states of the network bandwidth and the network delay information change.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 3 is a schematic structural diagram of an electronic device 3 according to an embodiment of the present application. As shown in fig. 3, the electronic device 3 of this embodiment comprises a processor 301, a memory 302, and a computer program 303 stored in the memory 302 and executable on the processor 301. When the processor 301 executes the computer program 303, the steps of the various method embodiments described above are implemented. Alternatively, when the processor 301 executes the computer program 303, the functions of the modules/units in the above-described device embodiments are implemented.
Illustratively, the computer program 303 may be partitioned into one or more modules/units, which are stored in the memory 302 and executed by the processor 301 to complete the present application. One or more of the modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program 303 in the electronic device 3.
The electronic device 3 may be an electronic device such as a desktop computer, a notebook computer, a palm computer, or a cloud server. The electronic device 3 may include, but is not limited to, a processor 301 and a memory 302. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the electronic device 3 and does not constitute a limitation of the electronic device 3, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., the electronic device may also include an input-output device, a network access device, a bus, etc.
The processor 301 may be a central processing unit (Central Processing Unit, CPU) or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 302 may be an internal storage unit of the electronic device 3, for example, a hard disk or memory of the electronic device 3. The memory 302 may also be an external storage device of the electronic device 3, for example, a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) provided on the electronic device 3. Further, the memory 302 may include both an internal storage unit and an external storage device of the electronic device 3. The memory 302 is used to store the computer program and other programs and data required by the electronic device. The memory 302 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided by the present application, it should be understood that the disclosed apparatus/computer device and method may be implemented in other manners. For example, the apparatus/computer device embodiments described above are merely illustrative, e.g., the division of modules or elements is merely a logical functional division, and there may be additional divisions of actual implementations, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program may implement the steps of each of the method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, an executable file, or some intermediate form, etc. The computer-readable medium may include any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
The foregoing embodiments are merely for illustrating the technical solution of the present application, but not for limiting the same, and although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that the technical solution described in the foregoing embodiments may be modified or substituted for some of the technical features thereof, and that these modifications or substitutions should not depart from the spirit and scope of the technical solution of the embodiments of the present application and should be included in the protection scope of the present application.