
Game performance optimization method and device based on dynamic scheduling and storage medium

Info

Publication number
CN120045336A
Authority
CN
China
Prior art keywords
voice
game
data
resource
priority
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510520204.6A
Other languages
Chinese (zh)
Inventor
黄志松
冀啸天
李鹤
周义
姚茜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luxcreo Beijing Inc
Original Assignee
Qingfeng Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingfeng Beijing Technology Co Ltd
Priority to CN202510520204.6A
Publication of CN120045336A
Legal status: Pending (current)

Abstract

(Translated from Chinese)


The present application provides a game performance optimization method, device, and storage medium based on dynamic scheduling. The method includes: dynamically adjusting the bytecode compilation strategy based on running data, and lowering the execution priority of the bytecode compilation task when it is detected that the multiplayer voice team-up function is on and the CPU load reaches a preset condition; dynamically scheduling the allocation of CPU cores based on the running data, assigning high-priority tasks to CPU cores whose load is below a preset load threshold and assigning other tasks to the remaining cores; scheduling resource files and managing their caching intelligently according to the running data; and adjusting the multiplayer voice data transmission strategy based on the running data when it is detected that the network bandwidth or network delay reaches a preset condition. The present application can dynamically schedule and flexibly allocate tasks and resources according to the real-time state of the running game, so as to reduce CPU occupancy and prevent stuttering during gameplay.

Description

Game performance optimization method and device based on dynamic scheduling and storage medium
Technical Field
The present application relates to the field of game performance optimization technologies, and in particular, to a method and apparatus for optimizing game performance based on dynamic scheduling, and a storage medium.
Background
With the rapid development of online game technology, multiplayer voice team-up has gradually become one of the important functions for improving game interactivity and user experience. However, once the multiplayer voice function is enabled, the game system has to process tasks of many kinds, such as voice communication, image rendering, resource loading, and physics simulation. These tasks consume a large amount of system resources during execution, and CPU usage in particular is high.
To alleviate resource occupation during game running, the prior art generally pre-compiles source code into bytecode, preloads resource files, and performs static task allocation based on CPU core information before the game runs, in the hope of reducing CPU occupancy and task conflicts at runtime. However, in practical application scenarios, factors such as changes in the number of players, fluctuations in voice traffic, and dynamic changes in the network environment make it difficult for a fixed compilation strategy and static resource scheduling to adapt in time to real-time changes in the game's running state. As a result, CPU occupancy frequently becomes excessive when the multiplayer voice team-up function is enabled, which in turn causes frame stuttering, response delays, and the like, severely degrading the user's gaming experience.
Therefore, when the multiplayer voice team-up function is enabled, static CPU scheduling and fixed resource loading strategies struggle to adapt to load demands that change in real time during game operation, leading to excessive CPU occupancy and, in turn, stuttering during gameplay.
Disclosure of Invention
In view of the above, the embodiments of the present application provide a game performance optimization method, apparatus, and storage medium based on dynamic scheduling, so as to solve the problem in the prior art that an inflexible compilation strategy and static resource scheduling lead to stuttering caused by excessive CPU occupancy.
The first aspect of the embodiments of the present application provides a game performance optimization method based on dynamic scheduling, which includes: pre-compiling the source code of a game voice tool and converting the source code into bytecode; pre-loading the resource files of the game voice tool, caching the pre-loaded resource files in memory, and collecting running data in real time during game operation; dynamically adjusting the bytecode compilation strategy based on the running data, and lowering the execution priority of the bytecode compilation task when it is monitored that the multiplayer voice team-up function is on and the CPU load reaches a preset condition; dynamically scheduling the allocation of CPU cores based on the running data, assigning high-priority tasks to CPU cores whose load is below a preset load threshold and assigning other tasks to the remaining CPU cores; scheduling resource files and performing intelligent cache management according to the running data, caching frequently accessed resource files with priority or loading them in the background, and giving priority to the resource files required for voice processing in a multiplayer voice team-up scene; and, when it is monitored that the network bandwidth or network delay reaches a preset condition, adjusting the multiplayer voice data transmission strategy based on the running data so as to reduce voice data quality or increase the audio compression rate.
The second aspect of the embodiments of the present application provides a game performance optimization device based on dynamic scheduling, which includes: a pre-compiling module for pre-compiling the source code of a game voice tool and converting it into bytecode; a pre-loading module for pre-loading the resource files of the game voice tool, caching the pre-loaded resource files in memory, and collecting running data in real time during game operation; an adjustment module for dynamically adjusting the bytecode compilation strategy based on the running data and lowering the execution priority of the bytecode compilation task when it is monitored that the multiplayer voice team-up function is on and the CPU load reaches a preset condition; a dynamic scheduling module for dynamically scheduling the allocation of CPU cores based on the running data, assigning high-priority tasks to CPU cores whose load is below a preset load threshold and assigning other tasks to the remaining CPU cores; a caching module for scheduling resource files and performing intelligent cache management according to the running data, caching frequently accessed resource files with priority or loading them in the background, and giving priority to the resource files required for voice processing in a multiplayer voice team-up scene; and a data transmission module for adjusting the multiplayer voice data transmission strategy based on the running data when it is monitored that the network bandwidth or network delay reaches a preset condition, so as to reduce voice data quality or increase the audio compression rate.
In a third aspect of the embodiments of the present application, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
In a fourth aspect of the embodiments of the present application, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
At least one of the above technical solutions adopted in the embodiments of the present application can achieve the following beneficial effects:
The method pre-compiles the source code of a game voice tool and converts it into bytecode; pre-loads the resource files of the game voice tool, caches the pre-loaded resource files in memory, and collects running data in real time during game operation; dynamically adjusts the bytecode compilation strategy based on the running data, lowering the execution priority of the bytecode compilation task when it is monitored that the multiplayer voice team-up function is on and the CPU load reaches a preset condition; dynamically schedules the allocation of CPU cores based on the running data, assigning high-priority tasks to CPU cores whose load is below a preset load threshold and other tasks to the remaining CPU cores; schedules resource files and performs intelligent cache management according to the running data, caching frequently accessed resource files with priority or loading them in the background, and giving priority to the resource files required for voice processing in a multiplayer voice team-up scene; and, when it is monitored that the network bandwidth or network delay reaches a preset condition, adjusts the multiplayer voice data transmission strategy based on the running data so as to reduce voice data quality or increase the audio compression rate. The present application can dynamically schedule and flexibly allocate tasks and resources according to the real-time state of the running game, thereby reducing CPU occupancy and avoiding stuttering during gameplay.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic flow chart of a game performance optimization method based on dynamic scheduling according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a game performance optimization device based on dynamic scheduling according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In current game scenarios, turning on the multiplayer team voice function greatly enhances the interactivity of the game and the player experience. However, while running, the game voice function competes for system resources, especially CPU resources, with numerous modules such as image rendering, network communication, and physics simulation. Once the multiplayer voice team-up function occupies excessive CPU resources, other core game functions such as image rendering and data loading may not be processed in time, causing stuttering and delays that severely affect the player experience.
To address such issues, it is usually necessary to start with optimizing resource usage while the game is running, for example by improving source code compilation efficiency, managing multi-core CPU allocation, and optimizing network bandwidth usage. The conventional approach is to perform "static" optimization according to a pre-evaluated strategy, such as compiling at a fixed time or executing tasks on a fixed CPU core. However, when the game enters multiplayer voice team-up mode, resource demand changes rapidly and unpredictably, so a static strategy can hardly cope with transient load spikes.
In the prior art, in order to reduce stuttering during game running, the following means are generally adopted:
Bytecode compilation: compiling the source code before the game runs, reducing the real-time compilation pressure on the CPU after the game starts.
CPU core matching: assigning computation-heavy tasks to more suitable cores according to the characteristics of the different CPU cores, so as to improve overall efficiency.
Resource preloading: loading commonly used resources into memory before the game starts or before a scene switch, reducing the stuttering caused by loading in real time.
However, when the multiplayer team voice function is enabled, a fixed "bytecode compilation + CPU core allocation" strategy alone is still not flexible enough. Lacking dynamic scheduling capability in the face of a continuously changing real-time game scene, it is difficult to keep resource allocation balanced between voice processing and the other core functions.
Against this background, the key problem to be solved by the present application is how to reduce the stuttering caused by excessive CPU occupancy when multiplayer team voice is enabled, so as to ensure the fluency of the game. In other words, a mechanism is needed that can dynamically schedule and flexibly allocate tasks and resources according to the real-time state during game operation (CPU load, memory usage, network conditions, etc.), so as to avoid conflicts or bottlenecks in CPU usage between the multiplayer voice function and other game functions.
The following describes the technical scheme of the present application in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of a game performance optimization method based on dynamic scheduling according to an embodiment of the present application. As shown in fig. 1, the game performance optimization method based on dynamic scheduling specifically may include:
S101, pre-compiling the source code of a game voice tool and converting the source code into bytecode;
S102, pre-loading the resource files of the game voice tool, caching the pre-loaded resource files in memory, and collecting running data in real time during game operation;
S103, dynamically adjusting the bytecode compilation strategy based on the running data, and lowering the execution priority of the bytecode compilation task when it is monitored that the multiplayer voice team-up function is on and the CPU load reaches a preset condition;
S104, dynamically scheduling the allocation of CPU cores based on the running data, assigning high-priority tasks to CPU cores whose load is below a preset load threshold for execution, and assigning other tasks to the remaining CPU cores;
S105, scheduling resource files and performing intelligent cache management according to the running data, caching frequently accessed resource files with priority or loading them in the background, and giving priority to the resource files required for voice processing in a multiplayer voice team-up scene;
S106, when it is monitored that the network bandwidth or network delay reaches a preset condition, adjusting the multiplayer voice data transmission strategy based on the running data so as to reduce voice data quality or increase the audio compression rate.
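For illustration only, the following minimal Python sketch shows how one scheduling pass over the collected running data could map to steps S103 to S106; the names (RunData, Scheduler) and the threshold values are assumptions made for this example, not terms or figures from this application.

```python
from dataclasses import dataclass

@dataclass
class RunData:
    cpu_load: float        # overall CPU utilisation, 0.0-1.0
    voice_party_on: bool   # multiplayer voice team-up state
    bandwidth_kbps: float  # available network bandwidth
    latency_ms: float      # measured network delay

class Scheduler:
    CPU_LOAD_LIMIT = 0.85      # illustrative "preset condition" for CPU load
    MIN_BANDWIDTH_KBPS = 64    # illustrative bandwidth threshold
    MAX_LATENCY_MS = 150       # illustrative delay threshold

    def tick(self, data: RunData) -> list:
        """One scheduling pass over freshly sampled running data (S103-S106)."""
        actions = []
        # S103: demote bytecode compilation when voice team-up is on and the CPU is busy.
        if data.voice_party_on and data.cpu_load >= self.CPU_LOAD_LIMIT:
            actions.append("lower_bytecode_compile_priority")
        # S104: steer high-priority tasks to lightly loaded cores (detailed separately below).
        actions.append("rebalance_cpu_cores")
        # S105: refresh resource cache priorities from access statistics.
        actions.append("refresh_resource_cache")
        # S106: degrade voice quality or raise compression under poor network conditions.
        if data.bandwidth_kbps < self.MIN_BANDWIDTH_KBPS or data.latency_ms > self.MAX_LATENCY_MS:
            actions.append("reduce_voice_quality_or_raise_compression")
        return actions

if __name__ == "__main__":
    sample = RunData(cpu_load=0.9, voice_party_on=True, bandwidth_kbps=48, latency_ms=200)
    print(Scheduler().tick(sample))   # all four adjustments are triggered for this sample
```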
In some embodiments, pre-compiling source code of a game speech tool to convert the source code to bytecode includes:
Reading the source code of the game voice tool in the installation or starting stage of the game voice tool, and analyzing the source code to generate an intermediate representation required by compiling;
based on the intermediate representation, performing compiling operation in a preset compiling environment to generate a corresponding byte code file;
and storing the byte code file in a storage position corresponding to the game voice tool, and executing a corresponding functional module by calling the byte code file when the game voice tool runs.
Specifically, in this embodiment, the source code of the game voice tool may be pre-compiled once in the installation stage or the start stage of the game voice tool, and the whole pre-compiling process includes the following steps:
First, the system automatically starts the pre-compilation module during the installation phase of the game voice tool, for example when the game voice tool is first installed on a user terminal device (such as a PC, a game console, or a smart mobile terminal), or during the tool's first start-up. The pre-compilation module first accesses, through the file reading unit, the source code files of all game voice tools (such as script files and logic function module files) in the game voice tool's installation directory or software package, and parses them. Specifically, the file reading unit obtains the source code content by scanning it line by line or reading the file as a whole, and the code analysis unit parses the grammatical structure of the source code to generate an intermediate representation (Intermediate Representation, IR) in a unified format convenient for subsequent compilation. The intermediate representation serves as a unified, language-independent code representation, is suitable for a variety of game compilation environments, and facilitates subsequent compilation processing.
Further, the generated intermediate representation is passed to a preset compilation environment for unified compilation processing. The preset compilation environment can be integrated into the installer of the game client, or distributed as an independent module along with the game installation package. In this embodiment, the compilation environment may be a compilation framework based on Just-in-Time (JIT) or Ahead-of-Time (AOT) compilation, so as to adapt to different terminal execution environments.
In a compilation environment, a compilation unit optimizes and converts intermediate representations, including but not limited to optimization steps such as code compaction, removing redundant instructions, instruction reordering, etc., to generate a bytecode file suitable for direct execution. The bytecode file is used as an intermediate code form which is closer to a machine language, has higher execution efficiency compared with the original source code, and has significantly reduced CPU resource occupation required when the game voice tool runs.
The compiled bytecode file is then stored in a specific storage location corresponding to the game voice tool client, for example, a dedicated cache folder under an installation directory or a designated area of an in-user terminal storage system. Specifically, in this embodiment, a specific byte code cache directory may be established under the game voice tool installation directory, and the compiled byte code files may be stored in a unified manner, so that the game voice tool client may be quickly retrieved and invoked when it is started.
In each subsequent running stage of the game voice tool, the game voice tool client does not need to perform real-time compiling operation from source code to byte code again, and directly calls the byte code file which has completed precompiled from the cache position. Specifically, when the game voice tool is started, the byte code loading unit automatically retrieves the byte code cache directory, completes quick loading of the byte code file and executes the byte code file in the corresponding virtual machine or execution engine, thereby greatly reducing CPU load in the running process of the game voice tool.
Further, in some examples, the game speech tool client may also set a bytecode version management mechanism. For example, when the game voice tool is updated or the code is changed, the client can judge whether the corresponding byte code file is updated or not through a version comparison mechanism, and if the source code is changed, the client only re-executes the pre-compiling process on the changed source code file to update the corresponding byte code file without re-compiling all the source codes, thereby further optimizing the compiling efficiency.
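As a non-limiting illustration of this version-comparison idea, the sketch below recompiles only the source files whose content hash has changed since the previous pass; the manifest path, the use of SHA-256, and the function names are assumptions made for the example.

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("bytecode_cache/manifest.json")  # hypothetical cache-directory location

def file_hash(path: Path) -> str:
    """Content hash used as a lightweight per-file version stamp."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def incremental_precompile(source_files, compile_fn):
    """Recompile only new or changed sources; return the list of recompiled files."""
    manifest = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    recompiled = []
    for src in source_files:
        digest = file_hash(src)
        if manifest.get(str(src)) != digest:   # new or modified source file
            compile_fn(src)                    # produce or refresh its bytecode file
            manifest[str(src)] = digest
            recompiled.append(src)
    MANIFEST.parent.mkdir(parents=True, exist_ok=True)
    MANIFEST.write_text(json.dumps(manifest, indent=2))
    return recompiled
```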
Through the above-mentioned pre-compiling technical scheme, the embodiment effectively realizes the advanced one-time compiling processing of the source codes of the game voice tool, avoids the problem that the game voice tool occupies excessive CPU resources due to repeated source code compiling operation in running, and remarkably improves the game performance and player experience.
In some embodiments, preloading a resource file of a game voice tool, caching the preloaded resource file in a memory, and collecting operation data in real time in a game operation process, including:
during the installation or starting stage of the game voice tool, identifying the resource file needing to be loaded preferentially according to the resource configuration information, and loading the related data of the resource file into the memory;
performing cache registration on the fully preloaded resource files by means of a cache management module, and storing them in a preset memory buffer area;
After the game enters an operation state, collecting operation data comprising CPU load, memory occupation, network bandwidth and network delay by using a monitoring component;
and uploading the operation data to a scheduling control unit for dynamically adjusting the byte code compiling strategy, the CPU core allocation and the subsequent resource scheduling.
Specifically, in this embodiment, in order to reduce audio loading delay and resource scheduling conflict of the game voice tool in a multi-player voice scenario, the resource file is pre-loaded in the installation or starting stage of the game voice tool, and the resource file is cached in the memory, and in combination with real-time acquisition of game operation data, dynamic optimization of voice functions is achieved.
During the installation or start-up phase of the game voice tool, the system first reads the pre-configured resource configuration information, which records the priority and usage frequency of resources commonly used by the game voice tool, such as the audio codec, voice processing scripts, and frequently used sound effect files.
The resource loading module automatically identifies target resource files needing to be loaded preferentially according to the resource configuration information, and loads relevant data of the target resource files into the local memory.
In some examples, the audio processing script and the voice data model for high priority or high frequency use may be preloaded sequentially according to the access frequency or priority order of the resources, ensuring that voice initialization and processing can be completed at the fastest speed when the game enters the voice function.
In order to ensure that the preloaded resource files can subsequently be located and called quickly, this embodiment provides a cache management module. After resource preloading is completed, the cache management module generates a corresponding cache record for each loaded resource file, including a resource identifier, cache address, loading time, resource priority, and the like.
The cache records are stored in a special cache registry and correspond to the directory structure or file index of the game speech tool. When the related resources of voice processing are required to be called later, the system can quickly position the resource files in the memory through the registry without reading from the disk again, so that the resource loading delay is reduced.
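A minimal sketch of such a cache registry is given below; the fields mirror the record contents described above (resource identifier, cache address, loading time, priority), while the class and method names are illustrative assumptions.

```python
import time
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class CacheRecord:
    resource_id: str
    buffer: bytes                 # stands in for the in-memory "cache address"
    priority: int                 # higher value = kept longer and served first
    loaded_at: float = field(default_factory=time.monotonic)

class CacheRegistry:
    def __init__(self) -> None:
        self._records: Dict[str, CacheRecord] = {}

    def register(self, resource_id: str, data: bytes, priority: int) -> None:
        """Record a preloaded resource so later lookups avoid another disk read."""
        self._records[resource_id] = CacheRecord(resource_id, data, priority)

    def lookup(self, resource_id: str) -> Optional[bytes]:
        rec = self._records.get(resource_id)
        return rec.buffer if rec else None

registry = CacheRegistry()
registry.register("audio_codec", b"...codec blob...", priority=3)
assert registry.lookup("audio_codec") is not None
```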
When the game formally enters the running state, the embodiment continuously runs a monitoring component in the game background. The monitoring component collects operation data including CPU load, memory occupation, network bandwidth and network delay in real time in a preset sampling period or event triggering mode.
When the multi-user voice function is active, the monitoring component can additionally acquire the transmission rate of voice data packets, the audio decoding processing time length and the like, and record the voice data packets and the universal system resource use information.
In this embodiment, the running data collected by the monitoring component is transmitted to the scheduling control unit over the network or through inter-process communication. The scheduling control unit is responsible for analyzing and comparing the received running data and determining whether the current system exhibits excessive CPU load, memory pressure, insufficient network bandwidth, or excessive network delay.
Once the running data deviates noticeably from a preset threshold, the scheduling control unit can dynamically adjust the bytecode compilation strategy, the CPU core allocation, and subsequent resource scheduling in real time or in stages according to the running data. For example, when it is detected that the memory occupied by speech processing is about to exceed the set memory threshold, the scheduling control unit may instruct the cache management module to execute a cache cleanup or resource degradation mechanism in the background, releasing part of the memory to other critical tasks.
In some examples, if the game speech tool detects in the background that most of the speech processing scripts are not actually invoked, the cache management module may remove the unused resources with too low frequency from the memory without affecting the smoothness of the speech functions, so as to save system resources.
If the player switches to a scene unrelated to the voice tool during the game, the system can record, through the monitoring component, the access frequency of the resources related to the voice function in that scene and dynamically lower their priority in the scheduling control unit, so that they become candidates for cleanup or eviction when memory runs short, thereby safeguarding the performance of other key game functions.
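Purely as an illustrative sketch of such a cleanup pass, the code below evicts the lowest-priority, least-recently-accessed entries first while sparing a voice-critical tier; the priority values, the tier boundary, and the names are assumptions, not values from this application.

```python
from dataclasses import dataclass

@dataclass
class CachedEntry:
    resource_id: str
    size_bytes: int
    priority: int        # higher = more important; the voice-critical tier sits at the top
    last_access: float   # monotonic timestamp of the most recent call

def evict_under_pressure(entries, bytes_to_free, protected_priority=90):
    """Pick eviction victims: lowest priority first, then least recently accessed."""
    victims, freed = [], 0
    for e in sorted(entries, key=lambda e: (e.priority, e.last_access)):
        if freed >= bytes_to_free:
            break
        if e.priority >= protected_priority:   # never evict the voice-critical tier here
            continue
        victims.append(e.resource_id)
        freed += e.size_bytes
    return victims

entries = [CachedEntry("lobby_music", 4_000_000, 10, 12.0),
           CachedEntry("voice_buffer", 1_000_000, 95, 30.0),
           CachedEntry("old_scene_fx", 2_000_000, 5, 3.0)]
print(evict_under_pressure(entries, bytes_to_free=5_000_000))  # ['old_scene_fx', 'lobby_music']
```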
In summary, by preferentially preloading and caching and registering the game voice tool resource files and dynamically adjusting the system resource allocation by combining the game running data collected in real time, the embodiment effectively reduces the resource calling delay and the system load risk under the multi-person voice, and provides smoother game voice communication experience for players.
In some embodiments, dynamically adjusting a bytecode compilation strategy based on operational data, when it is monitored that a multi-person speech team function is on and a CPU load reaches a preset condition, reducing execution priority of a bytecode compilation task, including:
Acquiring operation data by using a monitoring component, wherein the operation data comprises monitoring information for indicating the starting state of a multi-person voice team function and the load level of a CPU;
transmitting the monitoring information to a compiling management module to determine the current execution priority of the byte code compiling task;
when the compiling management module recognizes that the multi-person voice team forming function is in an on state and the CPU load meets the preset condition, a compiling priority adjusting instruction is sent to the dispatching control unit;
And updating and adjusting the byte code compiling strategy according to the compiling priority adjusting instruction so as to reduce the executing priority of the byte code compiling task.
Specifically, first, the game client starts a lightweight background monitoring component in the running process, and the lightweight background monitoring component is used for collecting and recording system running data in real time. The operation data at least comprises the current load level of the CPU, the memory occupancy rate, the network bandwidth use condition, the network delay and the like. In addition, the monitoring component also monitors the state information of the game client in real time, for example, whether the multi-person voice team forming function is started or not, packages the information into a data message in a unified format, and uploads the data message to the compiling management module periodically or in real time.
Further, after the compiling management module receives the operation data, CPU load information and multi-user voice team function state information in the data message are analyzed and compared. For example, the compiling management module is preset with a plurality of CPU load thresholds, and the thresholds can be dynamically defined according to different game running scenes (such as the number of online players, the game progress state or the voice communication flow size). When the compiling management module recognizes that the current CPU load level exceeds a preset load threshold corresponding to the multi-person voice team forming function, the current system is judged to be in a high load state and the multi-person voice function is judged to be in an on state.
After judging that the conditions are met, the compiling management module automatically sends a compiling priority adjusting instruction to the dispatching control unit. Specifically, the compiling priority adjustment instruction may include an explicit compiling task priority setting requirement, for example, the priority of the byte code compiling task is reduced from the original higher level to a preset lower level, so as to ensure that more CPU resources can be released for the multi-person speech processing task and other key tasks of the game under the current high-load scenario, thereby avoiding performance bottlenecks caused by resource competition.
After receiving the compilation priority adjustment instruction sent by the compilation management module, the scheduling control unit updates and adjusts, in real time, the bytecode compilation strategy currently executed by the game client based on the instruction. For example, the scheduling control unit in this embodiment adopts a hierarchical priority model to cope with the different load scenarios arising while the game runs. When the system load is low (for example, only a few players are online), the priority of the bytecode compilation task can be raised automatically so that compilation finishes quickly, improving the overall responsiveness of the game. When multiplayer voice team-up is on and the system is under higher load, the scheduling control unit automatically lowers the priority of the bytecode compilation task and defers the frequency and order of compilation operations, thereby ensuring a sufficient supply of the resources required by the voice processing task and avoiding stuttering during game operation.
Further, to achieve finer management of different scene loads, the present embodiment may also set a dynamically adjusted priority range. For example, the priorities of the compiling tasks can be divided into a plurality of levels (such as three levels of high, medium and low or more), and the scheduling control unit can analyze the respective resource requirements in real time according to the operation data, so as to flexibly switch between different priorities. When the running data indicates that the CPU load level is lower than a preset threshold value again or the multi-person voice team forming function is closed, the scheduling control unit can restore or improve the execution priority of the compiling task in real time so as to efficiently complete the residual compiling operation.
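A minimal sketch of such a hierarchical priority model follows; the three tiers and the load thresholds are illustrative assumptions rather than values prescribed by this application.

```python
from enum import IntEnum

class CompilePriority(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2

def compile_priority(cpu_load: float, voice_party_on: bool) -> CompilePriority:
    """Map the current running data to the bytecode compilation task's priority."""
    if voice_party_on and cpu_load >= 0.85:
        return CompilePriority.LOW      # free CPU time for voice processing and key tasks
    if cpu_load >= 0.60:
        return CompilePriority.MEDIUM
    return CompilePriority.HIGH         # lightly loaded system: finish compilation quickly

# Voice team-up with a busy CPU demotes compilation; an idle system promotes it.
assert compile_priority(0.90, True) is CompilePriority.LOW
assert compile_priority(0.30, False) is CompilePriority.HIGH
```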
In this embodiment, by monitoring the running state of the game and dynamically adjusting the priority of the bytecode compilation task in real time, CPU resources are managed more finely and dynamically in a multiplayer voice team-up scene. Compared with the static configuration approach of the prior art, this is more flexible, markedly reduces the probability of stuttering under high game load, and improves the overall player experience.
In some embodiments, the allocation of CPU cores based on the operational data is dynamically scheduled, high priority tasks are allocated to CPU cores with loads below a preset load threshold for execution, and other tasks are allocated to the remaining CPU cores, including:
Comparing and analyzing the load level of the CPU cores by utilizing a scheduling control unit, and determining the CPU cores with the load lower than a preset load threshold;
and distributing the tasks with high priority to CPU cores with loads lower than a preset load threshold for execution, and distributing other tasks except the tasks with high priority to the rest CPU cores.
Specifically, first, the game client monitors and collects the running state information of each CPU core, such as the load level, task queuing, and idle degree of each CPU core, in real time by using the monitoring component, and periodically uploads the real-time running data to the scheduling control unit. The real-time running data can be obtained by real-time statistics of indexes such as the number of tasks running on each CPU core, the thread occupancy rate, the CPU clock period occupancy proportion and the like.
Then, the scheduling control unit performs comparative analysis on the load condition of each CPU core based on the received real-time operation data. For example, a CPU core load threshold is preset in the scheduling control unit, and the threshold can be determined in advance based on experience data or test data and can be flexibly adjusted according to the actual situation when the game runs. At each load comparison analysis, the scheduling control unit identifies and determines a set of CPU cores with load levels below the preset threshold, which are considered as currently available CPU cores capable of preferentially assigning critical tasks.
After determining the CPU core set with lower load, the dispatching control unit executes a dynamic task allocation strategy. For example, the scheduling control unit divides the tasks into high priority tasks and other common tasks according to the importance of the game tasks and real-time requirements. The high-priority tasks comprise tasks which are sensitive to delay and have high resource requirements, such as a multi-person voice processing task, a real-time network data processing task and the like. The scheduling control unit preferentially distributes the high-priority tasks to the identified CPU core set with lower load for execution, so that the key tasks can obtain sufficient computing resources, and task execution delay caused by resource shortage or competition is avoided.
Accordingly, the scheduling control unit distributes other tasks in the game, such as an image rendering task, a physical simulation task, a background data processing task and other common tasks which are relatively insensitive to delay or relatively stable in resource demand, to other CPU cores with relatively high loads or close to a threshold value, so that the balanced distribution of resources is ensured, and the single CPU core is prevented from being in a high-load running state for a long time.
Further, in order to more effectively implement dynamic load balancing and task scheduling of the CPU core, the scheduling control unit in this embodiment may employ an adaptive load balancing algorithm. Specifically, the algorithm continuously tracks the real-time load change of each CPU core, when the system monitors that the load of one CPU core rises above a preset threshold or is obviously higher than that of other cores, the scheduling control unit automatically triggers a task redistribution mechanism to migrate part of tasks on the core to other CPU cores with lower loads for execution, so that the load balance among the CPU cores is maintained in real time.
In addition, the present embodiment may also configure CPU core preferences of different types of tasks, for example, multi-person voice and real-time network communication tasks may be preferentially allocated to a specific low-load core or a specific CPU core with better performance (such as a core with a higher main frequency), so as to further improve the execution efficiency and response speed of a high-priority task. Meanwhile, for computationally intensive tasks such as image rendering or physical simulation, the tasks can be dynamically and evenly distributed to a plurality of CPU cores, so that the multi-core parallel processing capability of the CPU is fully utilized, and the overall resource utilization efficiency of the system is maximized.
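The following simplified sketch illustrates the allocation rule: high-priority tasks are sent to cores whose load is below the threshold, and the remaining tasks fill the other cores; the round-robin fallback and all names here are assumptions made for the example.

```python
def assign_tasks(core_loads, tasks, load_threshold=0.5):
    """tasks: list of (task_name, is_high_priority). Returns a task -> core-index map."""
    light = [i for i, load in enumerate(core_loads) if load < load_threshold]
    heavy = [i for i in range(len(core_loads)) if i not in light]
    assignment, li, hi = {}, 0, 0
    for name, high_prio in tasks:
        if high_prio and light:
            assignment[name] = light[li % len(light)]   # lightly loaded core for key tasks
            li += 1
        else:
            pool = heavy or light                       # fall back if one pool is empty
            assignment[name] = pool[hi % len(pool)]
            hi += 1
    return assignment

# Example: the delay-sensitive voice task lands on the idle core 2,
# while physics and rendering share the busier cores 0 and 1.
print(assign_tasks([0.9, 0.7, 0.2],
                   [("voice_decode", True), ("physics", False), ("render", False)]))
```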
In this embodiment, by collecting CPU core load state information in real time and dynamically scheduling and optimizing the CPU core assignment of tasks based on a load threshold, the response speed and running efficiency of key tasks in a multiplayer voice team-up scene are significantly improved, game stuttering caused by overloading a single core is effectively prevented, and the stability of game operation and the player experience are greatly improved.
In some embodiments, scheduling and intelligent cache management are performed on resource files according to operation data, priority caching or background loading is performed on frequently accessed resource files, and resource files required by voice processing are preferentially ensured in a multi-user voice group scene, including:
analyzing the resource access frequency, identifying target resources exceeding a preset access threshold and marking the target resources as high-priority resources;
Loading the high-priority resource into a preset memory cache area, or preferentially executing the loading operation of the high-priority resource in a background process;
when the multi-user voice team formation function is monitored to be in an on state, the scheduling priority of resources required by voice processing is improved;
And continuously monitoring the operation data, and when the resource access frequency or the multi-user voice team status is detected to change, updating the marked loading and caching strategy of the high-priority resource or the voice processing resource in real time.
Specifically, firstly, the system collects access frequency information of each resource file in the game running process in real time through the monitoring component so as to determine the actual use condition of each resource in the game process. The monitoring component continuously tracks, counts and records the access frequency information of the resources, and periodically or in real time uploads the data to the resource scheduling module for further analysis and processing.
After the resource scheduling module obtains the access frequency data, the resource scheduling module identifies target resources with the access frequency exceeding a preset threshold value by comparing the access frequency data with the preset access frequency threshold value. These identified target resources are typically frequently invoked resources during the game, such as image files for high frequency use, background music files, core game model files, and other key sound effects. The resource scheduling module then marks these resources as high priority resources, to distinguish them from other common resources that are accessed less frequently.
After determining the high priority resources, the resource scheduling module immediately executes the corresponding cache or background loading policy. Specifically, the high-priority resources can be loaded into a predefined memory cache area in advance to ensure that the resources can be accessed quickly and directly in the subsequent running stage of the game, or a special background loading process is started to preferentially execute the loading and caching operation of the high-priority resources, so that the real-time calling delay of the resources in the running process of the game is obviously shortened.
Further, this embodiment focuses on the resource allocation requirement after the multi-person voice team function is turned on. When the monitoring component detects that the multi-user voice team forming function of the game client is in an on state in real time, the resource scheduling module correspondingly improves the scheduling priority of related voice processing resources, preferentially allocates key resources such as memory and bandwidth for the voice processing task, and ensures that the voice data processing task is supported by sufficient resources. In particular, resources (such as an audio codec module and a voice data buffer zone) required by voice processing are dynamically set to be highest priority in a cache register table, and are preferentially allocated, preferentially loaded and preferentially called under a resource competition scene, so that real-time communication quality and game experience smoothness of a multi-user voice team are ensured.
Meanwhile, the resource scheduling module in the embodiment continuously monitors and analyzes real-time operation data, including changes of resource access frequency, changes of the states of the multi-user voice team functions and the like. When the system monitors that the access frequency of some resources is obviously increased or reduced, or the multi-user voice team status is switched from on to off or vice versa, the resource scheduling module immediately adjusts the caching and loading strategy of the marked high-priority resources or voice processing resources in real time. For example, when the multi-person speech formation function is turned off, the speech processing resources that were originally marked as high priority will automatically adjust to normal or lower priority to free up the occupied buffer space and bandwidth resources for other tasks.
Furthermore, in an actual application scene, the embodiment can also cooperate with an adaptive cache management algorithm to realize more flexible resource management. For example, the resource scheduling module can adopt an adaptive priority management strategy to continuously monitor the real-time calling condition of each resource, and periodically or dynamically adjust the priority of the resource and the caching strategy in real time, so that the most frequently called resource in the current stage of the game is always stored in the cache, thereby fully improving the effective utilization rate of the memory and the overall performance of the system.
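As a non-limiting illustration, the sketch below refreshes resource priorities from access counts and boosts voice-related resources while the team-up function is on; the tier values and the hot-access threshold are assumptions.

```python
def refresh_priorities(access_counts, voice_resources, voice_party_on, hot_threshold=100):
    """Return a resource -> priority map (0 = lowest, 3 = voice-critical tier)."""
    priorities = {}
    for res, hits in access_counts.items():
        prio = 2 if hits >= hot_threshold else 1 if hits > 0 else 0
        if voice_party_on and res in voice_resources:
            prio = 3                     # highest tier while the voice team-up is active
        priorities[res] = prio
    return priorities

counts = {"audio_codec": 250, "lobby_music": 5, "voice_buffer": 40}
print(refresh_priorities(counts, {"audio_codec", "voice_buffer"}, voice_party_on=True))
# -> {'audio_codec': 3, 'lobby_music': 1, 'voice_buffer': 3}
```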
In this embodiment, by analyzing the game resource access frequency and the state of the multiplayer voice team-up scene in real time and dynamically adjusting the loading and caching priority of resources, fine-grained control of resource allocation and memory management is achieved, the stuttering caused by resource contention in multiplayer voice scenes is effectively reduced, and the fluency of game operation and the quality of the user experience are significantly improved.
In some embodiments, when it is monitored that the network bandwidth or the network delay reaches a preset condition, adjusting the data transmission policy of the multi-person voice based on the operation data to reduce the voice data quality or increase the audio compression rate includes:
The monitoring component is used for collecting network bandwidth and network delay information, and the network bandwidth and the network delay information are compared and analyzed in the scheduling control unit to judge whether a preset condition is met;
When the network bandwidth or the network delay is judged to meet the preset condition, a data transmission strategy updating instruction is sent to the voice transmission management module;
according to the data transmission strategy updating instruction, adjusting parameter configuration of multi-user voice transmission, setting voice data quality to a predefined low level, or improving audio compression rate;
After the data transmission strategy is updated, the network bandwidth and the network delay information are continuously monitored, and when the states of the network bandwidth and the network delay information change, the dynamic adjustment of the data transmission strategy is triggered.
Specifically, the game client continuously collects data related to the current network state through a monitoring component running in the background, and specifically includes real-time network performance indexes such as network bandwidth utilization rate, network delay time length, network packet loss rate and the like. The monitoring component periodically or in real time acquires and records the network performance data through interaction with the network interface and uploads the operation data to the scheduling control unit.
The scheduling control unit receives the network performance data uploaded by the monitoring component in real time, and compares the network performance data with a preset network performance threshold value in the system in real time for analysis. For example, the network performance threshold includes a lowest bandwidth threshold and a highest network delay threshold, and when the network bandwidth falls below the lowest threshold or the network delay rises above the highest threshold, the scheduling control unit determines that the current network performance reaches a condition that triggers an adjustment of the voice transmission policy.
When the network performance is judged to meet the preset condition, the scheduling control unit automatically sends a data transmission strategy updating instruction to the voice transmission management module. The update instructions contain explicit policy adjustment information, such as explicitly instructing the voice transmission management module to reduce the audio quality level employed for the current voice data transmission to a predefined lower level or to increase the compression rate of the audio data to a predefined higher level to reduce the demand of the voice data for network bandwidth and system CPU resources.
After receiving the update instruction, the voice transmission management module adjusts the parameter configuration of voice data transmission in real time according to the instruction requirement, for example, the voice data sampling rate is automatically reduced from the original higher sampling rate (such as 48kHz or 44.1 kHz) to the lower sampling rate (such as 24kHz or 16 kHz), or the audio coding mode is switched to the audio coding and decoding format with higher compression efficiency, thereby obviously reducing the network bandwidth and CPU calculation burden required by multi-person voice transmission, and ensuring that the voice communication can still keep basic fluency when the network condition is worsened.
Further, after the voice data transmission policy is updated, the monitoring component continues to monitor the real-time performance indexes such as network bandwidth, network delay and the like. When the system detects that the network performance condition is recovered to be better than the preset threshold value or other significant changes occur, the scheduling control unit triggers dynamic strategy adjustment again, for example, the voice data is recovered to the original higher quality level, or the audio compression rate is reduced again, so as to improve the voice communication experience of the user.
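Purely for illustration, the sketch below selects a transmission profile from the monitored bandwidth and delay; the sample rates echo the example values mentioned above, while the codec labels, bitrates, and thresholds are assumptions.

```python
def pick_voice_profile(bandwidth_kbps: float, latency_ms: float) -> dict:
    """Choose voice transmission parameters for the current network conditions."""
    if bandwidth_kbps < 32 or latency_ms > 300:
        return {"sample_rate_hz": 16000, "codec": "high-compression", "bitrate_kbps": 12}
    if bandwidth_kbps < 64 or latency_ms > 150:
        return {"sample_rate_hz": 24000, "codec": "high-compression", "bitrate_kbps": 24}
    return {"sample_rate_hz": 48000, "codec": "standard", "bitrate_kbps": 64}

# The scheduling control unit would re-evaluate this whenever the monitoring
# component reports that bandwidth or delay has crossed a threshold.
print(pick_voice_profile(bandwidth_kbps=40, latency_ms=220))  # degraded profile
```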
In an actual application scenario, the dynamic adjustment mechanism of the embodiment may be further integrated into a wider resource adaptive scheduling framework. Specifically, the game client continuously runs a resource monitoring module after being started, and the module is responsible for comprehensively monitoring multi-dimensional performance data such as CPU load, memory occupation, network state and the like and transmitting the multi-dimensional performance data to the scheduling decision module in real time. The scheduling decision module performs intelligent analysis and decision according to the real-time performance data and a predefined strategy threshold value, and determines various resource scheduling strategies including a voice data transmission strategy, a byte code compiling strategy, a CPU core allocation strategy and a resource caching strategy.
The dynamic adjustment module then dynamically implements resource scheduling and policy adjustment in real time based on the decision results of the scheduling decision module. For example, when the multi-user voice team function is started, if the network bandwidth is monitored to be reduced or the network delay is monitored to be increased, the dynamic adjustment module rapidly reduces the voice transmission quality to release the bandwidth and reduce the load of the CPU, and meanwhile, the CPU core allocation and the byte code compiling task priority are correspondingly linked and optimized to ensure the fluency of the whole game operation.
According to the embodiment, through monitoring the network performance in real time and dynamically adjusting the multi-user voice data transmission strategy and integrating the multi-user voice data transmission strategy into the resource self-adaptive scheduling framework, the fine and dynamic management of resource allocation under the change of network conditions is realized, and the game performance and the user experience under the scene of multi-user voice team formation are remarkably improved.
The following are examples of the apparatus of the present application that may be used to perform the method embodiments of the present application. For details not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the method of the present application.
Fig. 2 is a schematic structural diagram of a game performance optimizing device based on dynamic scheduling according to an embodiment of the present application. As shown in fig. 2, the game performance optimizing apparatus based on dynamic scheduling includes:
A precompiled module 201, configured to precompiled source codes of the game voice tool, and convert the source codes into byte codes;
the preloading module 202 is configured to preload a resource file of the game voice tool, cache the preloaded resource file in the memory, and collect operation data in real time during the game operation process;
the adjusting module 203 is configured to dynamically adjust a byte code compiling policy based on the operation data, and reduce an execution priority of a byte code compiling task when it is monitored that the multi-user voice team function is turned on and the CPU load reaches a preset condition;
The dynamic scheduling module 204 is configured to dynamically schedule allocation of CPU cores based on the operation data, allocate a high-priority task to a CPU core with a load lower than a preset load threshold for execution, and allocate other tasks to the remaining CPU cores;
The buffer module 205 is configured to schedule and intelligently buffer and manage resource files according to the operation data, perform priority buffer or background loading on frequently accessed resource files, and preferentially guarantee resource files required by voice processing in a multi-user voice team scene;
the data transmission module 206 is configured to adjust a data transmission policy of the multi-user voice based on the operation data when it is monitored that the network bandwidth or the network delay reaches a preset condition, so as to reduce the voice data quality or increase the audio compression rate.
In some embodiments, the pre-compiling module 201 of fig. 2 reads the source code of the game voice tool and parses the source code to generate an intermediate representation required for compiling during an installation or start-up phase of the game voice tool, performs a compiling operation in a preset compiling environment based on the intermediate representation to generate a corresponding bytecode file, stores the bytecode file in a storage location corresponding to the game voice tool, and executes a corresponding function module by calling the bytecode file when the game voice tool is running.
In some embodiments, the preloading module 202 of fig. 2 identifies resource files to be loaded preferentially according to resource configuration information and loads relevant data of the resource files into a memory in an installation or starting stage of the game voice tool, registers the preloaded resource files in a cache by using a cache management module and stores the resource files in a preset memory buffer area, collects operation data including CPU load, memory occupation, network bandwidth and network delay by using a monitoring component after the game enters an operation state, and uploads the operation data to a scheduling control unit for dynamically adjusting a byte code compiling strategy, CPU core allocation and subsequent resource scheduling.
In some embodiments, the adjustment module 203 of fig. 2 obtains operation data by using a monitoring component, where the operation data includes monitoring information for indicating an on state of the multi-person voice group function and a CPU load level, transmits the monitoring information to the compilation management module to determine a current execution priority of the byte code compilation task, sends a compilation priority adjustment instruction to the dispatch control unit when the compilation management module recognizes that the multi-person voice group function is on and the CPU load has met a preset condition, and updates and adjusts the byte code compilation strategy according to the compilation priority adjustment instruction to reduce the execution priority of the byte code compilation task.
In some embodiments, the dynamic scheduling module 204 of FIG. 2 utilizes a scheduling control unit to compare and analyze the load levels of the CPU cores to determine CPU cores with loads below a preset load threshold, and assigns high priority tasks to CPU cores with loads below the preset load threshold for execution, and assigns other tasks than the high priority tasks to the remaining CPU cores.
In some embodiments, the caching module 205 of fig. 2 analyzes the resource access frequency, identifies a target resource exceeding a preset access threshold and marks the target resource as a high priority resource, loads the high priority resource into a preset memory cache area, or preferentially executes loading operation of the high priority resource in a background process, when the multi-user voice group function is monitored to be in an on state, improves the scheduling priority of the resource required by voice processing, continuously monitors operation data, and when the resource access frequency or the multi-user voice group state is detected to change, updates the loading and caching strategy of the marked high priority resource or voice processing resource in real time.
In some embodiments, the data transmission module 206 of fig. 2 collects network bandwidth and network delay information by using a monitoring component, compares the network bandwidth and the network delay information in a scheduling control unit, determines whether a preset condition is reached, sends a data transmission policy update instruction to a voice transmission management module when it is determined that the network bandwidth or the network delay meets the preset condition, adjusts parameter configuration of multi-user voice transmission according to the data transmission policy update instruction, sets voice data quality to a predefined low level or increases audio compression rate, continues to monitor the network bandwidth and the network delay information after the data transmission policy update is completed, and triggers dynamic adjustment of the data transmission policy when the states of the network bandwidth and the network delay information change.
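The transmission-policy switch amounts to comparing the sampled network figures with the preset limits; the bandwidth and latency limits and the bitrate levels below are illustrative assumptions, not parameters disclosed in the embodiment:

```python
# Illustrative sketch: pick a voice transmission policy from the current
# network measurements, falling back to low quality / high compression when
# bandwidth or latency degrades.
from dataclasses import dataclass

MIN_BANDWIDTH_KBPS = 256.0    # assumed preset bandwidth condition
MAX_LATENCY_MS = 150.0        # assumed preset latency condition

@dataclass
class VoicePolicy:
    bitrate_kbps: int = 64        # normal voice quality
    compression_level: int = 5    # moderate audio compression

LOW_QUALITY = VoicePolicy(bitrate_kbps=16, compression_level=9)

def select_policy(bandwidth_kbps: float, latency_ms: float) -> VoicePolicy:
    """Return the policy the voice transmission management module should apply."""
    degraded = bandwidth_kbps < MIN_BANDWIDTH_KBPS or latency_ms > MAX_LATENCY_MS
    return LOW_QUALITY if degraded else VoicePolicy()
```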
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Fig. 3 is a schematic structural diagram of an electronic device 3 according to an embodiment of the present application. As shown in fig. 3, the electronic device 3 of this embodiment comprises a processor 301, a memory 302 and a computer program 303 stored in the memory 302 and executable on the processor 301. When the processor 301 executes the computer program 303, the steps of the various method embodiments described above are implemented; alternatively, when executing the computer program 303, the processor 301 implements the functions of the modules/units in the above-described apparatus embodiments.
Illustratively, the computer program 303 may be divided into one or more modules/units, which are stored in the memory 302 and executed by the processor 301 to implement the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program 303 in the electronic device 3.
The electronic device 3 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or another electronic device. The electronic device 3 may include, but is not limited to, the processor 301 and the memory 302. It will be appreciated by those skilled in the art that fig. 3 is merely an example of the electronic device 3 and does not constitute a limitation of the electronic device 3; the electronic device 3 may include more or fewer components than shown, or combine certain components, or have different components, and may, for example, also include an input-output device, a network access device, a bus, etc.
The processor 301 may be a Central Processing Unit (CPU) or other general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 302 may be an internal storage unit of the electronic device 3, for example, a hard disk or a memory of the electronic device 3. The memory 302 may also be an external storage device of the electronic device 3, for example, a plug-in hard disk provided on the electronic device 3, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, or the like. Further, the memory 302 may also include both an internal storage unit and an external storage device of the electronic device 3. The memory 302 is used to store computer programs and other programs and data required by the electronic device. The memory 302 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided by the present application, it should be understood that the disclosed apparatus/computer device and method may be implemented in other manners. For example, the apparatus/computer device embodiments described above are merely illustrative, e.g., the division of modules or elements is merely a logical functional division, and there may be additional divisions of actual implementations, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program may implement the steps of each of the method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, an executable file or some intermediate form, etc. The computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
The foregoing embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications or replacements do not make the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall be included in the protection scope of the present application.

Claims (10)

Translated from Chinese
1. A method for optimizing game performance based on dynamic scheduling, comprising:
precompiling the source code of a game voice tool and converting the source code into bytecode;
preloading the resource files of the game voice tool, caching the preloaded resource files in memory, and collecting running data in real time while the game is running;
dynamically adjusting the bytecode compilation strategy based on the running data, and, when it is detected that the multi-person voice teaming function is turned on and the CPU load reaches a preset condition, lowering the execution priority of the bytecode compilation task;
dynamically scheduling the allocation of CPU cores based on the running data, allocating high-priority tasks to CPU cores with loads below a preset load threshold for execution, and allocating other tasks to the remaining CPU cores;
scheduling and intelligently caching the resource files according to the running data, preferentially caching or background-loading frequently accessed resource files, and preferentially guaranteeing the resource files required for voice processing in multi-person voice teaming scenarios;
when it is monitored that the network bandwidth or network delay reaches a preset condition, adjusting the data transmission strategy of the multi-person voice based on the running data to reduce the voice data quality or increase the audio compression rate.

2. The method according to claim 1, wherein precompiling the source code of the game voice tool and converting the source code into bytecode comprises:
during the installation or startup phase of the game voice tool, reading the source code of the game voice tool and parsing the source code to generate an intermediate representation required for compilation;
performing, based on the intermediate representation, a compilation operation in a preset compilation environment to generate a corresponding bytecode file;
storing the bytecode file in a storage location corresponding to the game voice tool, and executing the corresponding function modules by calling the bytecode file when the game voice tool is running.

3. The method according to claim 1, wherein preloading the resource files of the game voice tool, caching the preloaded resource files in memory, and collecting running data in real time while the game is running comprises:
during the installation or startup phase of the game voice tool, identifying, according to resource configuration information, the resource files that need to be loaded preferentially, and loading the relevant data of the resource files into memory;
registering the preloaded resource files in a cache by means of a cache management module, and storing them in a preset memory buffer area;
after the game enters the running state, collecting, by means of a monitoring component, running data including CPU load, memory usage, network bandwidth and network delay;
uploading the running data to a scheduling control unit for dynamic adjustment of the bytecode compilation strategy, the CPU core allocation and subsequent resource scheduling.

4. The method according to claim 1, wherein dynamically adjusting the bytecode compilation strategy based on the running data and lowering the execution priority of the bytecode compilation task when it is detected that the multi-person voice teaming function is turned on and the CPU load reaches the preset condition comprises:
acquiring the running data by means of a monitoring component, wherein the running data includes monitoring information indicating the on state of the multi-person voice teaming function and the CPU load level;
transmitting the monitoring information to a compilation management module to determine the current execution priority of the bytecode compilation task;
sending a compilation priority adjustment instruction to a scheduling control unit when the compilation management module recognizes that the multi-person voice teaming function is on and the CPU load has met the preset condition;
updating and adjusting the bytecode compilation strategy according to the compilation priority adjustment instruction, so as to lower the execution priority of the bytecode compilation task.

5. The method according to claim 1, wherein dynamically scheduling the allocation of CPU cores based on the running data, allocating high-priority tasks to CPU cores with loads below the preset load threshold for execution, and allocating other tasks to the remaining CPU cores comprises:
comparing and analyzing the load levels of the CPU cores by means of a scheduling control unit to determine the CPU cores whose load is below the preset load threshold;
allocating the high-priority tasks to the CPU cores whose load is below the preset load threshold for execution, and allocating the tasks other than the high-priority tasks to the remaining CPU cores.

6. The method according to claim 1, wherein scheduling and intelligently caching the resource files according to the running data, preferentially caching or background-loading frequently accessed resource files, and preferentially guaranteeing the resource files required for voice processing in multi-person voice teaming scenarios comprises:
analyzing the resource access frequency, identifying target resources that exceed a preset access threshold, and marking them as high-priority resources;
loading the high-priority resources into a preset memory cache area, or preferentially executing the loading operation of the high-priority resources in a background process;
raising the scheduling priority of the resources required for voice processing when it is detected that the multi-person voice teaming function is on;
continuously monitoring the running data, and updating in real time the loading and caching strategy of the marked high-priority resources or voice processing resources when a change in the resource access frequency or the multi-person voice teaming state is detected.

7. The method according to claim 1, wherein adjusting the data transmission strategy of the multi-person voice based on the running data to reduce the voice data quality or increase the audio compression rate when it is monitored that the network bandwidth or network delay reaches the preset condition comprises:
collecting network bandwidth and network delay information by means of a monitoring component, and comparing and analyzing the network bandwidth and network delay information in a scheduling control unit to determine whether the preset condition is reached;
sending a data transmission strategy update instruction to a voice transmission management module when it is determined that the network bandwidth or network delay meets the preset condition;
adjusting the parameter configuration of the multi-person voice transmission according to the data transmission strategy update instruction, setting the voice data quality to a predefined low level, or increasing the audio compression rate;
after the data transmission strategy update is completed, continuing to monitor the network bandwidth and network delay information, and triggering dynamic adjustment of the data transmission strategy when the state of the network bandwidth and network delay information changes.

8. A game performance optimization apparatus based on dynamic scheduling, comprising:
a pre-compilation module, configured to precompile the source code of a game voice tool and convert the source code into bytecode;
a preloading module, configured to preload the resource files of the game voice tool, cache the preloaded resource files in memory, and collect running data in real time while the game is running;
an adjustment module, configured to dynamically adjust the bytecode compilation strategy based on the running data and, when it is detected that the multi-person voice teaming function is turned on and the CPU load reaches a preset condition, lower the execution priority of the bytecode compilation task;
a dynamic scheduling module, configured to dynamically schedule the allocation of CPU cores based on the running data, allocate high-priority tasks to CPU cores with loads below a preset load threshold for execution, and allocate other tasks to the remaining CPU cores;
a caching module, configured to schedule and intelligently cache the resource files according to the running data, preferentially cache or background-load frequently accessed resource files, and preferentially guarantee the resource files required for voice processing in multi-person voice teaming scenarios;
a data transmission module, configured to adjust, when it is monitored that the network bandwidth or network delay reaches a preset condition, the data transmission strategy of the multi-person voice based on the running data, so as to reduce the voice data quality or increase the audio compression rate.

9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method as claimed in any one of claims 1 to 7 when executing the computer program.

10. A computer-readable storage medium storing a computer program, wherein the computer program implements the steps of the method according to any one of claims 1 to 7 when executed by a processor.
CN202510520204.6A | 2025-04-24 | 2025-04-24 | Game performance optimization method and device based on dynamic scheduling and storage medium | Pending | CN120045336A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202510520204.6A CN120045336A (en) | 2025-04-24 | 2025-04-24 | Game performance optimization method and device based on dynamic scheduling and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202510520204.6A CN120045336A (en) | 2025-04-24 | 2025-04-24 | Game performance optimization method and device based on dynamic scheduling and storage medium

Publications (1)

Publication Number | Publication Date
CN120045336A true | 2025-05-27

Family

ID=95759774

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202510520204.6A Pending CN120045336A (en) | 2025-04-24 | 2025-04-24 | Game performance optimization method and device based on dynamic scheduling and storage medium

Country Status (1)

Country | Link
CN (1) | CN120045336A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20190308099A1 (en) * | 2018-04-10 | 2019-10-10 | Google Llc | Memory Management in Gaming Rendering
CN117695617A (en) * | 2023-11-30 | 2024-03-15 | 北京蔚领时代科技有限公司 | Cloud game data optimization processing method and device
CN117654056A (en) * | 2023-12-12 | 2024-03-08 | 天翼视讯传媒有限公司 | Cloud game optimization method and system based on game big data feedback
CN119088199A (en) * | 2024-08-28 | 2024-12-06 | 四川酷比通信设备有限公司 | Intelligent terminal power consumption optimization method, device and terminal based on active and passive identification
CN119135819A (en) * | 2024-09-10 | 2024-12-13 | 大唐终端技术有限公司 | An efficient real-time voice and video transmission method
CN119201284A (en) * | 2024-10-04 | 2024-12-27 | 刘宇 | Python Serverless function cold start optimization system and method
CN119741459A (en) * | 2025-03-06 | 2025-04-01 | 深圳市安之眼科技有限公司 | Integrated AR device performance optimization method and device based on multi-core processor

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN120256063A (en) * | 2025-06-04 | 2025-07-04 | 江苏集萃清联智控科技有限公司 | Real-time task scheduling optimization method, device and system for FreeRTOS

Similar Documents

Publication | Publication Date | Title
CN120045336A (en) | Game performance optimization method and device based on dynamic scheduling and storage medium
KR100996750B1 (en) | Methods, systems, storage media, and processors for determining execution priorities in multithreaded processors
JP2009541848A (en) | Method, system and apparatus for scheduling computer microjobs to run uninterrupted
JP2008505389A (en) | Method, program storage device, and apparatus for automatically adjusting a virtual memory subsystem of a computer operating system
US10877790B2 (en) | Information processing apparatus, control method and storage medium
KR20010009501A (en) | Micro scheduling method and operating system kernel therefrom
US9996470B2 (en) | Workload management in a global recycle queue infrastructure
CN114115702B (en) | Storage control method, storage control device, storage system and storage medium
CN119473564B (en) | Scheduling method and system for Direct IO intensive tasks under NUMA architecture
CN120045337A (en) | Multi-player voice game performance optimization method and device based on dynamic resource management
Bosch et al. | Real-time disk scheduling in a mixed-media file system
WO2025167233A1 (en) | Application compilation method and apparatus, storage medium, and electronic device
CN107820605A (en) | System and method for dynamic low-latency optimization
CN118819825A (en) | Concurrent processing method, concurrent control system, electronic device and storage medium
CN118394523A (en) | Method, device, computer equipment and storage medium for improving resource allocation
WO2025030807A1 (en) | Interrupt request balance method and apparatus, and computing device
CN111107429A (en) | Method, apparatus and computer readable storage medium for improving performance of television system
CN117331654A (en) | System analysis control method and computer system
CN112817769B (en) | Game resource dynamic caching method and device, storage medium and electronic equipment
WO2023226437A1 (en) | Resource scheduling method, apparatus and device
Craciunas et al. | I/O resource management through system call scheduling
WO2022050197A1 (en) | Computer system and computer program
CN115390983A (en) | Hardware resource allocation method, device, equipment and storage medium for virtual machine
JP3839259B2 (en) | Multithread control method, multithread control apparatus, recording medium, and program
CN114510324B (en) | Disk management method and system for KVM virtual machine with ceph volume mounted thereon

Legal Events

Date | Code | Title | Description
PB01 | Publication
PB01 | Publication
SE01 | Entry into force of request for substantive examination
SE01 | Entry into force of request for substantive examination
