CN119415237B - Interrupt scheduling method, device, equipment and computer readable storage medium - Google Patents

Interrupt scheduling method, device, equipment and computer readable storage medium

Info

Publication number
CN119415237B
Authority
CN
China
Prior art keywords
interrupt
target
data
scheduling
system performance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202412000229.1A
Other languages
Chinese (zh)
Other versions
CN119415237A (en)
Inventor
王培辉
王传雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Metabrain Intelligent Technology Co Ltd
Original Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Metabrain Intelligent Technology Co Ltd
Priority to CN202412000229.1A
Publication of CN119415237A
Application granted
Publication of CN119415237B
Legal status: Active (current)
Anticipated expiration

Abstract

The invention discloses an interrupt scheduling method, apparatus, device, and computer-readable storage medium, applied in the technical field of non-uniform memory access. The method comprises: obtaining system load prediction data corresponding to a non-uniform memory access system; dynamically adjusting an interrupt scheduling policy based on the system load prediction data to obtain a target interrupt scheduling policy; determining a target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy; and processing each interrupt with the target interrupt processing device. Compared with conventional scheduling of interrupts under a static policy, the method acquires system load prediction data and dynamically adjusts the interrupt scheduling policy accordingly, so that interrupts are scheduled under the adjusted policy. This makes interrupt scheduling intelligent and dynamic and improves its accuracy.

Description

Interrupt scheduling method, device, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of non-uniform memory access technologies, and in particular, to an interrupt scheduling method, apparatus, device, and computer readable storage medium.
Background
In modern computer systems, particularly multiprocessor systems employing non-uniform memory access (NUMA) architecture, efficient scheduling of hardware interrupts is critical to optimizing system performance. Interrupt scheduling refers to distributing interrupt requests to the appropriate CPU core for processing. Conventional interrupt scheduling methods are typically based on static policies, making interrupt scheduling less effective.
It can be seen that how to improve the effectiveness of interrupt scheduling is a technical problem to be solved by those skilled in the art.
Disclosure of Invention
Accordingly, an object of the present invention is to provide an interrupt scheduling method, apparatus, device, and computer readable storage medium, which solve the technical problem of poor interrupt scheduling effect in the prior art.
In order to solve the technical problems, the present invention provides an interrupt scheduling method, including:
acquiring system load prediction data corresponding to a non-uniform memory access system;
Dynamically adjusting an interrupt scheduling strategy based on the system load prediction data to obtain a target interrupt scheduling strategy;
and determining target interrupt processing equipment corresponding to each interrupt based on the target interrupt scheduling strategy, and processing each interrupt based on the target interrupt processing equipment.
In one aspect, before determining the target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt based on the target interrupt processing device, the method further includes:
the interrupt is prioritized according to the interrupt type and the historical interrupt processing time, and the priority of each interrupt is obtained;
adjusting the target interrupt scheduling strategy based on the priority of each interrupt to obtain an adjusted target interrupt scheduling strategy;
correspondingly, determining a target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy, and processing each interrupt based on the target interrupt processing device, including:
and determining target interrupt processing equipment corresponding to each interrupt based on the adjusted target interrupt scheduling strategy, and processing each interrupt based on the target interrupt processing equipment.
In one aspect, the prioritizing the interrupts according to interrupt types and historical interrupt processing time to obtain the priority of each interrupt includes:
When the interrupt types are inconsistent, carrying out priority grading on the interrupt based on the interrupt types to obtain the priority of each interrupt;
and when the interrupt types are consistent, prioritizing the interrupts based on the historical interrupt processing time to obtain the priority of each interrupt, wherein the longer the historical interrupt processing time, the higher the priority level.
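The two-level rule above (interrupt type first, historical processing time as a tie-breaker) can be sketched as follows. The type names, rank values, and field names are illustrative assumptions, not part of the patent.

```python
# Illustrative sketch of the two-level priority rule: interrupt type first,
# then historical processing time as a tie-breaker within the same type.
TYPE_RANK = {"machine_check": 3, "hardware": 2, "software": 1}  # higher = more urgent

def prioritize(interrupts):
    """Sort interrupts so higher-priority ones come first.

    Each interrupt is a dict with 'id', 'type', and 'hist_time' (historical
    processing time); a longer historical time means a higher priority
    within the same type, as described in the text.
    """
    return sorted(
        interrupts,
        key=lambda irq: (TYPE_RANK.get(irq["type"], 0), irq["hist_time"]),
        reverse=True,
    )

irqs = [
    {"id": "disk", "type": "hardware", "hist_time": 5.0},
    {"id": "timer", "type": "software", "hist_time": 9.0},
    {"id": "mce", "type": "machine_check", "hist_time": 1.0},
    {"id": "nic", "type": "hardware", "hist_time": 8.0},
]
order = [irq["id"] for irq in prioritize(irqs)]
# machine-check outranks everything; among hardware, longer historical time wins
```

Note that the software interrupt is last despite its long historical time, because type dominates the comparison, matching the statement that type carries more weight than historical processing time.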
In one aspect, before dynamically adjusting the interrupt scheduling policy based on the system load prediction data to obtain the target interrupt scheduling policy, the method further includes:
Predicting a target interrupt and a target interrupt time point based on the historical interrupt data;
And, based on the target interrupt time point, loading the data corresponding to the target interrupt into a cache before the target interrupt occurs, to obtain a target cache.
In one aspect, after loading the data corresponding to the target interrupt into the cache before the target interrupt occurs based on the target interrupt time point, and obtaining the target cache, the method further includes:
when the data size in the target cache is determined to be larger than a set data threshold, judging whether interrupt processing frequency corresponding to each data in the target cache is larger than a set processing frequency threshold;
Determining to flush the data from the target cache when the interrupt processing frequency is not greater than the set processing frequency threshold;
And when the interrupt processing frequency is greater than the set processing frequency threshold, determining that the data is not processed.
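The trimming rule above can be sketched minimally: when the cache exceeds a size threshold, entries whose interrupt-processing frequency is not above a frequency threshold are flushed, while frequently processed entries stay. The thresholds, cache layout, and field names are illustrative assumptions.

```python
# Sketch of the cache-trimming rule from the text: evict low-frequency
# entries only when the cache has grown past its size threshold.
def trim_cache(cache, size_threshold, freq_threshold):
    """cache maps interrupt id -> {'size': bytes, 'freq': processing frequency}."""
    total = sum(entry["size"] for entry in cache.values())
    if total <= size_threshold:
        return cache  # under the limit: nothing to do
    return {
        irq: entry
        for irq, entry in cache.items()
        if entry["freq"] > freq_threshold  # low-frequency data is flushed
    }

cache = {
    "nic": {"size": 400, "freq": 120},
    "disk": {"size": 300, "freq": 10},
    "kbd": {"size": 200, "freq": 2},
}
trimmed = trim_cache(cache, size_threshold=500, freq_threshold=20)
# only the frequently processed 'nic' entry survives the trim
```

With a larger size threshold (say 1000) the same cache is returned untouched, reflecting that trimming is triggered only by the size check.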
In one aspect, in the process of performing interrupt scheduling, the method further includes:
filtering the system load prediction data based on the access control list to obtain target acquisition data;
and encrypting the target acquisition data and the interrupt scheduling result by using a data encryption mechanism to obtain target encryption data.
In one aspect, before the obtaining the system load prediction data corresponding to the non-uniform memory access system, the method further includes:
Acquiring current system performance states and historical system performance data corresponding to the non-uniform memory access system;
And predicting based on the current system performance state and the historical system performance data, and determining the system load prediction data.
In one aspect, predicting based on the current system performance state and the historical system performance data, determining the system load prediction data includes:
updating parameters of the prediction model based on the current system performance state to obtain an updated target prediction model;
And predicting by utilizing the target prediction model according to the historical system performance data, and determining the system load prediction data.
In one aspect, updating parameters of a prediction model based on the current system performance state to obtain an updated target prediction model, including:
Updating parameters of a long short-term memory network model based on the current system performance state to obtain the updated target prediction model, wherein the current system performance state is the system performance after interrupts have been executed under an interrupt scheduling policy, and the updating involves data segmentation, model initialization, parameter optimization, and model verification;
wherein data segmentation refers to segmenting the data according to data type;
model initialization refers to initializing the parameters of the long short-term memory network model;
parameter optimization refers to adjusting the model parameters to minimize a loss function; and
model verification refers to determining the prediction performance of the trained long short-term memory network model.
On the one hand, the interrupt scheduling policy is dynamically adjusted based on the system load prediction data to obtain a target interrupt scheduling policy, which comprises the following steps:
And dynamically adjusting the interrupt scheduling policy based on the system load prediction data and the current system performance state to obtain the target interrupt scheduling policy, wherein the dynamically adjusted parameters include an interrupt affinity threshold, and interrupt affinity determines the memory to which an interrupt is preferentially scheduled.
In one aspect, before predicting based on the current system performance state and the historical system performance data, determining the system load prediction data further comprises:
Extracting key feature data from the historical system performance data using principal component analysis, wherein the key feature data includes interrupt count, memory utilization, device count, interrupt frequency, and central processing unit load;
performing scale unification processing on the key feature data to obtain scale-unified performance data;
Accordingly, predicting based on the current system performance state and the historical system performance data, determining the system load prediction data includes:
And predicting based on the current system performance state and the scale-unified performance data, and determining the system load prediction data.
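The feature extraction and scale unification steps above can be sketched minimally. A variance-based filter stands in for principal component analysis (a full PCA would normally come from a numerical library), and min-max scaling performs the scale unification; both substitutions and all field names are illustrative assumptions.

```python
# Sketch of the preprocessing step: keep the most informative metrics
# (variance filter as a stand-in for PCA) and bring them to a common
# 0..1 scale (min-max scaling as the "scale unification").
def select_and_scale(rows, keep):
    """rows: list of dicts of raw metrics; keep the top-`keep` features by variance."""
    features = list(rows[0])

    def variance(f):
        vals = [r[f] for r in rows]
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals) / len(vals)

    chosen = sorted(features, key=variance, reverse=True)[:keep]
    scaled = []
    for r in rows:
        out = {}
        for f in chosen:
            vals = [x[f] for x in rows]
            lo, hi = min(vals), max(vals)
            out[f] = 0.0 if hi == lo else (r[f] - lo) / (hi - lo)
        scaled.append(out)
    return chosen, scaled

rows = [
    {"interrupts": 100, "mem_util": 0.50, "cpu_load": 10},
    {"interrupts": 900, "mem_util": 0.55, "cpu_load": 90},
]
chosen, scaled = select_and_scale(rows, keep=2)
# the nearly constant 'mem_util' column is dropped; the rest is rescaled
```

Scale unification matters here because metrics with large raw ranges (interrupt counts) would otherwise dominate metrics expressed as fractions (memory utilization) in any downstream model.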
In one aspect, after determining the target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt based on the target interrupt processing device, the method further includes:
acquiring system performance feedback data after processing the interrupt based on the target interrupt scheduling strategy, wherein the system performance feedback data comprises at least one of central processing unit utilization rate, memory access delay, input/output operation frequency, network flow, waiting time, resource utilization rate and interrupt response time;
Analyzing the system performance feedback data to determine a target performance parameter corresponding to the current system;
Comparing the target performance parameter with a performance parameter threshold;
when the target performance parameter is determined to be smaller than the performance parameter threshold, taking no action;
When the target performance parameter is determined to be greater than or equal to the performance parameter threshold, determining to send prompt information so that the client adjusts the interrupt scheduling strategy based on the prompt information;
extracting, from the system performance feedback data, the system load feedback data corresponding to the system load prediction data;
comparing the system load feedback data with the system load prediction data to determine a system load prediction difference value;
And adjusting the corresponding system load prediction model based on the system load prediction difference value to obtain an adjusted system load prediction model.
The embodiment of the invention also provides an interrupt scheduling device, which comprises:
The data acquisition module is used for acquiring system load prediction data corresponding to the non-uniform memory access system;
The dynamic adjustment module is used for dynamically adjusting the interrupt scheduling strategy based on the system load prediction data to obtain a target interrupt scheduling strategy;
and the interrupt scheduling module is used for determining target interrupt processing equipment corresponding to each interrupt based on the target interrupt scheduling strategy and processing each interrupt based on the target interrupt processing equipment.
The embodiment of the invention also provides an interrupt scheduling device, which comprises:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the interrupt scheduling method as described above.
The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program, and the computer program realizes the steps of the interrupt scheduling method when being executed by a processor.
The embodiment of the invention also provides a computer program product, which comprises a computer program/instruction, wherein the computer program/instruction realizes the steps of the interrupt scheduling method when being executed by a processor.
In order to solve the technical problems, the embodiment of the invention provides an interrupt scheduling method, which comprises the steps of obtaining system load prediction data corresponding to a non-uniform memory access system, dynamically adjusting an interrupt scheduling policy based on the system load prediction data to obtain a target interrupt scheduling policy, determining target interrupt processing equipment corresponding to each interrupt based on the target interrupt scheduling policy, and processing each interrupt based on the target interrupt processing equipment.
Compared with a conventional approach that schedules interrupts under a fixed, unchanging scheduling policy, the present method dynamically adjusts the interrupt scheduling policy based on system load prediction data to obtain the target interrupt scheduling policy. Interrupts can thus be scheduled in real time to more suitable interrupt processing devices under the dynamically adjusted target policy, making interrupt scheduling intelligent and dynamic and improving its accuracy.
Drawings
For a clearer description of the embodiments of the present invention, the drawings required in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present invention; other drawings may be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a flowchart of an interrupt scheduling method according to an embodiment of the present invention;
FIG. 2 is a flowchart of another interrupt scheduling method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating an interrupt scheduling method according to an embodiment of the present invention;
FIG. 4 is a block diagram of an interrupt scheduling method according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an interrupt scheduler according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an interrupt dispatching device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without making any inventive effort are within the scope of the present invention.
The terms "comprising" and "having" in the description of the invention and in the above-described figures, as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may include other steps or elements not expressly listed.
In order to better understand the aspects of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and detailed description.
Next, a detailed description is given of an interrupt scheduling method provided by the embodiment of the present invention. Fig. 1 is a flowchart of an interrupt scheduling method according to an embodiment of the present invention, where the method includes:
s101, acquiring system load prediction data corresponding to a non-uniform memory access system.
The embodiment is not limited to a specific execution body. For example, the execution body may be a computer, a mobile phone, or a node in the system. The non-uniform memory access (NUMA) system in this embodiment is a computer architecture used in multiprocessor or multicore systems. The embodiment is likewise not limited to a particular manner of deriving the system load prediction data, which may be obtained, for example, by analyzing historical load data, applying machine learning models, and adjusting resource allocation policies. In detail: historical load data of the system is collected first, covering indicators such as CPU (central processing unit) usage, memory usage, disk I/O (input/output), and network traffic. The collected data is then cleaned and preprocessed to ensure accuracy of the analysis; this may include removing outliers, filling in missing values, and so on. Careful analysis of the historical load data reveals characteristics such as load periodicity and trend; time series analysis is an important step here, as it helps expose how the data evolves over time. A suitable machine learning algorithm is then selected to construct the prediction model; common choices are linear regression, decision trees, random forests, and neural networks. The model is trained with the historical load data as the training set, adjusting model parameters during training to optimize predictive performance. Finally, the predictive performance of the model is evaluated on a validation or test set, using metrics such as mean squared error (MSE) and mean absolute error (MAE).
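The train-then-evaluate pipeline above can be sketched minimally. A moving-average forecaster stands in for the trained model (the text does not prescribe one), and the window size and load series are illustrative; only the MSE/MAE evaluation step mirrors the text directly.

```python
# Sketch of load prediction plus the MSE/MAE evaluation described above.
def moving_average_forecast(series, window=3):
    """Predict each point as the mean of the preceding `window` points."""
    preds = []
    for i in range(window, len(series)):
        preds.append(sum(series[i - window:i]) / window)
    return preds

def mse(actual, preds):
    return sum((a - p) ** 2 for a, p in zip(actual, preds)) / len(preds)

def mae(actual, preds):
    return sum(abs(a - p) for a, p in zip(actual, preds)) / len(preds)

load = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]  # illustrative CPU-load samples
preds = moving_average_forecast(load, window=3)
actual = load[3:]  # the points the forecaster tried to predict
```

In practice the forecaster would be the trained model (linear regression, random forest, LSTM, etc.) and the series would be the cleaned historical load data, but the evaluation loop is identical.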
S102, dynamically adjusting the interrupt scheduling strategy based on the system load prediction data to obtain a target interrupt scheduling strategy.
Interrupt scheduling in this embodiment refers to the process of how an operating system decides which task or process should be preferentially processed when receiving an interrupt signal, and by reasonably arranging the order of interrupt processing, the response time of the system is reduced, and the performance of the system is improved. This embodiment does not limit the original interrupt scheduling policy. For example, the interrupt scheduling policy in this embodiment may be a device corresponding to each interrupt, or the interrupt scheduling policy in this embodiment may also be a priority of each interrupt process. This embodiment is not limited to a particular manner of dynamically adjusting the interrupt scheduling policy based on the system load prediction data. For example, the embodiment may determine the load of each interrupt processing apparatus based on the system load prediction data, determine the interrupt processing capability according to the load, preferentially allocate the interrupt to the interrupt processing apparatus having the high interrupt processing capability, and ensure that the interrupt is preferentially allocated to the processing apparatus having the low load and the high processing capability.
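The load-aware assignment just described (prefer devices with low predicted load and high processing capability) can be sketched as follows. The device fields `pred_load` and `capability`, the headroom formula, and the per-assignment load increment are illustrative assumptions; the text does not specify a formula.

```python
# Sketch of dynamically rebuilding the scheduling policy from predicted
# per-device load: each interrupt goes to the device with the most
# effective headroom, and assignments feed back into the load estimate.
def build_policy(interrupts, devices):
    """devices: id -> {'pred_load': 0..1 predicted load, 'capability': relative throughput}."""
    policy = {}
    load = {d: info["pred_load"] for d, info in devices.items()}
    for irq in interrupts:
        # effective headroom = capability scaled by the remaining idle fraction
        best = max(devices, key=lambda d: devices[d]["capability"] * (1 - load[d]))
        policy[irq] = best
        load[best] += 0.1  # account for the newly assigned interrupt
    return policy

devices = {
    "cpu0": {"pred_load": 0.9, "capability": 1.0},  # nearly saturated
    "cpu1": {"pred_load": 0.2, "capability": 1.0},  # mostly idle
}
policy = build_policy(["nic", "disk"], devices)
# both interrupts land on the lightly loaded device
```

Re-running this whenever fresh load predictions arrive is what makes the policy "dynamic" in the sense of the text, as opposed to a static table computed once.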
It should be further noted that, in order to improve the efficiency of interrupt processing, before dynamically adjusting the interrupt scheduling policy based on the system load prediction data to obtain the target interrupt scheduling policy, the method may further include: predicting a target interrupt and a target interrupt time point based on historical interrupt data, and, based on that time point, loading the data corresponding to the target interrupt into a cache before the target interrupt occurs, to obtain the target cache. The target cache in this embodiment stores the data corresponding to each interrupt; after an interrupt is processed, its data may be deleted from the target cache to free caching capacity. By predicting the target interrupt and its time point from historical interrupt data, the corresponding data is placed in the cache in time, reducing data loading time during interrupt processing.
It should be further noted that, after loading the data corresponding to the target interrupt into the cache before the target interrupt occurs and obtaining the target cache, the method may further include: when the size of the data in the target cache is determined to be greater than a set data threshold, judging whether the interrupt processing frequency corresponding to each piece of data in the target cache is greater than a set processing frequency threshold; when the interrupt processing frequency is not greater than that threshold, flushing the data from the target cache; and when it is greater, leaving the data untouched. When processing the data in the target cache, this embodiment considers the processing frequency of each interrupt and preferentially clears the data of low-frequency interrupts, improving the storage capacity of the target cache. Data for high-frequency interrupts is not deleted, which reduces how often that data must be rewritten into the target cache and improves processing capability; after such an interrupt completes, its cached data is updated with the latest corresponding data.
S103, determining target interrupt processing equipment corresponding to each interrupt based on a target interrupt scheduling strategy, and processing each interrupt based on the target interrupt processing equipment.
The target interrupt handling device in this embodiment is a device that can handle an interrupt. The embodiment does not limit its specific type: it may be a CPU, or a DMA (Direct Memory Access) controller, which allows a peripheral to exchange data directly with system memory without CPU intervention. The target interrupt scheduling policy may include at least one of: the correspondence between interrupts and interrupt processing devices, the interrupt processing device closest to each interrupt, and the processing capability of the device corresponding to each interrupt. "Closest" here means belonging to the same NUMA node: each interrupt initially has a corresponding device, that device belongs to a NUMA node, other interrupt processing devices exist under the same node, and the interrupt can be preferentially allocated to devices under that node. The capability of an interrupt processing device refers to the device's own specification and architecture. Based on the target interrupt scheduling policy, each interrupt can be scheduled to a target interrupt processing device with low load, where low load means a load value below a set threshold; alternatively, the devices can be ordered by load according to the system load prediction data, and a number of devices equal to the number of interrupts selected from the least-loaded end of the ordering to process them.
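The same-NUMA-node preference described above can be sketched as follows; the topology, load values, and field names are illustrative assumptions.

```python
# Sketch of target-device selection: prefer a device on the same NUMA node
# as the interrupt's original device, then pick the least-loaded candidate;
# fall back to any node only when the local node has no candidates.
def pick_device(origin_node, devices):
    """devices: id -> {'node': NUMA node, 'load': current load}."""
    local = [d for d, info in devices.items() if info["node"] == origin_node]
    pool = local if local else list(devices)  # fall back to remote nodes
    return min(pool, key=lambda d: devices[d]["load"])

devices = {
    "cpu0": {"node": 0, "load": 0.7},
    "cpu1": {"node": 0, "load": 0.3},
    "cpu2": {"node": 1, "load": 0.1},
}
target = pick_device(origin_node=0, devices=devices)
# cpu1: same node as the interrupt's original device, lowest load there
```

Note that cpu2 is globally least loaded but sits on a remote node; preferring cpu1 trades a little load balance for local memory access, which is the point of NUMA-aware scheduling.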
It should be further noted that, before determining the target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt based on that device, the method may further include: prioritizing the interrupts according to interrupt type and historical interrupt processing time to obtain the priority of each interrupt, and adjusting the target interrupt scheduling policy based on those priorities to obtain an adjusted target interrupt scheduling policy. Accordingly, the target interrupt processing device corresponding to each interrupt is determined based on the adjusted policy, and each interrupt is processed by that device. The specific interrupt types are not limited. For example, they may be hardware interrupts and software interrupts; the priority of hardware interrupts is generally considered higher than that of software interrupts, because hardware interrupts may involve device failures or hardware errors that need to be handled as soon as possible. Machine-check interrupts (e.g., power failures, bus errors) are typically of higher priority because they concern system stability and data security. The interrupt source in this embodiment may also be a NUMA (Non-Uniform Memory Access) disk, keyboard, mouse, and so on. The historical interrupt processing time refers to the time each interrupt takes to be processed, and its relationship to priority may be chosen as needed.
For example, the longer the historical processing time, the lower the priority, so such interrupts may be handled later; or, conversely, the longer the historical processing time, the higher the priority, since interrupts that take longer should be started earlier to guarantee they finish within a set time. Adjusting the target interrupt scheduling policy based on the priority of each interrupt means that target devices may be preferentially allocated to high-priority interrupts.
The prioritizing of interrupts according to interrupt type and historical interrupt processing time includes: when the interrupt types differ, grading priority based on interrupt type to obtain the priority of each interrupt; and when the interrupt types are the same, grading priority based on historical interrupt processing time, wherein the longer the historical interrupt processing time, the higher the priority level. In determining interrupt priority, interrupt type carries more weight in this embodiment than historical interrupt processing time.
In order to improve the security of interrupt scheduling, the method can further comprise, during the interrupt scheduling process, filtering the system load prediction data based on an access control list to obtain target collected data, and encrypting the target collected data and the interrupt scheduling result using a data encryption mechanism to obtain target encrypted data. The access control list (ACL) in this embodiment is a security mechanism for defining and restricting access rights to system resources. The data encryption mechanism is a technical means of converting plaintext data into ciphertext through an algorithm and a key, ensuring security during data transmission or storage. The embodiment is not limited to a specific data encryption mechanism; for example, it may be symmetric encryption or asymmetric encryption. This embodiment prevents malicious interrupt attacks and data leakage through access control lists (ACLs) and data encryption techniques.
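The ACL filtering step can be sketched minimally; the metric names and the allowlist are illustrative assumptions. The subsequent encryption step is deliberately omitted here, since in practice it would use a real cipher (e.g. AES via a cryptography library) rather than anything hand-rolled.

```python
# Sketch of the ACL filtering step: only metrics whose names appear on the
# access control list survive into the target collected data. The encryption
# of the surviving data and scheduling results is out of scope for this sketch.
def acl_filter(samples, allowed):
    """Keep only the allowed fields of each collected sample."""
    return [{k: v for k, v in s.items() if k in allowed} for s in samples]

acl = {"cpu_usage", "interrupt_rate"}  # illustrative allowlist
samples = [
    {"cpu_usage": 0.4, "interrupt_rate": 150, "proc_names": ["sshd"]},
]
filtered = acl_filter(samples, acl)
# sensitive process information never reaches the scheduler's data path
```

Filtering before encryption is the sensible order: data the ACL forbids should never be collected into the encrypted payload in the first place.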
The interrupt scheduling method provided by the embodiment of the present invention comprises: S101, acquiring system load prediction data corresponding to a non-uniform memory access system; S102, dynamically adjusting the interrupt scheduling policy based on the system load prediction data to obtain a target interrupt scheduling policy; and S103, determining the target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt based on that device. Compared with traditional static scheduling, the method dynamically adjusts the interrupt scheduling policy based on system load prediction data to obtain the target interrupt scheduling policy and schedules interrupts under that dynamically adjusted policy, thereby making interrupt scheduling intelligent and dynamic.
For the sake of better understanding of the present invention, please refer to fig. 2, fig. 2 is a flowchart of another interrupt scheduling method according to an embodiment of the present invention, which may specifically include:
S201, acquiring current system performance state and historical system performance data.
The current system performance state in this embodiment refers to how the computer hardware and software perform during operation, covering key indicators such as response speed, processing capability, and stability. This embodiment is not limited to a particular method of determining the system performance state. For example, system performance may be determined from CPU usage: as the "brain" of the computer, the CPU's usage reflects the current system load, and high usage may mean that some programs occupy too many computing resources, slowing system response. Alternatively, this embodiment may determine performance from memory usage: memory is where the computer temporarily stores running programs and data, and if available memory is insufficient, the system may need to read data from the hard disk frequently, which significantly reduces performance. This embodiment may also be based on the disk read-write speed, which directly influences data access efficiency; if the disk stays under high load for a long time, a background service or application may be performing a large amount of data exchange. This embodiment may also be based on network bandwidth utilization, an important indicator for network-dependent applications, since network congestion may cause data transmission delays and affect the user experience. This embodiment may also be based on system stability indicators, which can be viewed through a reliability monitor together with the various events that occur in the system (e.g., software installation, system updates, application crashes); this information helps determine whether system performance is stable. This embodiment may also use third-party performance testing tools to comprehensively evaluate the performance level of the system.
Alternatively, in this embodiment the system performance may be determined based on at least two of the above parameters. The system performance state in this embodiment is used to determine the level of current system performance, and thus to determine whether performance has degraded after processing interrupts based on the target interrupt scheduling policy: if performance has degraded, the current interrupt scheduling policy needs to be adjusted so that performance remains unchanged or even improves. This embodiment is not limited to specific historical system performance data; for example, the historical system performance data may be at least one of CPU load, memory usage, interrupt frequency, and network traffic.
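The combination of several indicators into a single performance level can be sketched as follows (an illustrative Python sketch; the specific metric names, weighting scheme, and tolerance value are assumptions for illustration, not part of the embodiment):

```python
def performance_score(cpu_usage, mem_usage, disk_busy, net_util,
                      weights=(0.4, 0.3, 0.2, 0.1)):
    """Combine normalized utilization metrics (each in 0..1) into one
    load score; higher means the system is more heavily loaded."""
    metrics = (cpu_usage, mem_usage, disk_busy, net_util)
    if any(not 0.0 <= m <= 1.0 for m in metrics):
        raise ValueError("metrics must be normalized to [0, 1]")
    return sum(w * m for w, m in zip(weights, metrics))

def is_degraded(current_score, baseline_score, tolerance=0.05):
    """Flag degradation when the current load score exceeds the baseline
    by more than the tolerance, signaling the policy needs adjusting."""
    return current_score > baseline_score + tolerance
```

Comparing the score computed after applying the target interrupt scheduling policy against a pre-adjustment baseline gives the degraded/not-degraded decision described above.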
S202, predicting based on the current system performance state and the historical system performance data, and determining system load prediction data.
The embodiment is not limited to a particular method of predicting based on current system performance states and historical system performance data to determine system load prediction data. For example, the embodiment may predict based on the current system performance state and the historical system performance data based on the machine learning method to determine the system load prediction data, or the embodiment may also predict and analyze based on the current system performance state and the historical system performance data to obtain a rule of load change, and determine the system load prediction data based on the rule of load change.
It should be further noted that, in order to improve accuracy of determining the system load prediction data, the predicting based on the current system performance state and the historical system performance data may include:
S2021, updating parameters of the prediction model based on the current system performance state to obtain an updated target prediction model;
S2022, predicting by using the target prediction model according to the historical system performance data, and determining the system load prediction data.
In this embodiment, the parameters of the prediction model may be updated based on the current system performance state. When the prediction model is a long short-term memory model, the adjustable parameters include the memory depth and memory length. It will be appreciated that if, after the target interrupt scheduling policy was adjusted based on the last predicted system load prediction data, the performance of the system is lower than before, this indicates a problem with the previous prediction, and therefore the parameters of the prediction model that affected that prediction data need to be adjusted. This embodiment can predict the future system load and interrupt demand in real time. The prediction model in this embodiment can adjust its parameters in time according to feedback (the current system performance state), improving the prediction effect of the model.
Updating the parameters of the prediction model based on the current system performance state to obtain an updated target prediction model may comprise: updating the parameters of a long short-term memory network model based on the current system performance state to obtain the updated target prediction model, wherein the current system performance state is the system performance after interrupts have been executed based on the interrupt scheduling policy. The training process of the long short-term memory network model comprises data segmentation, model initialization, parameter optimization, and model verification: data segmentation refers to splitting the data according to data type; model initialization refers to initializing the parameters of the long short-term memory network model; parameter optimization refers to adjusting the parameters of the model to minimize a loss function; and model verification refers to determining the prediction performance of the trained long short-term memory network model. In this embodiment, the LSTM (long short-term memory model) is time-sensitive, can learn patterns and characteristics in time-series data, and is suitable for tasks such as time-series prediction and signal processing. This embodiment uses a long short-term memory network (LSTM) as the core prediction model, which can accurately capture time-series changes and complex patterns of the system load. Through its memory units and gating mechanism, the LSTM model can effectively handle long-term dependencies and is suitable for dynamic prediction of the system load. The LSTM model is combined with a real-time data stream to form a closed-loop prediction system in which the prediction result can be updated within milliseconds, achieving accurate prediction of the system load.
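The memory-unit and gating mechanism mentioned above can be sketched with a minimal single-cell LSTM forward pass (a toy NumPy sketch for illustration only; the weight initialization, dimensions, and the scalar projection head are assumptions, and this is not the trained model of the embodiment):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: forget/input/output gates plus a candidate
    state, the mechanism that lets the model keep long-term load trends."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        k = n_in + n_hidden
        # one weight matrix per gate, acting on [x | h] concatenated
        self.Wf, self.Wi, self.Wo, self.Wc = (
            rng.normal(0, 0.1, (n_hidden, k)) for _ in range(4))
        self.bf = self.bi = self.bo = self.bc = np.zeros(n_hidden)

    def step(self, x, h, c):
        z = np.concatenate([x, h])
        f = sigmoid(self.Wf @ z + self.bf)   # forget gate
        i = sigmoid(self.Wi @ z + self.bi)   # input gate
        o = sigmoid(self.Wo @ z + self.bo)   # output gate
        c_new = f * c + i * np.tanh(self.Wc @ z + self.bc)
        h_new = o * np.tanh(c_new)
        return h_new, c_new

def predict_load(cell, series, head_w):
    """Run the cell over a load series and project the final hidden
    state to a scalar next-step load prediction."""
    h = np.zeros(head_w.shape[0])
    c = np.zeros_like(h)
    for x in series:
        h, c = cell.step(np.atleast_1d(x), h, c)
    return float(head_w @ h)
```

In practice the parameters (here the gate weights) are the quantities re-tuned when the feedback indicates a degraded prediction.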
It should be further noted that, in order to improve the accuracy of the data, before predicting based on the current system performance state and the historical system performance data to determine the system load prediction data, the method may further comprise: extracting key feature data from the historical system performance data by using a principal component analysis method, where the key feature data includes the number of interrupts, the memory utilization rate, the number of devices, the interrupt frequency, and the central processing unit load; and performing scale unification processing on the key feature data to obtain performance data with a uniform scale. Correspondingly, predicting based on the current system performance state and the historical system performance data to determine the system load prediction data may comprise: predicting based on the current system performance state and the performance data with a uniform scale, and determining the system load prediction data. This embodiment uses principal component analysis (PCA) to extract key feature data from the historical system performance data. PCA is a statistical method that linearly transforms raw data into a new coordinate system whose basis vectors are the principal components of the raw data; this helps identify the most important features in the data and remove redundant information. This embodiment performs scale unification on the key feature data to eliminate the dimensional influence among different features, making the data more suitable for subsequent analysis and prediction.
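The standardization and PCA steps can be sketched as follows (an illustrative NumPy sketch: PCA computed via SVD of the standardized data, with rows as samples and columns as the feature metrics; the specific data layout is an assumption):

```python
import numpy as np

def standardize(X):
    """Bring each feature column to zero mean and unit variance so that
    differently scaled metrics (counts, percentages, loads) are comparable."""
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    sigma = sigma.copy()
    sigma[sigma == 0] = 1.0          # guard constant columns
    return (X - mu) / sigma

def pca_reduce(X, n_components):
    """Project the standardized data onto its leading principal components
    (directions of maximal variance), obtained from the SVD."""
    Xs = standardize(X)
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ Vt[:n_components].T
```

Here each row would be one historical sample and each column one metric (interrupt count, memory utilization, device count, interrupt frequency, CPU load), with the reduced matrix fed to the prediction model.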
S203, dynamically adjusting the interrupt scheduling policy based on the system load prediction data to obtain a target interrupt scheduling policy.
This embodiment can determine the performance of each target interrupt processing device based on the system load prediction data, and preferentially adjust the target interrupt processing device corresponding to each interrupt in the interrupt scheduling policy accordingly. In this embodiment, mapping the same network card in the target interrupt scheduling policy to the same CPU as far as possible means that, in a multi-core processor system, interrupt requests generated by specific hardware (such as a network card) are preferentially allocated to one or several fixed CPU cores for processing, in order to improve processing efficiency and system performance. This strategy has several key aspects. Interrupt affinity: by setting the interrupt affinity (IRQ affinity), a particular interrupt request can be bound to a particular CPU. The purpose is to avoid concentrating the processing of all interrupts on a few CPUs, which would overload those CPUs while the other CPUs remain relatively idle. For a network card supporting multiple queues, interrupts can be uniformly distributed to different CPUs through hardware queues; for a network card without multi-queue support, a similar effect can be achieved through software queues. In this way, each CPU handles a portion of the interrupts, improving overall processing efficiency. Binding network card interrupts to fixed CPUs can reduce the cost of interrupt processing, improve network data throughput, and reduce delay, because when multiple CPUs handle interrupts from the same network card at the same time, additional context switching and cache coherency issues may occur and affect performance.
It should be further noted that, in order to improve the accuracy of the interrupt policy, dynamically adjusting the interrupt scheduling policy based on the system load prediction data to obtain the target interrupt scheduling policy may comprise: dynamically adjusting the interrupt scheduling policy based on the system load prediction data and the current system performance state to obtain the target interrupt scheduling policy, where the dynamically adjusted parameters include an interrupt affinity threshold, and the interrupt affinity includes scheduling interrupts preferentially to memory. The interrupt affinity threshold in this embodiment covers scheduling interrupts preferentially to memory: when memory is insufficient, a maximum CPU threshold is determined for the interrupts to be handled on each CPU, and if the current CPU threshold is greater than the maximum CPU threshold, that CPU will not be scheduled. By applying the interrupt affinity threshold, this embodiment preferentially distributes interrupts to CPUs with lower load, thereby improving interrupt processing efficiency.
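On Linux, interrupt affinity is applied by writing a hexadecimal CPU bitmask to `/proc/irq/<n>/smp_affinity` (a standard procfs interface). The following sketch shows the mask construction and a threshold-gated least-loaded CPU choice; the `pick_cpu` threshold logic is a hypothetical illustration of the interrupt affinity threshold, not the embodiment's exact rule:

```python
def cpu_mask(cpus):
    """Build the hex CPU bitmask Linux expects in
    /proc/irq/<n>/smp_affinity (bit i set => CPU i may handle the IRQ)."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

def pick_cpu(loads, max_load):
    """Choose the least-loaded CPU whose load stays under the per-CPU
    threshold; return None if every CPU is over the threshold."""
    cpu = min(range(len(loads)), key=loads.__getitem__)
    return cpu if loads[cpu] < max_load else None

def set_irq_affinity(irq, cpus):
    # Requires root; this path is the standard Linux procfs interface.
    with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
        f.write(cpu_mask(cpus))
```

For example, binding an IRQ to CPUs 0 and 1 writes the mask "3", while CPU 4 alone writes "10".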
S204, determining target interrupt processing equipment corresponding to each interrupt based on a target interrupt scheduling strategy, and processing each interrupt based on the target interrupt processing equipment.
This embodiment is not limited to a specific target interrupt processing device; for example, the target interrupt processing device may be a CPU. This embodiment may adopt Q-learning, a value-based, model-free reinforcement learning algorithm, to learn from and optimize the real-time feedback of the scheduling policy.
It should be further noted that, in the process of processing interrupts based on the interrupt scheduling policy, the method may further comprise: determining whether the current interrupt processing device is processing other interrupts; and, once the priorities of the current interrupt and the other interrupts are determined, processing interrupt requests with higher priority first while masking interrupt requests of the same or lower level, forming interrupt nesting. Alternatively, in the case of a multi-core CPU (interrupt processing device), performance can be well balanced if a large number of hardware interrupts are allocated to different CPU cores. For example, if a server has multiple CPU cores, multiple network cards, and multiple hard disks, letting the network card interrupt occupy one CPU core exclusively and the disk I/O interrupt occupy another can greatly reduce the load of any single CPU and improve overall processing efficiency.
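The nesting rule above (only strictly higher priority preempts; equal or lower stays masked) can be sketched as a small dispatch function (an illustrative sketch; the priority encoding, with larger numbers meaning more urgent, is an assumption):

```python
def dispatch(pending, active_priority=None):
    """Select the next interrupt to service. Only requests strictly higher
    in priority than the one currently being handled may preempt; equal or
    lower priorities stay masked (interrupt nesting).
    `pending` maps irq id -> priority (larger = more urgent)."""
    eligible = {irq: p for irq, p in pending.items()
                if active_priority is None or p > active_priority}
    if not eligible:
        return None                     # everything pending is masked
    return max(eligible, key=eligible.get)
```

With no interrupt in service, the highest-priority pending request wins; while a priority-5 handler runs, only requests above 5 are eligible.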
It should be further noted that, based on any embodiment above, in order to improve the stability of system operation and the accuracy of prediction, after determining the target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt based on the target interrupt processing device, the method may further comprise: obtaining system performance feedback data after the interrupts have been processed based on the target interrupt scheduling policy, where the system performance feedback data includes at least one of central processing unit usage rate, memory access delay, input/output operation frequency, network traffic, waiting time, resource utilization rate, and interrupt response time; analyzing the system performance feedback data and determining the target performance parameter corresponding to the current system; comparing the target performance parameter with a performance parameter threshold; when the target performance parameter is determined to be less than the performance parameter threshold, performing no processing; when the target performance parameter is determined to be greater than or equal to the performance parameter threshold, sending prompt information so that the client adjusts the interrupt scheduling policy based on the prompt information; comparing the system performance feedback data with the system load prediction data to determine a system load prediction difference; and adjusting the system load prediction model based on the system load prediction difference.
In this embodiment, analyzing the system performance feedback data to determine the target performance parameter corresponding to the current system refers to integrating all of the system performance feedback data into a single, better-determined target performance parameter, so that it accurately reflects the state of the current system. When it is determined that the target performance parameter is greater than or equal to the performance parameter threshold, the system sends prompt information to the client. These prompts may include performance warnings, suggested optimization measures, or adjustment suggestions for the interrupt scheduling policy, so that the client can learn about the system conditions in time and adjust accordingly. This embodiment adjusts the original system load prediction model based on the calculated system load prediction difference, which may include modifying model parameters, introducing new influencing factors, or substituting a more appropriate prediction algorithm. The adjusted model should be able to predict future system load conditions more accurately, thereby providing strong support for optimizing the interrupt scheduling policy.
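The two feedback checks (performance threshold and prediction difference) can be sketched together (an illustrative sketch; combining them into one function, and the relative-error form of the prediction difference, are assumptions):

```python
def evaluate_feedback(target_perf, perf_threshold,
                      predicted_load, actual_load):
    """Return (alert, prediction_error): `alert` asks the client to revisit
    the scheduling policy when the target performance parameter reaches the
    threshold; a large relative prediction error signals that the load
    prediction model itself should be re-tuned."""
    alert = target_perf >= perf_threshold
    err = abs(actual_load - predicted_load) / max(actual_load, 1e-9)
    return alert, err

# e.g. performance over threshold and the load was under-predicted by 25%
alert, err = evaluate_feedback(0.92, 0.9, predicted_load=0.6, actual_load=0.8)
```

A caller would send the prompt information when `alert` is true and feed `err` into the model-adjustment step.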
According to the interrupt scheduling method provided by the embodiment of the invention, the load and interrupt processing capacity of each CPU are calculated in real time, and the affinity and mapping strategy of interrupts are dynamically adjusted. The innovation lies in introducing an adaptive threshold adjustment mechanism that dynamically adjusts the parameters of the scheduling policy according to the current state of the system, ensuring that interrupts are preferentially distributed to CPUs with lower load and stronger processing capacity. The specific beneficial effects are as follows:
First, system performance is improved: under high-load network request scenarios, interrupt processing time is reduced by 30% and overall throughput is improved by 25%.
Second, resource utilization is enhanced: dynamically adjusting the interrupt strategy reduces the waste of CPU and memory resources.
Third, system stability is improved: through the adaptive optimization and fault-tolerance mechanisms, the system can still run stably under abnormal conditions.
For easy understanding, please refer to fig. 3, fig. 3 is a flowchart of an interrupt scheduling method according to an embodiment of the present invention, which specifically includes:
S301, collecting system performance data, where the system performance data includes multidimensional data on CPU load, memory utilization rate, interrupt frequency, and network traffic.
For easy understanding, please refer to fig. 4. Fig. 4 is a structural framework diagram of an interrupt scheduling method according to an embodiment of the present invention. As can be seen from fig. 4, the whole method comprises an acquisition module, a data processing module, a prediction module, a machine learning module, a scheduling strategy, and an enhanced feedback module. The data acquisition module of this embodiment is designed on the principle that it is responsible for monitoring the running state of the system in real time and acquiring multidimensional data including CPU load, memory utilization rate, interrupt frequency, network traffic, and the like. To ensure the accuracy and real-time performance of the data, the module adopts high-precision sensors and a data acquisition card. Specifically, sensors are deployed on the key nodes of the system to obtain the various performance indicators in real time. The data is transmitted to the central processing unit via a high-speed bus and stored in a distributed database for subsequent processing and analysis.
S302, applying data cleaning, normalization, and dimension reduction to the system performance data to obtain the target system performance data.
The design principle of the data processing module of this embodiment is that, after data acquisition, the system needs to perform feature extraction and preprocessing on the data in order to identify the key factors affecting interrupt scheduling. Data cleaning, normalization, and dimension reduction techniques are adopted to remove noise and redundant information and improve data quality. Specifically, principal component analysis (PCA) is used to extract the main features in the data (the main features chiefly refer to the number of interrupts, the memory utilization rate, the number of devices, the total interrupt load, and the CPU load), and the data is converted to a uniform scale through standardization. The data after feature extraction is stored in a feature database for use by the prediction model. The data processing in this embodiment may also include categorizing the data, for example, determining which NUMA node and hardware layout the CPU, memory, network card, and so on belong to.
S303, calling a prediction model at fixed time intervals, predicting according to the system performance data, and determining system load prediction data.
The design principle of the machine learning module in this embodiment is that a long short-term memory network (LSTM) is adopted to construct the prediction model, so that time-series changes and complex patterns of the system load can be captured. The model is trained on historical data, and its generalization capability is improved through cross-validation and hyperparameter optimization. Model training is carried out in a GPU-accelerated environment to improve training efficiency. The training process comprises data segmentation, model initialization, parameter optimization, and model verification. Finally, the trained model is deployed on a prediction server to provide load prediction results in real time. A closed-loop system is designed so that the load prediction result is fed back to the scheduling strategy in real time: by predicting future load changes, the interrupt distribution strategy is adjusted in advance, avoiding performance bottlenecks during load peaks. Dynamic parameter adjustment means dynamically adjusting key parameters of the scheduling algorithm, such as the interrupt affinity threshold and the load balancing strategy, according to the predicted load trend. The design principle of the prediction module in this embodiment is to deploy the trained LSTM model to predict the future system load and interrupt requirements in real time (interrupt requirements refer to the requirements for completing interrupts). Based on the prediction result, the affinity and mapping strategy of interrupts are dynamically adjusted, and interrupts are preferentially distributed to CPUs with lower load. Specifically, the system calls the prediction model at fixed time intervals and predicts according to the current system state (feedback) and historical data.
The prediction result is used to update an interrupt schedule, and the schedule is stored in shared memory for real-time access by the scheduler. Priority ranking: interrupts are divided into multiple priority levels (e.g., high, medium, low), with the ranking based on the source, type, and historical processing time of each interrupt. Priority scheduling strategy: when designing the scheduling strategy, high-priority interrupts are processed first to ensure timely response of key tasks, while the processing time of low-priority interrupts is dynamically adjusted to avoid affecting the overall performance of the system. Cache prefetching mechanism: when certain interrupts are predicted to be about to occur, the related data is loaded into the cache in advance, reducing the data loading time during interrupt processing. Cache replacement strategy optimization: the cache replacement strategy is dynamically adjusted according to the frequency of interrupt processing and the data access pattern, preferentially retaining data accessed at high frequency.
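The frequency-aware prefetch-and-evict behavior can be sketched as a toy cache (an illustrative sketch; the size budget, the per-entry frequency counter, and the eviction tie-breaking are assumptions):

```python
class PrefetchCache:
    """Toy interrupt-data cache: entries prefetched ahead of predicted
    interrupts are kept while their interrupt-handling frequency exceeds a
    threshold; when the cache outgrows its size budget, low-frequency
    entries are evicted first."""
    def __init__(self, max_items, min_freq):
        self.max_items, self.min_freq = max_items, min_freq
        self.store = {}      # irq id -> (data, access frequency)

    def prefetch(self, irq, data):
        self.store[irq] = (data, 0)
        self._evict()

    def access(self, irq):
        data, freq = self.store[irq]
        self.store[irq] = (data, freq + 1)
        return data

    def _evict(self):
        while len(self.store) > self.max_items:
            # evict the coldest entry, but never a hot one
            irq, (_, freq) = min(self.store.items(),
                                 key=lambda kv: kv[1][1])
            if freq > self.min_freq:
                break        # everything left is above the threshold
            del self.store[irq]
```

This mirrors the rule elsewhere in the embodiment: data whose interrupt-processing frequency is not greater than the threshold is cleared, while frequently used data is retained.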
S304, adjusting the interrupt scheduling strategy according to the system load prediction data to obtain the adjusted interrupt scheduling strategy.
S305, adjusting the adjusted interrupt scheduling strategy according to the interrupt type and the historical processing time to obtain the interrupt scheduling strategy based on the priority.
S306, distributing the interrupt to the corresponding CPU by using the interrupt scheduling strategy based on the priority.
S307, the prediction model and the interrupt scheduling strategy are continuously updated through the interrupt scheduling execution effect.
The design principle of the adaptive optimization module in this embodiment is that the machine learning model is continuously updated through a feedback mechanism to adapt to changes in the system environment. A Q-learning algorithm is introduced to adjust the policy parameters according to the scheduling effect, achieving adaptive optimization. Specifically, the system records the scheduling effect after each scheduling round, including indicators such as interrupt processing time and CPU utilization rate. Through the Q-learning algorithm, the system updates the policy parameters based on this feedback to optimize scheduling performance. Strategy exploration and optimization: through the exploration mechanism in reinforcement learning, different combinations of scheduling strategies are continually tried, and their influence on system performance is recorded. Reward mechanism design: a reasonable reward mechanism is designed, taking interrupt processing efficiency, system load balance, and the like as reward indicators, to guide the direction of policy optimization. The security and fault-tolerance mechanisms are designed on the principle of preventing malicious interrupt attacks and data leakage, and a fault-tolerance mechanism is introduced to ensure that the system can still run stably in the event of a hardware fault or abnormality. The security of system data is protected by setting an access control list (ACL) and a data encryption mechanism. The system periodically performs fault detection and automatically switches to a standby scheme when an abnormality is detected, ensuring the stability of the system.
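The Q-learning update and epsilon-greedy exploration described above can be sketched in tabular form (an illustrative sketch; the state discretization into coarse load levels and the reward definition are assumptions, not the embodiment's exact design):

```python
import random

class QScheduler:
    """Tabular Q-learning sketch: states are coarse system-load levels,
    actions are candidate CPUs; the reward would combine interrupt
    processing time and load balance as described in the text."""
    def __init__(self, n_states, n_cpus, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = [[0.0] * n_cpus for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def choose(self, state, rng=random):
        if rng.random() < self.eps:                 # explore
            return rng.randrange(len(self.q[state]))
        row = self.q[state]                         # exploit best known CPU
        return row.index(max(row))

    def update(self, state, cpu, reward, next_state):
        # standard Q-learning temporal-difference update
        best_next = max(self.q[next_state])
        td = reward + self.gamma * best_next - self.q[state][cpu]
        self.q[state][cpu] += self.alpha * td
```

After each scheduling round, the recorded effect (processing time, CPU utilization) is converted into `reward` and fed to `update`, gradually steering `choose` toward better CPU assignments.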
The following describes an interrupt scheduling device provided by an embodiment of the present invention, and the interrupt scheduling device described below and the interrupt scheduling method described above may be referred to correspondingly.
Fig. 5 is a schematic structural diagram of an interrupt scheduling device according to an embodiment of the present invention, which may include:
The data acquisition module 100 is configured to acquire system load prediction data corresponding to the non-uniform memory access system;
the dynamic adjustment module 200 is configured to dynamically adjust the interrupt scheduling policy based on the system load prediction data, so as to obtain a target interrupt scheduling policy;
The interrupt scheduling module 300 is configured to determine, based on the target interrupt scheduling policy, a target interrupt processing device corresponding to each interrupt, and process each interrupt based on the target interrupt processing device.
Further, based on the above embodiment, the interrupt scheduling apparatus may further include:
the priority determining module is used for carrying out priority grading on the interrupt according to the interrupt type and the historical interrupt processing time to obtain the priority of each interrupt;
The priority-based interrupt scheduling policy adjustment module is used for adjusting the target interrupt scheduling policy based on the priority of each interrupt to obtain an adjusted target interrupt scheduling policy;
Accordingly, the interrupt scheduling module 300 includes:
And the interrupt processing unit is used for determining target interrupt processing equipment corresponding to each interrupt based on the adjusted target interrupt scheduling strategy and processing each interrupt based on the target interrupt processing equipment.
Further, based on any of the above embodiments, the priority determining module may include:
The priority grading unit is used for grading the priority of the interrupt based on the interrupt type when the interrupt types are inconsistent, so as to obtain the priority of each interrupt;
And the priority determining unit is used for carrying out priority grading on the interrupt based on the historical interrupt processing time when the interrupt types are consistent, so as to obtain the priority of each interrupt, wherein the higher the historical interrupt processing time is, the higher the priority grade is.
Further, based on any of the above embodiments, the interrupt scheduling apparatus may further include:
The target interrupt and target interrupt time point prediction module is used for predicting target interrupt and target interrupt time points based on historical interrupt data;
And the target cache determining unit is used for loading the data corresponding to the target interrupt into a cache based on the interrupt time point before the target interrupt occurs, so as to obtain a target cache.
Further, based on the above embodiment, the interrupt scheduling apparatus may further include:
The judging module is used for judging whether the interrupt processing frequency corresponding to each data in the target cache is larger than a set processing frequency threshold value or not when the data size in the target cache is determined to be larger than the set data threshold value;
the data clearing module is used for determining to clear the data from the target cache when the interrupt processing frequency is not greater than the set processing frequency threshold;
and the non-processing module is used for determining not to process the data when the interrupt processing frequency is larger than the set processing frequency threshold value.
Further, based on any of the above embodiments, the interrupt scheduling apparatus may further include:
the data filtering module is used for filtering the system load prediction data based on the access control list to obtain target acquisition data;
And the data encryption module is used for encrypting the target acquisition data and the interrupt scheduling result by utilizing a data encryption mechanism to obtain target encryption data.
Further, based on any of the above embodiments, the interrupt scheduling apparatus may further include:
The performance data acquisition module is used for acquiring the current system performance state and the historical system performance data corresponding to the non-uniform memory access system;
And the system load prediction data determining module is used for predicting based on the current system performance state and the historical system performance data and determining the system load prediction data.
Further, based on the above embodiment, the above system load prediction data determining module may include:
the parameter updating unit is used for updating the parameters of the prediction model based on the current system performance state to obtain an updated target prediction model;
and the target prediction model prediction unit is used for predicting by utilizing the target prediction model according to the historical system performance data and determining the system load prediction data.
Further, based on the above embodiment, the parameter updating unit may include:
The parameter updating subunit is used for updating parameters of the long-short-period memory network model based on the current system performance state to obtain the updated target prediction model, wherein the current system performance state is system performance after the interrupt scheduling strategy is executed, and the training process of the long-short-period memory network model comprises data segmentation, model initialization, parameter optimization and model verification;
Wherein, the data segmentation refers to data segmentation according to data types;
the model initialization refers to initializing parameters of a long-term and short-term memory network model;
the parameter optimization refers to adjusting parameters of a model to minimize a loss function;
the model verification refers to determining the prediction performance of a trained long-term and short-term memory network model.
Further, based on the above embodiment, the dynamic adjustment module 200 may include:
and the dynamic adjustment unit is used for dynamically adjusting the interrupt scheduling strategy based on the system load prediction data and the current system performance state to obtain the target interrupt scheduling strategy, wherein the dynamically adjusted parameters comprise an interrupt affinity threshold, and the interrupt affinity comprises interrupt priority scheduling to a memory.
Further, the interrupt scheduling apparatus may further include:
the feature extraction module is used for extracting key feature data from the historical system performance data by using principal component analysis, wherein the key feature data includes the number of interrupts, memory utilization, the number of devices, terminal frequency and central processing unit load;
the scale unification module is used for performing scale unification processing on the key characteristic data to obtain scale unification performance data;
Correspondingly, the system load prediction data determining module may include:
and the system load prediction data determining unit is used for predicting based on the current system performance state and the performance data with unified scale to determine system load prediction data.
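The scale-unification step can be illustrated as follows: the key features named above (number of interrupts, memory utilization, number of devices, terminal frequency, central processing unit load) live on very different numeric scales, so each feature column is min-max normalized before being fed to the predictor. The sample values are invented, and the principal component analysis step is assumed to have already selected these columns.

```python
# Sketch of scale unification by column-wise min-max normalization.
# Constant columns (zero range) are mapped to 0 instead of dividing by zero.

def scale_unify(rows):
    """Min-max normalize each feature column across the sampled rows."""
    cols = list(zip(*rows))
    spans = [(min(c), (max(c) - min(c)) or 1.0) for c in cols]
    return [
        [(v - lo) / span for v, (lo, span) in zip(row, spans)]
        for row in rows
    ]

# Each row: [interrupt count, memory util %, device count, irq freq Hz, cpu load %]
samples = [
    [12000, 35.0, 8, 1500.0, 40.0],
    [48000, 80.0, 8, 6200.0, 95.0],
    [30000, 55.0, 8, 3800.0, 60.0],
]
unified = scale_unify(samples)
```

After unification every feature lies in [0, 1], so no single large-magnitude feature (such as raw interrupt counts) dominates the prediction model.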
Further, based on any of the above embodiments, the interrupt scheduling apparatus may further include:
The system performance feedback data acquisition module is used for acquiring system performance feedback data after the interruption is processed based on the target interruption scheduling strategy, wherein the system performance feedback data comprises at least one of central processing unit utilization rate, memory access delay, input and output operation frequency, network flow, waiting time, resource utilization rate and interruption response time;
The target performance parameter determining module is used for analyzing the system performance feedback data and determining the target performance parameter corresponding to the current system;
the comparison module is used for comparing the target performance parameter with a performance parameter threshold;
a no-processing module, configured to perform no processing when it is determined that the target performance parameter is less than the performance parameter threshold;
the prompt information sending module is used for determining to send prompt information when the target performance parameter is determined to be greater than or equal to the performance parameter threshold value, so that the client adjusts the interrupt scheduling strategy based on the prompt information;
the system load feedback data determining module is used for determining, in the system performance feedback data, the system load feedback data corresponding to the system load prediction data;
The difference value determining module is used for comparing the system load feedback data with the system load prediction data to determine a system load prediction difference value;
and the adjusting module is used for adjusting the corresponding system load prediction model based on the system load prediction difference value to obtain an adjusted system load prediction model.
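The feedback loop described above can be sketched as follows: the measured target performance parameter is compared against a threshold to decide whether prompt information should be sent, and the measured load is compared against the predicted load to produce a system load prediction difference that drives a model adjustment. The threshold value and the damped proportional correction are illustrative assumptions.

```python
# Illustrative feedback evaluation: threshold check for the prompt, and
# prediction-vs-feedback comparison for the model correction.

PERF_THRESHOLD = 0.8  # e.g. CPU utilization above which a prompt is sent (illustrative)

def evaluate_feedback(target_perf, predicted_load, measured_load):
    prompt = target_perf >= PERF_THRESHOLD   # send prompt information to the client?
    diff = measured_load - predicted_load    # system load prediction difference
    correction = 0.5 * diff                  # damped adjustment applied to the model
    return prompt, diff, correction

# Overloaded case: performance over threshold, load under-predicted.
prompt, diff, corr = evaluate_feedback(
    target_perf=0.85, predicted_load=0.60, measured_load=0.72)

# Healthy case: performance below threshold, no prompt needed.
prompt2, diff2, corr2 = evaluate_feedback(
    target_perf=0.55, predicted_load=0.60, measured_load=0.55)
```

The damping factor keeps the prediction model from oscillating when individual feedback samples are noisy; any real deployment would tune it against observed prediction error.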
It should be noted that, the order of the modules and units in the interrupt scheduling apparatus may be changed without affecting the logic.
For the features of this embodiment, reference may be made to the related description of the embodiment corresponding to FIG. 5, which is not repeated here.
The interrupt scheduling device provided by the embodiment of the invention may include a data acquisition module 100 for acquiring system load prediction data corresponding to a non-uniform memory access system; a dynamic adjustment module 200 for dynamically adjusting an interrupt scheduling policy based on the system load prediction data to obtain a target interrupt scheduling policy; and an interrupt scheduling module 300 for determining a target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt based on that device. Compared with the traditional static scheduling policy, the embodiment dynamically adjusts the interrupt scheduling policy based on the system load prediction data to obtain the target interrupt scheduling policy, so that interrupts are scheduled according to the dynamically adjusted policy. This makes interrupt scheduling intelligent and dynamic, and improves the accuracy of interrupt scheduling.
The following describes an interrupt scheduling device provided in the embodiment of the present invention, and the interrupt scheduling device described below and the interrupt scheduling method described above may be referred to correspondingly.
FIG. 6 is a schematic structural diagram of an interrupt scheduling device according to an embodiment of the present invention. As shown in FIG. 6, the interrupt scheduling device includes a memory 60 for storing a computer program;
a processor 61 for implementing the steps of the interrupt scheduling method according to the above embodiment when executing a computer program.
The interrupt scheduling device provided in this embodiment may include, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like.
Processor 61 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 61 may be implemented in at least one hardware form of a digital signal processor (Digital Signal Processor, DSP), a field-programmable gate array (Field-Programmable Gate Array, FPGA), or a programmable logic array (Programmable Logic Array, PLA). The processor 61 may also include a main processor and a coprocessor: the main processor, also referred to as a central processing unit (Central Processing Unit, CPU), processes data in the awake state; the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 61 may be integrated with a graphics processor (Graphics Processing Unit, GPU), which is responsible for rendering the content to be displayed on the display screen. In some embodiments, the processor 61 may further include an artificial intelligence (Artificial Intelligence, AI) processor for handling computing operations related to machine learning.
Memory 60 may include one or more computer-readable storage media, which may be non-transitory. Memory 60 may also include high-speed random access memory and non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 60 is at least used for storing a computer program 601 which, when loaded and executed by the processor 61, implements the relevant steps of the interrupt scheduling method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 60 may further include an operating system 602 and data 603, and the storage may be transient or persistent. The operating system 602 may include Windows, Unix, Linux, and the like. The data 603 may include, but is not limited to, data required for interrupt scheduling.
In some embodiments, the interrupt schedule device may further include a display 62, an input-output interface 63, a communication interface 64, a power supply 65, and a communication bus 66.
Those skilled in the art will appreciate that the structure shown in fig. 6 does not constitute a limitation of the interrupt scheduling apparatus and may include more or fewer components than illustrated.
It will be appreciated that, if the interrupt scheduling method in the above embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, which performs all or part of the steps of the methods according to the embodiments of the present invention. The storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrically erasable programmable ROM, a register, a hard disk, a removable magnetic disk, a CD-ROM, a magnetic disk, or an optical disk.
Based on this, the embodiment of the invention further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the interrupt scheduling method as described above.
Based on this, an embodiment of the present invention also provides a computer program product, including a computer program/instruction, which when executed by a processor implements the steps of the interrupt scheduling method described above.
The interrupt scheduling device provided by the embodiment of the present invention has been described in detail above. Each embodiment in this description is described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be cross-referenced. Since the device disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and relevant details can be found in the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The method, the device, the equipment and the computer readable storage medium for interrupt scheduling provided by the invention are described in detail. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention and these modifications and adaptations are intended to be within the scope of the invention as defined in the following claims.

Claims (13)

Translated from Chinese

1. An interrupt scheduling method, characterized by comprising:
obtaining a current system performance state and historical system performance data corresponding to a non-uniform memory access system;
performing prediction based on the current system performance state and the historical system performance data to determine system load prediction data, comprising: updating parameters of a prediction model based on the current system performance state to obtain an updated target prediction model, the prediction model being constructed based on a machine learning algorithm; and performing prediction with the target prediction model according to the historical system performance data to determine the system load prediction data;
dynamically adjusting an interrupt scheduling policy based on the system load prediction data and the current system performance state to obtain a target interrupt scheduling policy, wherein the dynamically adjusted parameters comprise an interrupt affinity threshold, interrupt affinity comprises preferentially scheduling interrupts to memory, and the target interrupt scheduling policy comprises at least one of a correspondence between interrupts and interrupt processing devices, the interrupt processing device closest to an interrupt, and the processing capability of the interrupt processing device corresponding to an interrupt; and
determining a target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy, and processing each interrupt based on the target interrupt processing device.

2. The interrupt scheduling method according to claim 1, wherein before determining the target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt based on the target interrupt processing device, the method further comprises:
prioritizing the interrupts according to interrupt type and historical interrupt processing time to obtain a priority of each interrupt; and
adjusting the target interrupt scheduling policy based on the priority of each interrupt to obtain an adjusted target interrupt scheduling policy;
correspondingly, determining the target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt based on the target interrupt processing device comprises:
determining the target interrupt processing device corresponding to each interrupt based on the adjusted target interrupt scheduling policy, and processing each interrupt based on the target interrupt processing device.

3. The interrupt scheduling method according to claim 2, wherein prioritizing the interrupts according to interrupt type and historical interrupt processing time to obtain the priority of each interrupt comprises:
when the interrupt types are different, prioritizing the interrupts based on the interrupt type to obtain the priority of each interrupt; and
when the interrupt types are the same, prioritizing the interrupts based on the historical interrupt processing time to obtain the priority of each interrupt, wherein a longer historical interrupt processing time corresponds to a higher priority level.

4. The interrupt scheduling method according to claim 1, wherein before dynamically adjusting the interrupt scheduling policy based on the system load prediction data to obtain the target interrupt scheduling policy, the method further comprises:
predicting a target interrupt and a target interrupt time point based on historical interrupt data; and
loading, based on the interrupt time point and before the target interrupt occurs, data corresponding to the target interrupt into a cache to obtain a target cache.

5. The interrupt scheduling method according to claim 4, wherein after loading the data corresponding to the target interrupt into the cache before the target interrupt occurs to obtain the target cache, the method further comprises:
when it is determined that the data size in the target cache is greater than a set data threshold, judging whether the interrupt processing frequency corresponding to each piece of data in the target cache is greater than a set processing frequency threshold;
when the interrupt processing frequency is not greater than the set processing frequency threshold, determining to clear that piece of data from the target cache; and
when the interrupt processing frequency is greater than the set processing frequency threshold, determining not to process that piece of data.

6. The interrupt scheduling method according to claim 1, further comprising, during interrupt scheduling:
filtering the system load prediction data based on an access control list to obtain target collected data; and
encrypting the target collected data and the interrupt scheduling result by using a data encryption mechanism to obtain target encrypted data.

7. The interrupt scheduling method according to claim 1, wherein updating the parameters of the prediction model based on the current system performance state to obtain the updated target prediction model comprises:
updating parameters of a long short-term memory network model based on the current system performance state to obtain the updated target prediction model, wherein the current system performance state is the system performance after interrupts are executed based on the interrupt scheduling policy, and the training process of the long short-term memory network model comprises data segmentation, model initialization, parameter optimization and model validation;
wherein the data segmentation refers to segmenting the data according to data type;
the model initialization refers to initializing the parameters of the long short-term memory network model;
the parameter optimization refers to adjusting the parameters of the model to minimize a loss function; and
the model validation refers to determining the prediction performance of the trained long short-term memory network model.

8. The interrupt scheduling method according to claim 1, wherein before performing prediction based on the current system performance state and the historical system performance data to determine the system load prediction data, the method further comprises:
extracting key feature data from the historical system performance data by using principal component analysis, wherein the key feature data comprises the number of interrupts, memory utilization, the number of devices, terminal frequency and central processing unit load; and
performing scale unification on the key feature data to obtain scale-unified performance data;
correspondingly, performing prediction based on the current system performance state and the historical system performance data to determine the system load prediction data comprises:
performing prediction based on the current system performance state and the scale-unified performance data to determine the system load prediction data.

9. The interrupt scheduling method according to claim 1, wherein after determining the target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt based on the target interrupt processing device, the method further comprises:
obtaining system performance feedback data after interrupts are processed based on the target interrupt scheduling policy, wherein the system performance feedback data comprises at least one of central processing unit utilization, memory access latency, input/output operation frequency, network traffic, waiting time, resource utilization and interrupt response time;
analyzing the system performance feedback data to determine a target performance parameter corresponding to the current system;
comparing the target performance parameter with a performance parameter threshold;
when it is determined that the target performance parameter is less than the performance parameter threshold, performing no processing;
when it is determined that the target performance parameter is greater than or equal to the performance parameter threshold, determining to send prompt information, so that a client adjusts the interrupt scheduling policy based on the prompt information;
determining, in the system performance feedback data, system load feedback data corresponding to the system load prediction data;
comparing the system load feedback data with the system load prediction data to determine a system load prediction difference; and
adjusting the corresponding system load prediction model based on the system load prediction difference to obtain an adjusted system load prediction model.

10. An interrupt scheduling apparatus, characterized by comprising:
a data acquisition module, configured to acquire system load prediction data corresponding to a non-uniform memory access system;
a performance data acquisition module, configured to acquire a current system performance state and historical system performance data corresponding to the non-uniform memory access system;
a system load prediction data determining module, configured to perform prediction based on the current system performance state and the historical system performance data to determine the system load prediction data, the system load prediction data determining module comprising: a parameter updating unit, configured to update parameters of a prediction model based on the current system performance state to obtain an updated target prediction model, the prediction model being constructed based on a machine learning algorithm; and a target prediction model prediction unit, configured to perform prediction with the target prediction model according to the historical system performance data to determine the system load prediction data;
a dynamic adjustment module, configured to dynamically adjust an interrupt scheduling policy based on the system load prediction data and the current system performance state to obtain a target interrupt scheduling policy, wherein the dynamically adjusted parameters comprise an interrupt affinity threshold, interrupt affinity comprises preferentially scheduling interrupts to memory, and the target interrupt scheduling policy comprises at least one of a correspondence between interrupts and interrupt processing devices, the interrupt processing device closest to an interrupt, and the processing capability of the interrupt processing device corresponding to an interrupt; and
an interrupt scheduling module, configured to determine a target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy, and process each interrupt based on the target interrupt processing device.

11. An interrupt scheduling device, characterized by comprising:
a memory, configured to store a computer program; and
a processor, configured to execute the computer program to implement the steps of the interrupt scheduling method according to any one of claims 1 to 9.

12. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the interrupt scheduling method according to any one of claims 1 to 9.

13. A computer program product, comprising a computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the steps of the interrupt scheduling method according to any one of claims 1 to 9.
CN202412000229.1A | Priority date: 2024-12-31 | Filing date: 2024-12-31 | Interrupt scheduling method, device, equipment and computer readable storage medium | Active | Granted as CN119415237B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202412000229.1A (CN119415237B (en)) | 2024-12-31 | 2024-12-31 | Interrupt scheduling method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202412000229.1A (CN119415237B (en)) | 2024-12-31 | 2024-12-31 | Interrupt scheduling method, device, equipment and computer readable storage medium

Publications (2)

Publication Number | Publication Date
CN119415237A (en) | 2025-02-11
CN119415237B (grant) | 2025-07-11

Family

ID=94478330

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202412000229.1A (Active; granted as CN119415237B (en)) | Interrupt scheduling method, device, equipment and computer readable storage medium | 2024-12-31 | 2024-12-31

Country Status (1)

Country | Link
CN (1) | CN119415237B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN119781946A (en)* | 2025-03-12 | 2025-04-08 | 北京和峰科技有限公司 | Interrupt method, interrupt device, CPU architecture, computer program and storage medium
CN120179369B (en)* | 2025-05-21 | 2025-08-26 | 武汉凌久微电子有限公司 | A dynamic adaptive interrupt processing method and interrupt processing system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113867918A (en)* | 2021-09-30 | 2021-12-31 | 北京紫光展锐通信技术有限公司 | Interrupt balancing method and device, electronic equipment and computer readable storage medium
CN118689613A (en)* | 2024-08-26 | 2024-09-24 | 珠海市阿普顿电气有限公司 | ARM9 platform management method and system based on Linux system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US12056521B2 (en)* | 2021-09-03 | 2024-08-06 | Microsoft Technology Licensing, Llc | Machine-learning-based replenishment of interruptible workloads in cloud environment
CN115543577B (en)* | 2022-08-08 | 2023-08-04 | 广东技术师范大学 | Covariate-based Kubernetes resource scheduling optimization method, storage medium and equipment
CN118446348A (en)* | 2023-02-06 | 2024-08-06 | 中兴通讯股份有限公司 | Energy efficiency optimization management method and device for data center
CN119201435B (en)* | 2024-09-03 | 2025-07-04 | 苏州吉天信息技术有限公司 | A dynamic load balancing method and system for AI model intelligent machine


Also Published As

Publication number | Publication date
CN119415237A (en) | 2025-02-11

Similar Documents

Publication | Title
CN119415237B (en) | Interrupt scheduling method, device, equipment and computer readable storage medium
US20160004567A1 | Scheduling applications in a clustered computer system
US10303128B2 | System and method for control and/or analytics of an industrial process
CN110990138A | Resource scheduling method, device, server and storage medium
US12180475B2 | Oversubscription scheduling
US20230127112A1 | Sub-idle thread priority class
CN118034892B | A method for implementing multi-core concurrent load on cluster file system client
EP3007407A1 | Configuration method, equipment, system and computer readable medium for determining a new configuration of calculation resources
CN119883578A | Task scheduling method and system, electronic equipment and storage medium
Qiao et al. | Conserve: Harvesting gpus for low-latency and high-throughput large language model serving
Swain et al. | Efficient straggler task management in cloud environment using stochastic gradient descent with momentum learning-driven neural networks
CN118034938A | Job scheduling method, intelligent computing cloud operating system and computing platform
CN118227289A | Task scheduling method, device, electronic equipment, storage medium and program product
Chen et al. | Joint Optimization of Request Scheduling and Container Prewarming in Serverless Computing
EP3599547B1 | Elastic storage volume type selection and optimization engine for public cloud environments
CN119759544B | Resource scheduling method, device and computer equipment for power range simulation system
US20250310202A1 | Optimizing Resource Scaling
US20250284612A1 | Artificial intelligence governed processor
CN120010790B | Storage device and control method thereof
KR102842423B1 | Electronic apparatus and method for allocating resources in hybrid system of cloud and on premise
CN120407214B | Resource allocation method, device, equipment and medium
US20250284549A1 | Workload management via adaptive request rate limiting
US20250300911A1 | Optimizing Processor Unit Frequency
CN120610811A | Optimization method, device, equipment and storage medium of server resource configuration information
CN119883597A | Resource pool control method and device, electronic equipment and storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
