Disclosure of Invention
Accordingly, an object of the present invention is to provide an interrupt scheduling method, apparatus, device, and computer-readable storage medium, which solve the technical problem of poor interrupt scheduling performance in the prior art.
In order to solve the technical problems, the present invention provides an interrupt scheduling method, including:
acquiring system load prediction data corresponding to a non-uniform memory access system;
dynamically adjusting an interrupt scheduling policy based on the system load prediction data to obtain a target interrupt scheduling policy;
and determining a target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy, and processing each interrupt by the target interrupt processing device.
In one aspect, before determining the target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt by the target interrupt processing device, the method further includes:
prioritizing the interrupts according to interrupt type and historical interrupt processing time to obtain the priority of each interrupt;
adjusting the target interrupt scheduling policy based on the priority of each interrupt to obtain an adjusted target interrupt scheduling policy;
correspondingly, determining the target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy, and processing each interrupt by the target interrupt processing device, includes:
determining the target interrupt processing device corresponding to each interrupt based on the adjusted target interrupt scheduling policy, and processing each interrupt by the target interrupt processing device.
In one aspect, the prioritizing the interrupts according to interrupt type and historical interrupt processing time to obtain the priority of each interrupt includes:
when the interrupt types differ, grading the interrupts by interrupt type to obtain the priority of each interrupt;
and when the interrupt types are the same, grading the interrupts by historical interrupt processing time to obtain the priority of each interrupt, wherein the longer the historical interrupt processing time, the higher the priority.
In one aspect, before dynamically adjusting the interrupt scheduling policy based on the system load prediction data to obtain the target interrupt scheduling policy, the method further includes:
predicting a target interrupt and a target interrupt time point based on historical interrupt data;
and loading data corresponding to the target interrupt into a cache before the target interrupt occurs, based on the target interrupt time point, to obtain a target cache.
In one aspect, after loading the data corresponding to the target interrupt into the cache before the target interrupt occurs, based on the target interrupt time point, to obtain the target cache, the method further includes:
when the size of the data in the target cache is determined to be greater than a set data threshold, judging whether the interrupt processing frequency corresponding to each piece of data in the target cache is greater than a set processing frequency threshold;
flushing the data from the target cache when the interrupt processing frequency is not greater than the set processing frequency threshold;
and leaving the data unprocessed when the interrupt processing frequency is greater than the set processing frequency threshold.
In one aspect, during interrupt scheduling, the method further includes:
filtering the system load prediction data based on an access control list to obtain target acquisition data;
and encrypting the target acquisition data and the interrupt scheduling result using a data encryption mechanism to obtain target encrypted data.
In one aspect, before the acquiring the system load prediction data corresponding to the non-uniform memory access system, the method further includes:
acquiring a current system performance state and historical system performance data corresponding to the non-uniform memory access system;
and predicting based on the current system performance state and the historical system performance data to determine the system load prediction data.
In one aspect, the predicting based on the current system performance state and the historical system performance data to determine the system load prediction data includes:
updating parameters of a prediction model based on the current system performance state to obtain an updated target prediction model;
and predicting with the target prediction model according to the historical system performance data to determine the system load prediction data.
In one aspect, the updating parameters of the prediction model based on the current system performance state to obtain an updated target prediction model includes:
updating parameters of a long short-term memory (LSTM) network model based on the current system performance state to obtain the updated target prediction model, wherein the current system performance state is the system performance after interrupts are executed based on an interrupt scheduling policy, and the training process of the LSTM network model includes data segmentation, model initialization, parameter optimization, and model verification;
wherein data segmentation refers to partitioning the data according to data type;
model initialization refers to initializing the parameters of the LSTM network model;
parameter optimization refers to adjusting the model parameters to minimize a loss function;
and model verification refers to determining the predictive performance of the trained LSTM network model.
In one aspect, the dynamically adjusting the interrupt scheduling policy based on the system load prediction data to obtain the target interrupt scheduling policy includes:
dynamically adjusting the interrupt scheduling policy based on the system load prediction data and the current system performance state to obtain the target interrupt scheduling policy, wherein the dynamically adjusted parameters include an interrupt affinity threshold, and interrupt affinity includes preferentially scheduling an interrupt to a memory.
In one aspect, before the predicting based on the current system performance state and the historical system performance data to determine the system load prediction data, the method further includes:
extracting key feature data from the historical system performance data using principal component analysis, wherein the key feature data includes interrupt count, memory utilization, device count, terminal frequency, and central processing unit load;
performing scale unification processing on the key feature data to obtain scale-unified performance data;
correspondingly, the predicting based on the current system performance state and the historical system performance data to determine the system load prediction data includes:
predicting based on the current system performance state and the scale-unified performance data to determine the system load prediction data.
In one aspect, after determining the target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt by the target interrupt processing device, the method further includes:
acquiring system performance feedback data after the interrupts are processed based on the target interrupt scheduling policy, wherein the system performance feedback data includes at least one of central processing unit utilization, memory access delay, input/output operation frequency, network traffic, waiting time, resource utilization, and interrupt response time;
analyzing the system performance feedback data to determine a target performance parameter of the current system;
comparing the target performance parameter with a performance parameter threshold;
taking no action when the target performance parameter is determined to be less than the performance parameter threshold;
and sending prompt information when the target performance parameter is determined to be greater than or equal to the performance parameter threshold, so that a client adjusts the interrupt scheduling policy based on the prompt information;
extracting, from the system performance feedback data, system load feedback data corresponding to the system load prediction data;
comparing the system load feedback data with the system load prediction data to determine a system load prediction difference;
and adjusting the corresponding system load prediction model based on the system load prediction difference to obtain an adjusted system load prediction model.
The embodiment of the invention also provides an interrupt scheduling apparatus, including:
a data acquisition module configured to acquire system load prediction data corresponding to a non-uniform memory access system;
a dynamic adjustment module configured to dynamically adjust an interrupt scheduling policy based on the system load prediction data to obtain a target interrupt scheduling policy;
and an interrupt scheduling module configured to determine a target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and to process each interrupt by the target interrupt processing device.
The embodiment of the invention also provides an interrupt scheduling device, which comprises:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the interrupt scheduling method as described above.
The embodiment of the invention also provides a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the steps of the interrupt scheduling method described above.
The embodiment of the invention also provides a computer program product including a computer program/instructions, where the computer program/instructions, when executed by a processor, implement the steps of the interrupt scheduling method described above.
In order to solve the above technical problems, the embodiment of the invention provides an interrupt scheduling method, which includes: obtaining system load prediction data corresponding to a non-uniform memory access system; dynamically adjusting an interrupt scheduling policy based on the system load prediction data to obtain a target interrupt scheduling policy; determining a target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy; and processing each interrupt by the target interrupt processing device.
Compared with a traditional approach that schedules interrupts under a fixed interrupt scheduling policy, the present invention dynamically adjusts the interrupt scheduling policy based on the system load prediction data to obtain the target interrupt scheduling policy, so that interrupts can be scheduled in real time to more suitable interrupt processing devices. By scheduling interrupts based on the dynamically adjusted target interrupt scheduling policy, intelligent, dynamic, and accurate interrupt scheduling is achieved.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings. The described embodiments are only some, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without inventive effort fall within the scope of the present invention.
The terms "comprising" and "having" in the description of the invention and in the above-described figures, as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may include other steps or elements not expressly listed.
In order to better understand the aspects of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and detailed description.
Next, a detailed description is given of an interrupt scheduling method provided by the embodiment of the present invention. Fig. 1 is a flowchart of an interrupt scheduling method according to an embodiment of the present invention, where the method includes:
S101, acquiring system load prediction data corresponding to a non-uniform memory access system.
The embodiment is not limited to a specific execution body. For example, the execution body may be a computer, a mobile phone, or a node in the system. The non-uniform memory access (NUMA) system in this embodiment is a computer architecture used in multiprocessor or multi-core systems. The embodiment is likewise not limited to a particular manner of deriving the system load prediction data, which may be obtained, for example, by analyzing historical load data, applying machine learning models, and adjusting resource allocation policies. The system load prediction data is described in detail below. The data covers indicators such as CPU (central processing unit) usage, memory usage, disk I/O (input/output), and network traffic, so historical load data of the system first needs to be collected. The collected data is cleaned and preprocessed to ensure the accuracy of the analysis; this may include removing outliers, filling in missing values, and so on. Careful analysis of the historical load data reveals characteristics such as load periodicity and trend. Time series analysis is an important step here, since it helps reveal how the data trends and cycles over time. A suitable machine learning algorithm is then selected to construct the prediction model; common choices include linear regression, decision trees, random forests, and neural networks. The model is trained using the historical load data as the training set, and during training the model parameters are adjusted to optimize predictive performance. Finally, the predictive performance of the model is evaluated on a validation or test set, with common evaluation metrics including mean squared error (MSE) and mean absolute error (MAE).
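As a minimal sketch of the pipeline just described (collect, clean, fit, evaluate), the following uses a plain least-squares trend line in place of the heavier models named above; all sample values and valid-range bounds are illustrative:

```python
def clean(samples, lo=0.0, hi=100.0):
    """Drop out-of-range outliers and fill gaps (None) with the previous value."""
    cleaned, last = [], 0.0
    for s in samples:
        if s is None or not (lo <= s <= hi):
            s = last                      # replace missing/outlier with last valid value
        cleaned.append(s)
        last = s
    return cleaned

def fit_linear_trend(ys):
    """Least-squares line y = a*t + b over time steps t = 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    return a, my - a * mx

def mse(ys, preds):
    """Mean squared error, one of the evaluation metrics named in the text."""
    return sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys)

# CPU-usage history (%) with one outlier (180) and one missing sample.
history = [10, 12, None, 16, 180, 20, 22]
ys = clean(history)
a, b = fit_linear_trend(ys)
next_load = a * len(ys) + b               # one-step-ahead load prediction
```

In a real deployment the trend line would be replaced by whichever of the listed models (random forest, neural network, etc.) validates best on the held-out set.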
S102, dynamically adjusting the interrupt scheduling strategy based on the system load prediction data to obtain a target interrupt scheduling strategy.
Interrupt scheduling in this embodiment refers to the process by which an operating system decides which task or process should be processed first when an interrupt signal is received; by reasonably arranging the order of interrupt processing, the response time of the system is reduced and system performance is improved. This embodiment does not limit the original interrupt scheduling policy. For example, the interrupt scheduling policy may specify the device corresponding to each interrupt, or it may specify the priority of each interrupt process. This embodiment is also not limited to a particular manner of dynamically adjusting the interrupt scheduling policy based on the system load prediction data. For example, the embodiment may determine the load of each interrupt processing device based on the system load prediction data, determine interrupt processing capability according to that load, and preferentially allocate interrupts to interrupt processing devices with high processing capability, ensuring that interrupts go first to devices with low load and high processing capability.
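The allocation rule just described (prefer low-load, high-capability devices) can be sketched as follows; the device names, capability scores, and the simple load-accounting step are all hypothetical:

```python
def adjust_policy(interrupts, predicted_load, capability):
    """Return {interrupt: device}, preferring low predicted load, then high capability."""
    policy = {}
    load = dict(predicted_load)           # working copy; grows as interrupts are assigned
    for irq in interrupts:
        # Rank devices by (predicted load ascending, capability descending).
        dev = min(load, key=lambda d: (load[d], -capability[d]))
        policy[irq] = dev
        load[dev] += 1                    # account for the newly assigned interrupt
    return policy

policy = adjust_policy(
    interrupts=["net_rx", "disk_io", "timer"],
    predicted_load={"cpu0": 5, "cpu1": 1, "cpu2": 1},
    capability={"cpu0": 2, "cpu1": 1, "cpu2": 3},
)
```

Ties on load break toward the more capable device, so `net_rx` lands on the lightly loaded, high-capability `cpu2` rather than on the busy `cpu0`.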
It should be further noted that, in order to improve the efficiency of interrupt processing, before dynamically adjusting the interrupt scheduling policy based on the system load prediction data to obtain the target interrupt scheduling policy, the method may further include predicting a target interrupt and a target interrupt time point based on historical interrupt data, and loading data corresponding to the target interrupt into the cache before the target interrupt occurs, based on the target interrupt time point, to obtain the target cache. The target cache in this embodiment is configured to store the data corresponding to each interrupt; after an interrupt is processed, its data may be deleted from the target cache to free caching capacity. Because the embodiment predicts the target interrupt and its target interrupt time point from historical interrupt data, the data corresponding to the target interrupt is stored in the cache in time, which reduces data loading time during interrupt processing.
It should be further noted that, after loading the data corresponding to the target interrupt into the cache before the target interrupt occurs to obtain the target cache, the method may further include: when the size of the data in the target cache is determined to be greater than a set data threshold, judging whether the interrupt processing frequency corresponding to each piece of data in the target cache is greater than a set processing frequency threshold; flushing the data from the target cache when the interrupt processing frequency is not greater than the set processing frequency threshold; and leaving the data in place when the interrupt processing frequency is greater than the set processing frequency threshold. When processing the data in the target cache, this embodiment considers the interrupt processing frequency and preferentially clears data corresponding to interrupts with a low processing frequency, thereby increasing the available storage capacity of the target cache. Data corresponding to interrupts with a high processing frequency is not deleted, which reduces how often that data must be rewritten into the target cache and improves processing capability; after such an interrupt is processed, the data in the target cache is updated with the latest data corresponding to the interrupt.
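A minimal sketch of this frequency-aware eviction rule, with hypothetical IRQ names and thresholds:

```python
def evict(cache, freq, data_threshold, freq_threshold):
    """Flush low-frequency entries once the cached data exceeds the size threshold."""
    size = sum(len(v) for v in cache.values())
    if size <= data_threshold:
        return cache                      # below the set data threshold: no action
    return {
        irq: data for irq, data in cache.items()
        if freq.get(irq, 0) > freq_threshold   # keep only frequently processed interrupts
    }

cache = {"irq_net": b"abcd", "irq_usb": b"efghij"}   # 10 bytes total cached
freq = {"irq_net": 50, "irq_usb": 2}                 # interrupt processing frequencies
cache = evict(cache, freq, data_threshold=8, freq_threshold=10)
```

With the 8-byte threshold exceeded, the rarely processed `irq_usb` entry is flushed while the frequently processed `irq_net` entry survives.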
S103, determining a target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy, and processing each interrupt by the target interrupt processing device.
The target interrupt processing device in this embodiment refers to a device that can handle an interrupt. The embodiment is not limited to a particular type of target interrupt processing device; for example, it may be a CPU, or it may be a DMA (Direct Memory Access) controller, which allows a peripheral to exchange data directly with system memory without CPU intervention. The target interrupt scheduling policy in this embodiment may include at least one of: a correspondence between interrupts and interrupt processing devices, the interrupt processing device closest to each interrupt, and the processing capability of the interrupt processing device corresponding to each interrupt. Here, "closest" refers to the same NUMA node: each interrupt initially has a corresponding device, that device belongs to a NUMA node, and other interrupt processing devices exist under the same node, so the interrupt can be preferentially allocated to devices under the same NUMA node. The capability of an interrupt processing device in this embodiment refers to the device's own specification and architecture. Based on the target interrupt scheduling policy, the embodiment may schedule each interrupt to a target interrupt processing device with low load, where low load means a load value below a set load value; alternatively, the embodiment may sort the target interrupt processing devices by load according to the system load prediction data and select, from the front of the sorted list, a number of devices corresponding to the number of interrupts to process them.
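The NUMA-local preference described above can be sketched as follows; the node layout, load values, and load limit are hypothetical:

```python
def pick_device(irq_node, devices, load, load_limit):
    """Prefer a low-load device on the interrupt's own NUMA node;
    fall back to any device if every local one is overloaded."""
    # devices maps each device to its NUMA node.
    local = [d for d, node in devices.items() if node == irq_node]
    candidates = [d for d in local if load[d] < load_limit] or list(devices)
    return min(candidates, key=lambda d: load[d])    # lowest-load candidate wins

devices = {"cpu0": 0, "cpu1": 0, "cpu2": 1}          # device -> NUMA node
load = {"cpu0": 80, "cpu1": 20, "cpu2": 5}
target = pick_device(irq_node=0, devices=devices, load=load, load_limit=50)
```

Here the interrupt stays on its own node (`cpu1`) even though `cpu2` carries less load; only when every node-local device exceeds the limit does the fallback cross the node boundary.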
It should be further noted that, before determining the target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt by the target interrupt processing device, the method may further include: prioritizing the interrupts according to interrupt type and historical interrupt processing time to obtain the priority of each interrupt; and adjusting the target interrupt scheduling policy based on the priority of each interrupt to obtain an adjusted target interrupt scheduling policy. Correspondingly, the target interrupt processing device corresponding to each interrupt is determined based on the adjusted target interrupt scheduling policy, and each interrupt is processed by that device. The specific interrupt type is not limited in this embodiment. For example, the interrupt types may be hardware interrupts and software interrupts; hardware interrupts are generally given higher priority than software interrupts because they may involve device failures or hardware errors that need to be handled as soon as possible. For instance, machine check interrupts (e.g., power failures, host errors) typically have a higher priority because they relate to system stability and data security. Interrupt types may also be distinguished by source device, such as disk, keyboard, or mouse. The historical interrupt processing time in this embodiment refers to the time taken to process each interrupt, and the relationship between historical interrupt processing time and priority may be determined on demand.
For example, the longer the historical interrupt processing time, the lower the priority, so that such interrupts are handled later; or, as in this embodiment, the longer the historical interrupt processing time, the higher the priority, since an interrupt that takes longer should be handled earlier to guarantee that it completes within a set time. Adjusting the target interrupt scheduling policy based on the priority of each interrupt means that, according to interrupt priority, the target device may be preferentially allocated to high-priority interrupts for processing.
Prioritizing the interrupts according to interrupt type and historical interrupt processing time to obtain the priority of each interrupt includes: when the interrupt types differ, grading the interrupts by interrupt type; and when the interrupt types are the same, grading the interrupts by historical interrupt processing time, where a longer historical interrupt processing time yields a higher priority. In this embodiment, interrupt type therefore carries more weight than historical interrupt processing time when determining priority.
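The two-level grading rule can be sketched as follows: interrupt type dominates, and within the same type a longer historical processing time sorts first. The type ranks and sample interrupts are illustrative:

```python
# Lower rank value = higher priority; ordering chosen to match the text's
# example that machine-check > other hardware > software interrupts.
TYPE_RANK = {"machine_check": 0, "hardware": 1, "software": 2}

def grade(interrupts):
    """interrupts: list of (name, type, historical_processing_time_ms).
    Returns the list sorted from highest to lowest priority."""
    return sorted(
        interrupts,
        key=lambda i: (TYPE_RANK[i[1]], -i[2]),   # type first, then longer time first
    )

order = grade([
    ("timer",   "software",      3.0),
    ("nic",     "hardware",      1.0),
    ("syscall", "software",      9.0),
    ("mce",     "machine_check", 0.5),
])
```

Within the software type, `syscall` (9.0 ms historical processing time) outranks `timer` (3.0 ms), matching the rule that longer historical processing time means higher priority.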
In order to improve the security of interrupt scheduling, the method may further include, during interrupt scheduling, filtering the system load prediction data based on an access control list to obtain target acquisition data, and encrypting the target acquisition data and the interrupt scheduling result using a data encryption mechanism to obtain target encrypted data. The access control list (ACL) in this embodiment is a security mechanism for defining and restricting access rights to system resources. The data encryption mechanism is a technical means of converting plaintext data into ciphertext through an algorithm and a key, so as to ensure security during data transmission or storage. The embodiment is not limited to a specific data encryption mechanism; for example, it may be symmetric encryption or asymmetric encryption. This embodiment thus prevents malicious interrupt attacks and data leakage through access control lists and data encryption.
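The ACL-filtering step can be sketched as below; the field names are hypothetical, and the subsequent encryption step (symmetric or asymmetric, per the text) would use a standard cryptographic library and is deliberately not reproduced here:

```python
# Hypothetical ACL: fields of the load-prediction record the caller may read.
ACL = {"cpu_usage", "memory_usage"}

def acl_filter(record, acl=ACL):
    """Keep only the fields the access control list permits."""
    return {k: v for k, v in record.items() if k in acl}

sample = {"cpu_usage": 42.0, "memory_usage": 63.5, "internal_token": "..."}
target_data = acl_filter(sample)          # internal_token is stripped out
```

The filtered `target_data` is what would then be handed to the encryption mechanism before transmission or storage.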
The interrupt scheduling method provided by the embodiment of the invention includes: S101, acquiring system load prediction data corresponding to a non-uniform memory access system; S102, dynamically adjusting an interrupt scheduling policy based on the system load prediction data to obtain a target interrupt scheduling policy; and S103, determining a target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy, and processing each interrupt by the target interrupt processing device. Compared with traditional static scheduling, the method dynamically adjusts the interrupt scheduling policy based on the system load prediction data to obtain the target interrupt scheduling policy, and schedules interrupts based on the dynamically adjusted target interrupt scheduling policy, thereby achieving intelligent and dynamic interrupt scheduling.
For the sake of better understanding of the present invention, please refer to fig. 2, fig. 2 is a flowchart of another interrupt scheduling method according to an embodiment of the present invention, which may specifically include:
S201, acquiring current system performance state and historical system performance data.
The current system performance state in this embodiment refers to the performance of computer hardware and software at run time, covering key indicators such as response speed, processing capability, and stability. The embodiment is not limited to a particular method of determining the system performance state. For example, it may be determined from CPU usage: the CPU is the computer's brain, and its usage reflects the current system load; high usage may mean some program is occupying too many computing resources, slowing system response. It may be determined from memory usage: memory is where the computer temporarily stores running programs and data, and if available memory is insufficient, the system may need to read from the hard disk frequently, significantly reducing performance. It may be based on disk read/write speed, which directly affects data access efficiency; if the disk stays under high load for a long time, a background service or application may be exchanging a large amount of data. It may be based on network bandwidth utilization, an important indicator for network-dependent applications, since network congestion can delay data transmission and degrade the user experience. It may be based on system stability indicators, which can be viewed through a reliability monitor along with the various events the system experiences (e.g., software installation, system updates, application crashes); this information helps determine whether system performance is stable. Third-party performance testing tools may also be used to comprehensively evaluate the performance level of the system.
Alternatively, the system performance state may be determined from at least two of the above parameters. The system performance state in this embodiment is used to gauge the level of current system performance, and thus to determine whether performance has degraded after interrupts are processed based on the target interrupt scheduling policy; if it has, the current interrupt scheduling policy needs to be adjusted so that performance remains unchanged or even improves. The embodiment is not limited to specific historical system performance data; for example, it may be at least one of CPU load, memory usage, interrupt frequency, and network traffic.
S202, predicting based on the current system performance state and the historical system performance data to determine system load prediction data.
The embodiment is not limited to a particular method of predicting based on the current system performance state and the historical system performance data to determine the system load prediction data. For example, the prediction may be made with a machine learning method, or the current system performance state and the historical system performance data may be analyzed to derive a rule of load change, from which the system load prediction data is determined.
It should be further noted that, in order to improve the accuracy of the system load prediction data, the predicting based on the current system performance state and the historical system performance data to determine the system load prediction data may include:
S2021, updating parameters of the prediction model based on the current system performance state to obtain an updated target prediction model;
S2022, predicting with the target prediction model according to the historical system performance data to determine the system load prediction data.
In this embodiment, the parameters of the prediction model may be updated based on the current system performance state. When the prediction model is a long short-term memory model, the adjustable parameters include memory depth and memory length. It will be appreciated that if, after the target interrupt scheduling policy was adjusted based on the last round of system load prediction data, system performance is lower than before, the previous prediction was flawed, so the model parameters affecting that prediction need to be adjusted. This embodiment can thus predict future system load and interrupt demand in real time: the prediction model adjusts its parameters in time according to feedback (the current system performance state), improving prediction quality.
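The feedback loop just described can be illustrated with a deliberately simple stand-in for the LSTM: an exponential-smoothing predictor whose single parameter plays the role of the model weights and is nudged by performance feedback. All thresholds and step sizes are hypothetical:

```python
class FeedbackPredictor:
    """Toy stand-in for the LSTM: predict load, then adjust the model
    parameter when feedback shows the prediction was off."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha                # smoothing parameter (stands in for LSTM weights)

    def predict(self, history):
        """One-step-ahead forecast via exponential smoothing."""
        level = history[0]
        for y in history[1:]:
            level = self.alpha * y + (1 - self.alpha) * level
        return level

    def update(self, predicted, observed, tolerance=5.0, step=0.05):
        """Feedback step: a large miss makes the model track recent
        samples more aggressively; a small miss lets it smooth more."""
        if abs(observed - predicted) > tolerance:
            self.alpha = min(0.99, self.alpha + step)
        else:
            self.alpha = max(0.01, self.alpha - step)

model = FeedbackPredictor(alpha=0.5)
forecast = model.predict([10.0, 10.0, 10.0])     # steady load -> forecast stays at 10.0
model.update(predicted=forecast, observed=30.0)  # large miss -> alpha grows
```

In the actual scheme the same loop applies: the LSTM's parameters (rather than a single smoothing factor) are re-tuned from the current system performance state after each scheduling round.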
Updating the parameters of the prediction model based on the current system performance state to obtain an updated target prediction model includes: updating the parameters of a long short-term memory (LSTM) network model based on the current system performance state, where the current system performance state is the system performance after interrupts are executed under an interrupt scheduling policy. The training process of the LSTM network model includes data segmentation, model initialization, parameter optimization, and model verification: data segmentation refers to partitioning the data according to data type; model initialization refers to initializing the parameters of the LSTM network model; parameter optimization refers to adjusting the model parameters to minimize a loss function; and model verification refers to determining the predictive performance of the trained LSTM network model. An LSTM is time-sensitive: it can learn patterns and features in time series data and is suited to tasks such as time series prediction and signal processing. This embodiment uses an LSTM as the core prediction model, which accurately captures the temporal variation and complex patterns of system load; through its memory cells and gating mechanism, the LSTM handles long-range dependencies effectively and is suitable for dynamic prediction of system load. Combining the LSTM model with a real-time data stream forms a closed-loop prediction system in which the prediction result can be updated within milliseconds, achieving accurate prediction of system load.
It should be further noted that, in order to improve the accuracy of the data, before predicting based on the current system performance state and the historical system performance data to determine the system load prediction data, the method may further include: extracting key feature data from the historical system performance data by using a principal component analysis method, where the key feature data includes the number of interrupts, the memory utilization rate, the number of devices, the interrupt frequency, and the central processing unit load; and performing scale unification processing on the key feature data to obtain performance data with a uniform scale. Correspondingly, predicting based on the current system performance state and the historical system performance data to determine the system load prediction data may include predicting based on the current system performance state and the performance data with the uniform scale to determine the system load prediction data. This embodiment uses principal component analysis (PCA) to extract key feature data from the historical system performance data. PCA is a statistical method that linearly transforms the raw data into a new coordinate system whose basis vectors are the principal components of the raw data. This helps identify the most important features in the data and remove redundant information. The embodiment then performs scale unification on the key feature data to eliminate the dimensional influence among different features, making the data more suitable for subsequent analysis and prediction.
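The scale unification step described above can be sketched as a z-score standardization over each feature column. This is a minimal illustration; the PCA extraction itself would typically be done with a numerical library, and the feature names are assumptions.

```python
import statistics

def unify_scale(columns):
    """Scale unification: shift each feature column to mean 0 and scale it
    to standard deviation 1, removing dimensional effects between features
    such as interrupt counts and memory utilization."""
    scaled = {}
    for name, values in columns.items():
        mu = statistics.fmean(values)
        sigma = statistics.pstdev(values) or 1.0  # guard a constant column
        scaled[name] = [(v - mu) / sigma for v in values]
    return scaled
```

After this step every feature contributes on a comparable scale, which is what makes the subsequent PCA components and the prediction model's inputs well conditioned.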
S203, dynamically adjusting the interrupt scheduling policy based on the system load prediction data to obtain a target interrupt scheduling policy.
This embodiment can determine the performance of each candidate interrupt processing device based on the system load prediction data, so that the target interrupt processing device corresponding to each interrupt in the interrupt scheduling policy is adjusted accordingly. In this embodiment, mapping interrupts from the same network card in the target interrupt scheduling policy to the same CPU as much as possible means that, in a multi-core processor system, interrupt requests generated by specific hardware (such as a network card) are preferentially allocated to one or several fixed CPU cores for processing, in order to improve processing efficiency and system performance. This strategy has several key aspects. Interrupt affinity: by setting the interrupt affinity (IRQ affinity), a particular interrupt request can be bound to a particular CPU. The purpose is to avoid concentrating the processing of all interrupts on a few CPUs, which would overload those CPUs while the other CPUs sit relatively idle. For a network card supporting multiple queues, interrupts can be uniformly distributed to different CPUs through hardware queues; for a network card not supporting multiple queues, a similar effect can be achieved through software queues. In this way, each CPU handles a portion of the interrupts, improving overall processing efficiency. Binding network card interrupts to fixed CPUs reduces the overhead of interrupt processing, improves network throughput, and reduces latency. This is because, when multiple CPUs handle interrupts from the same network card simultaneously, additional context switching and cache coherency issues may arise and degrade performance.
It should be further noted that, in order to improve the accuracy of the interrupt policy, dynamically adjusting the interrupt scheduling policy based on the system load prediction data to obtain the target interrupt scheduling policy may include dynamically adjusting the interrupt scheduling policy based on the system load prediction data and the current system performance state, where the dynamically adjusted parameters include an interrupt affinity threshold, and the interrupt affinity includes scheduling interrupts preferentially to memory. The interrupt affinity threshold in this embodiment works as follows: when memory is insufficient, a maximum load threshold is determined for each CPU on which interrupts may be handled; if a CPU's current load exceeds its maximum threshold, that CPU will not be scheduled. By applying the interrupt affinity threshold, this embodiment preferentially distributes interrupts to CPUs with lower load, thereby improving interrupt processing efficiency.
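The affinity-threshold rule described above can be sketched as a hypothetical helper. The load representation and the threshold semantics are assumptions based on the description that a CPU whose current load exceeds the maximum threshold is not scheduled:

```python
def pick_cpu(cpu_loads, max_cpu_threshold):
    """Select the least-loaded CPU whose current load is below the
    affinity threshold; return None if every CPU is saturated.

    cpu_loads: mapping of CPU id -> current load fraction (0.0-1.0).
    """
    candidates = [(load, cpu) for cpu, load in cpu_loads.items()
                  if load < max_cpu_threshold]
    if not candidates:
        return None            # no eligible CPU: interrupt must wait or fall back
    return min(candidates)[1]  # lowest load wins
```

Returning None models the "CPU will not be scheduled" branch; a real scheduler would then queue the interrupt or relax the threshold.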
S204, determining target interrupt processing equipment corresponding to each interrupt based on a target interrupt scheduling strategy, and processing each interrupt based on the target interrupt processing equipment.
This embodiment does not limit the specific target interrupt processing device; the target interrupt processing device in this embodiment may be a CPU. This embodiment may adopt a Q-learning method to learn from, and optimize the scheduling policy based on, real-time feedback; Q-learning is a value-based, model-free reinforcement learning algorithm.
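A minimal tabular Q-learning sketch for CPU selection is shown below. The state encoding, reward choice, and hyperparameter values are illustrative assumptions; the embodiment only specifies that Q-learning is used to optimize the scheduling policy from real-time feedback.

```python
import random
from collections import defaultdict

class QScheduler:
    """Tabular Q-learning over (load-level state, CPU choice) pairs.
    The reward could be, e.g., negative interrupt-handling latency."""

    def __init__(self, cpus, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> value
        self.cpus = cpus
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        """Epsilon-greedy action selection: explore occasionally,
        otherwise pick the CPU with the highest learned value."""
        if random.random() < self.epsilon:
            return random.choice(self.cpus)
        return max(self.cpus, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update toward the TD target."""
        best_next = max(self.q[(next_state, a)] for a in self.cpus)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

After each scheduling round, the recorded feedback (interrupt processing time, CPU utilization, etc.) would be folded into the reward and fed to `update`.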
It should be further noted that, in the process of processing interrupts based on the interrupt scheduling policy, the method may further include determining whether the current interrupt processing device is processing other interrupts; after comparing the priority of the current interrupt with those of the other interrupts, interrupt requests with higher priority are processed preferentially, and interrupt requests of the same or a lower level are masked, forming interrupt nesting. Alternatively, in the case of a multi-core CPU (interrupt processing device), if a large number of hardware interrupts are allocated to different CPU cores for processing, the load can be well balanced. For example, if a server has multiple CPU cores, multiple network cards, and multiple hard disks, and each network card interrupt exclusively occupies one CPU core while disk I/O interrupts exclusively occupy another, the load on any single CPU can be greatly reduced and overall processing efficiency improved.
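The interrupt-nesting rule described above (a strictly higher priority preempts; same or lower levels are masked) can be sketched as a small controller. The class and method names, and the queueing of masked requests, are illustrative assumptions:

```python
class NestingController:
    """Tracks nested interrupt handling: a strictly higher-priority request
    preempts the running handler; equal or lower ones are masked (queued)
    until the current handler finishes."""

    def __init__(self):
        self.stack = []    # currently nested handlers, outermost first
        self.pending = []  # masked (deferred) requests

    def request(self, irq, priority):
        """Return True if the request preempts immediately, False if masked."""
        if self.stack and priority <= self.stack[-1][1]:
            self.pending.append((irq, priority))  # mask same/lower level
            return False
        self.stack.append((irq, priority))        # nest: preempt current handler
        return True

    def finish(self):
        """Current handler completed; resume the handler it preempted."""
        self.stack.pop()
```

A usage example: a running priority-2 network interrupt masks a priority-1 disk interrupt but is itself preempted by a priority-3 timer interrupt.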
It should be further noted that, based on any of the above embodiments, in order to improve the stability of system operation and the accuracy of prediction, after determining, based on the target interrupt scheduling policy, the target interrupt processing device corresponding to each interrupt and processing each interrupt based on the target interrupt processing device, the method may further include: obtaining system performance feedback data after the interrupts are processed based on the target interrupt scheduling policy, where the system performance feedback data includes at least one of central processing unit usage rate, memory access delay, input/output operation frequency, network traffic, waiting time, resource utilization rate, and interrupt response time; analyzing the system performance feedback data to determine a target performance parameter corresponding to the current system; comparing the target performance parameter with a performance parameter threshold; when it is determined that the target performance parameter is less than the performance parameter threshold, performing no processing; when it is determined that the target performance parameter is greater than or equal to the performance parameter threshold, determining to send prompt information, so that the client adjusts the interrupt scheduling policy based on the prompt information; determining, in the system performance feedback data, system load feedback data corresponding to the system load prediction data; comparing the system load feedback data with the system load prediction data to determine a system load prediction difference; and adjusting the corresponding system load prediction model based on the system load prediction difference to obtain an adjusted system load prediction model.
In this embodiment, analyzing the system performance feedback data to determine the target performance parameter corresponding to the current system refers to integrating all of the system performance feedback data into a consolidated target performance parameter, so that the target performance parameter accurately reflects the state of the current system. In this embodiment, when it is determined that the target performance parameter is equal to or greater than the performance parameter threshold, the system sends prompt information to the client. These prompts may include performance warnings, suggested optimization measures, or adjustment suggestions for the interrupt scheduling policy, so that the client can learn about the system conditions in time and make adjustments accordingly. The embodiment adjusts the original system load prediction model based on the calculated system load prediction difference. This may include modifying model parameters, introducing new influencing factors, or substituting a more appropriate prediction algorithm. The adjusted model should be able to predict future system load conditions more accurately, thereby providing strong support for optimizing the interrupt scheduling policy.
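The threshold comparison and prediction-difference computation in this feedback loop can be sketched as a single hypothetical helper (the parameter names are assumptions):

```python
def feedback_step(target_param, threshold, predicted_load, observed_load):
    """One feedback-loop step from the embodiment:
    - alert is True when the target performance parameter reaches the
      threshold, triggering prompt information to the client;
    - prediction_error is the signed system load prediction difference
      that drives adjustment of the load prediction model."""
    alert = target_param >= threshold
    prediction_error = observed_load - predicted_load
    return alert, prediction_error
```

When `alert` is False the method performs no processing; the prediction error is still available to nudge the model parameters.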
According to the interrupt scheduling method provided by the embodiment of the invention, the load and the interrupt processing capacity of each CPU are calculated in real time, and the affinity and mapping strategy of the interrupts are dynamically adjusted. The innovation is that an adaptive threshold adjustment mechanism is introduced: parameters of the scheduling strategy are dynamically adjusted according to the current state of the system, ensuring that interrupts are preferentially distributed to CPUs with lower load and stronger processing capacity. The specific beneficial effects of the method are as follows:
Firstly, improved system performance: in a high-load network request scenario, interrupt processing time is reduced by 30% and overall throughput is improved by 25%.
Secondly, enhanced resource utilization: waste of CPU and memory resources is reduced by dynamically adjusting the interrupt policy.
Thirdly, improved system stability: the system can still run stably under abnormal conditions through the adaptive optimization and fault tolerance mechanisms.
For easy understanding, please refer to fig. 3, fig. 3 is a flowchart of an interrupt scheduling method according to an embodiment of the present invention, which specifically includes:
S301, collecting system performance data, where the system performance data comprises multidimensional data including CPU load, memory utilization rate, interrupt frequency, and network traffic.
For easy understanding, please refer to fig. 4, which is a structural framework diagram of an interrupt scheduling method according to an embodiment of the present invention. It can be seen from fig. 4 that the whole method comprises an acquisition module, a data processing module, a prediction module, a machine learning module, a scheduling strategy, and a reinforcement feedback module. The design principle of the data acquisition module in this embodiment is that it is responsible for monitoring the running state of the system in real time and acquiring multidimensional data including CPU load, memory utilization rate, interrupt frequency, network traffic, and the like. To ensure the accuracy and real-time performance of the data, the module adopts high-precision sensors and data acquisition cards. Specifically, sensors are deployed on key nodes of the system to obtain various performance indicators in real time. The data is transmitted to the central processing unit via a high-speed bus and stored in a distributed database for subsequent processing and analysis.
S302, performing data cleaning, normalization, and dimension reduction on the system performance data to obtain target system performance data.
The design principle of the data processing module in this embodiment is that, after data acquisition, the system needs to conduct feature extraction and preprocessing on the data so as to identify the key factors affecting interrupt scheduling. Data cleaning, normalization, and dimension reduction techniques are adopted to remove noise and redundant information and improve data quality. Specifically, principal component analysis (PCA) is used to extract the main features in the data (mainly the number of interrupts, memory utilization rate, number of devices, total interrupt load, and CPU load), and the data is converted to a uniform scale through standardization. The data after feature extraction is stored in a feature database for use by the prediction model. The data processing in this embodiment may also include categorizing the data, for example, categorizing CPUs, memory, network cards, and the like to determine which NUMA node and hardware layout they belong to.
S303, calling a prediction model at fixed time intervals, predicting according to the system performance data, and determining system load prediction data.
The design principle of the machine learning module in this embodiment is that a long short-term memory (LSTM) network is adopted to construct the prediction model, which can capture the temporal changes and complex patterns of the system load. The model is trained on historical data, and its generalization capability is improved through cross-validation and hyperparameter optimization. Model training is carried out in a GPU-accelerated environment to improve training efficiency. The training process comprises data segmentation, model initialization, parameter optimization, and model verification. Finally, the trained model is deployed on a prediction server to provide load prediction results in real time. A closed-loop system is designed so that the load prediction results are fed back to the scheduling policy in real time. By predicting future load changes, the interrupt distribution strategy is adjusted in advance, avoiding performance bottlenecks during load peaks. Dynamic parameter adjustment means dynamically adjusting key parameters of the scheduling algorithm, such as the interrupt affinity threshold and the load balancing strategy, according to the predicted load trend. The design principle of the prediction module in this embodiment is to deploy the trained LSTM model to predict the future system load and interrupt requirements in real time (interrupt requirements refer to the requirements to complete the interrupts). Based on the prediction result, the affinity and mapping strategy of the interrupts are dynamically adjusted, and interrupts are preferentially distributed to CPUs with lower load. Specifically, the system calls the prediction model at fixed time intervals and performs prediction according to the current system state (feedback) and historical data.
The prediction result is used to update an interrupt schedule, and the schedule is stored in shared memory for real-time access by the scheduler. Priority ranking: the interrupts are divided into multiple priority tiers (e.g., high, medium, low), with the ranking based on the source, type, and historical processing time of each interrupt. Priority scheduling strategy: when designing the scheduling strategy, high-priority interrupts are processed first, ensuring timely response for key tasks. Meanwhile, the processing time of low-priority interrupts is dynamically adjusted to avoid affecting the overall performance of the system. Cache prefetching mechanism: when certain interrupts are predicted to be about to occur, the related data is loaded into the cache in advance, reducing the data loading time during interrupt processing. Cache replacement strategy optimization: the cache replacement strategy is dynamically adjusted according to the frequency of interrupt processing and the data access pattern, preferentially retaining data accessed at high frequency.
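The cache prefetching and frequency-based replacement described above can be sketched together as a small cache that prefetches data for predicted interrupts and, on overflow, evicts the least-frequently accessed entry. The structure and method names are illustrative assumptions:

```python
class PrefetchCache:
    """Cache that (a) prefetches data for interrupts predicted to fire
    soon and (b) on overflow evicts the entry with the lowest access
    frequency, so high-frequency data stays resident."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}   # irq name -> prefetched payload
        self.freq = {}   # irq name -> access count

    def prefetch(self, irq, payload):
        """Load payload ahead of the predicted interrupt occurrence."""
        if irq not in self.data and len(self.data) >= self.capacity:
            victim = min(self.freq, key=self.freq.get)  # least-frequent out
            del self.data[victim], self.freq[victim]
        self.data[irq] = payload
        self.freq.setdefault(irq, 0)

    def access(self, irq):
        """Interrupt fired: count the access and hand back cached data."""
        self.freq[irq] = self.freq.get(irq, 0) + 1
        return self.data.get(irq)
```

In a real kernel-side implementation the "cache" would be CPU cache lines or pinned buffers rather than a dictionary; the retention rule is what the sketch demonstrates.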
S304, adjusting the interrupt scheduling strategy according to the system load prediction data to obtain the adjusted interrupt scheduling strategy.
S305, adjusting the adjusted interrupt scheduling strategy according to the interrupt type and the historical processing time to obtain the interrupt scheduling strategy based on the priority.
S306, distributing the interrupt to the corresponding CPU by using the interrupt scheduling strategy based on the priority.
S307, the prediction model and the interrupt scheduling strategy are continuously updated through the interrupt scheduling execution effect.
The design principle of the adaptive optimization module in this embodiment is that the machine learning model is continuously updated through a feedback mechanism to adapt to changes in the system environment. A Q-learning algorithm is introduced to adjust the policy parameters according to the scheduling effect, realizing adaptive optimization. Specifically, the system records the scheduling effect after each scheduling round, including indicators such as interrupt processing time and CPU utilization rate. Through the Q-learning algorithm, the system updates the policy parameters based on this feedback to optimize scheduling performance. Strategy exploration and optimization: through the exploration mechanism in reinforcement learning, different scheduling strategy combinations are continuously tried and their influence on system performance is recorded. Reward mechanism design: a reasonable reward mechanism is designed, with interrupt processing efficiency, system load balance, and the like used as reward indicators to guide the direction of policy optimization. The design principle of the security and fault tolerance mechanism is that security strategies are designed to prevent malicious interrupt attacks and data leakage, and a fault-tolerant mechanism is introduced to ensure that the system can still run stably in the case of hardware faults or anomalies. The security of system data is protected by setting an access control list (ACL) and a data encryption mechanism. The system periodically performs fault detection and automatically switches to a standby scheme when an anomaly is detected, ensuring system stability.
The following describes an interrupt scheduling device provided by an embodiment of the present invention, and the interrupt scheduling device described below and the interrupt scheduling method described above may be referred to correspondingly.
Fig. 5 is a schematic structural diagram of an interrupt scheduling device according to an embodiment of the present invention, which may include:
The data acquisition module 100 is configured to acquire system load prediction data corresponding to the non-uniform memory access system;
the dynamic adjustment module 200 is configured to dynamically adjust the interrupt scheduling policy based on the system load prediction data, so as to obtain a target interrupt scheduling policy;
The interrupt scheduling module 300 is configured to determine, based on the target interrupt scheduling policy, a target interrupt processing device corresponding to each interrupt, and process each interrupt based on the target interrupt processing device.
Further, based on the above embodiment, the interrupt scheduling apparatus may further include:
the priority determining module is used for carrying out priority grading on the interrupt according to the interrupt type and the historical interrupt processing time to obtain the priority of each interrupt;
The priority-based interrupt scheduling policy adjustment module is used for adjusting the target interrupt scheduling policy based on the priority of each interrupt to obtain an adjusted target interrupt scheduling policy;
Accordingly, the interrupt scheduling module 300 includes:
And the interrupt processing unit is used for determining target interrupt processing equipment corresponding to each interrupt based on the adjusted target interrupt scheduling strategy and processing each interrupt based on the target interrupt processing equipment.
Further, based on any of the above embodiments, the priority determining module may include:
The priority grading unit is used for grading the priority of the interrupt based on the interrupt type when the interrupt types are inconsistent, so as to obtain the priority of each interrupt;
And the priority determining unit is used for carrying out priority grading on the interrupt based on the historical interrupt processing time when the interrupt types are consistent, so as to obtain the priority of each interrupt, wherein the higher the historical interrupt processing time is, the higher the priority grade is.
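The grading rule implemented by these two units (differing types ranked by type; identical types ranked by historical processing time, with longer time meaning a higher priority grade) can be sketched as a sort key. The tuple layout and the type-rank mapping are illustrative assumptions:

```python
def rank_interrupts(interrupts, type_rank):
    """Order interrupts from highest to lowest priority.

    interrupts: list of (name, irq_type, historical_processing_time).
    type_rank:  mapping of irq_type -> numeric rank (higher = more urgent).
    Interrupts of different types compare by type rank; interrupts of the
    same type fall back to historical processing time (longer first)."""
    return sorted(interrupts,
                  key=lambda it: (type_rank[it[1]], it[2]),
                  reverse=True)
```

For example, with timer interrupts ranked above network interrupts, two network interrupts are then ordered by their historical processing time.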
Further, based on any of the above embodiments, the interrupt scheduling apparatus may further include:
The target interrupt and target interrupt time point prediction module is used for predicting target interrupt and target interrupt time points based on historical interrupt data;
And the target cache determining unit is used for loading the data corresponding to the target interrupt into a cache based on the interrupt time point before the target interrupt occurs, so as to obtain a target cache.
Further, based on the above embodiment, the interrupt scheduling apparatus may further include:
The judging module is used for judging whether the interrupt processing frequency corresponding to each data in the target cache is larger than a set processing frequency threshold value or not when the data size in the target cache is determined to be larger than the set data threshold value;
the data clearing module is used for determining to clear the data from the target cache when the interrupt processing frequency is not greater than the set processing frequency threshold;
and the non-processing module is used for determining not to process the data when the interrupt processing frequency is larger than the set processing frequency threshold value.
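The pruning rule implemented by these three modules can be sketched as one hypothetical function: when the total cached data size exceeds the set data threshold, entries whose interrupt processing frequency does not exceed the frequency threshold are cleared, and all other entries are left untouched.

```python
def prune_cache(cache, sizes, freq, data_threshold, freq_threshold):
    """Apply the embodiment's pruning rule to the target cache.

    cache: entry key -> cached data; sizes: entry key -> data size;
    freq:  entry key -> interrupt processing frequency.
    If total size is within the threshold, nothing is cleared; otherwise
    only entries processed more often than freq_threshold survive."""
    if sum(sizes.values()) <= data_threshold:
        return cache  # data threshold not exceeded: no processing
    return {k: v for k, v in cache.items()
            if freq.get(k, 0) > freq_threshold}
```

The function is pure for clarity; an in-place variant deleting keys from the live cache would behave the same way.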
Further, based on any of the above embodiments, the interrupt scheduling apparatus may further include:
the data filtering module is used for filtering the system load prediction data based on the access control list to obtain target acquisition data;
And the data encryption module is used for encrypting the target acquisition data and the interrupt scheduling result by utilizing a data encryption mechanism to obtain target encryption data.
Further, based on any of the above embodiments, the interrupt scheduling apparatus may further include:
The performance data acquisition module is used for acquiring the current system performance state and the historical system performance data corresponding to the non-uniform memory access system;
And the system load prediction data determining module is used for predicting based on the current system performance state and the historical system performance data and determining the system load prediction data.
Further, based on the above embodiment, the above system load prediction data determining module may include:
the parameter updating unit is used for updating the parameters of the prediction model based on the current system performance state to obtain an updated target prediction model;
and the target prediction model prediction unit is used for predicting by utilizing the target prediction model according to the historical system performance data and determining the system load prediction data.
Further, based on the above embodiment, the parameter updating unit may include:
The parameter updating subunit is used for updating parameters of the long-short-period memory network model based on the current system performance state to obtain the updated target prediction model, wherein the current system performance state is system performance after the interrupt scheduling strategy is executed, and the training process of the long-short-period memory network model comprises data segmentation, model initialization, parameter optimization and model verification;
Wherein, the data segmentation refers to data segmentation according to data types;
the model initialization refers to initializing parameters of a long-term and short-term memory network model;
the parameter optimization refers to adjusting parameters of a model to minimize a loss function;
the model verification refers to determining the prediction performance of a trained long-term and short-term memory network model.
Further, based on the above embodiment, the dynamic adjustment module 200 may include:
and the dynamic adjustment unit is used for dynamically adjusting the interrupt scheduling strategy based on the system load prediction data and the current system performance state to obtain the target interrupt scheduling strategy, wherein the dynamically adjusted parameters comprise an interrupt affinity threshold, and the interrupt affinity comprises interrupt priority scheduling to a memory.
Further, the interrupt scheduling apparatus may further include:
the feature extraction module is used for extracting key feature data from the historical system performance data by using a principal component analysis method, where the key feature data includes the number of interrupts, the memory utilization rate, the number of devices, the interrupt frequency, and the central processing unit load;
the scale unification module is used for performing scale unification processing on the key characteristic data to obtain scale unification performance data;
Correspondingly, the system load prediction data determining module may include:
and the system load prediction data determining unit is used for predicting based on the current system performance state and the performance data with unified scale to determine system load prediction data.
Further, based on any of the above embodiments, the interrupt scheduling apparatus may further include:
The system performance feedback data acquisition module is used for acquiring system performance feedback data after the interruption is processed based on the target interruption scheduling strategy, wherein the system performance feedback data comprises at least one of central processing unit utilization rate, memory access delay, input and output operation frequency, network flow, waiting time, resource utilization rate and interruption response time;
The target performance parameter determining module is used for analyzing the system performance feedback data and determining the target performance parameter corresponding to the current system;
the comparison module is used for comparing the target performance parameter with a performance parameter threshold;
A non-processing module, configured to, when it is determined that the target performance parameter is less than the performance parameter threshold, not perform processing;
the prompt information sending module is used for determining to send prompt information when the target performance parameter is determined to be greater than or equal to the performance parameter threshold value, so that the client adjusts the interrupt scheduling strategy based on the prompt information;
the system load feedback data determining module is used for determining system load feedback data corresponding to the system load prediction data in the system performance feedback data;
The difference value determining module is used for comparing the system load feedback data with the system load prediction data to determine a system load prediction difference value;
and the adjusting module is used for adjusting the corresponding system load prediction model based on the system load prediction difference value to obtain an adjusted system load prediction model.
It should be noted that, the order of the modules and units in the interrupt scheduling apparatus may be changed without affecting the logic.
For a description of the features in the embodiment corresponding to fig. 5, reference may be made to the related description of the method embodiments above, which is not repeated here.
The interrupt scheduling device provided by the embodiment of the invention can comprise: a data acquisition module 100 for acquiring system load prediction data corresponding to a non-uniform memory access system; a dynamic adjustment module 200 for dynamically adjusting an interrupt scheduling policy based on the system load prediction data to obtain a target interrupt scheduling policy; and an interrupt scheduling module 300 for determining a target interrupt processing device corresponding to each interrupt based on the target interrupt scheduling policy and processing each interrupt based on the target interrupt processing device. Compared with the traditional method of scheduling interrupts statically, the embodiment dynamically adjusts the interrupt scheduling policy based on the system load prediction data to obtain the target interrupt scheduling policy, so that interrupts can be scheduled based on the dynamically adjusted target interrupt scheduling policy, realizing intelligent and dynamic interrupt scheduling and improving the accuracy of interrupt scheduling.
The following describes an interrupt scheduling device provided in the embodiment of the present invention, and the interrupt scheduling device described below and the interrupt scheduling method described above may be referred to correspondingly.
FIG. 6 is a schematic structural diagram of an interrupt scheduling device according to an embodiment of the present invention. As shown in FIG. 6, the interrupt scheduling device includes a memory 60 for storing a computer program;
a processor 61 for implementing the steps of the interrupt scheduling method according to the above embodiment when executing a computer program.
The interrupt scheduling device provided in this embodiment may include, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like.
Processor 61 may include one or more processing cores, such as a 4-core processor, an 8-core processor, etc. The processor 61 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), or programmable logic array (PLA). The processor 61 may also include a main processor and a coprocessor; the main processor, also referred to as a central processing unit (CPU), is a processor for processing data in the wake-up state, and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 61 may be integrated with a graphics processing unit (GPU) for rendering the content to be displayed by the display screen. In some embodiments, the processor 61 may also include an artificial intelligence (AI) processor for handling computing operations related to machine learning.
Memory 60 may include one or more computer-readable storage media, which may be non-transitory. Memory 60 may also include high-speed random access memory, as well as non-volatile memory such as one or more magnetic disk storage devices or flash memory storage devices. In this embodiment, the memory 60 is at least used for storing a computer program 601, which, when loaded and executed by the processor 61, is capable of implementing the relevant steps of the interrupt scheduling method disclosed in any of the foregoing embodiments. In addition, the resources stored in the memory 60 may further include an operating system 602, data 603, and the like, where the storage manner may be transient or permanent. The operating system 602 may include Windows, Unix, Linux, and the like. The data 603 may include, but is not limited to, data required for interrupt scheduling, and the like.
In some embodiments, the interrupt scheduling device may further include a display 62, an input-output interface 63, a communication interface 64, a power supply 65, and a communication bus 66.
Those skilled in the art will appreciate that the structure shown in fig. 6 does not constitute a limitation of the interrupt scheduling apparatus, and the apparatus may include more or fewer components than illustrated.
It will be appreciated that if the interrupt scheduling method in the above embodiment is implemented in the form of a software functional unit and sold or used as a separate product, it may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied, essentially or in part, in the form of a software product, which is stored in a storage medium and is used for performing all or part of the steps of the method according to the embodiments of the present invention. The storage medium includes any medium capable of storing program codes, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrically erasable programmable ROM, a register, a hard disk, a removable magnetic disk, a CD-ROM, a magnetic disk, or an optical disk.
Based on this, the embodiment of the invention further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the interrupt scheduling method as described above.
Based on this, an embodiment of the present invention also provides a computer program product, including a computer program/instruction, which when executed by a processor implements the steps of the interrupt scheduling method described above.
The above describes in detail an interrupt scheduling device provided by the embodiment of the present invention. In the description, each embodiment is described in a progressive manner, each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and for relevant points reference may be made to the description of the method section.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative elements and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The interrupt scheduling method, apparatus, device, and computer readable storage medium provided by the present invention have been described above in detail. The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present invention and its core ideas. It should be noted that it will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from the principles of the invention, and these modifications and adaptations are intended to fall within the scope of the invention as defined in the following claims.