Disclosure of Invention
The invention addresses the above defects by providing an intelligent large-scale task scheduling method and system based on the business characteristics of computing power hosts, so as to improve the utilization of computing power resources, reduce task response delay, and improve user experience and overall system performance.
The technical scheme adopted for solving the technical problems is as follows:
An intelligent large-scale task scheduling method based on the business characteristics of computing power hosts comprises the following steps:
1) Data acquisition and preprocessing for tasks and computing power host nodes;
2) Multidimensional quantitative analysis of task business characteristics;
3) Dynamic monitoring of the performance state of computing power host nodes;
4) Construction and optimization of a dynamic computing power matching model;
5) Auxiliary task-scheduling optimization based on an intelligent prediction mechanism;
6) Final task allocation and execution.
By introducing a multi-objective optimization algorithm, a dynamic weight distribution strategy and an intelligent prediction mechanism, the utilization rate of computing power resources is improved, the response delay of tasks is reduced, and the user experience and the overall performance of a system are improved.
Further, the data acquisition and preprocessing includes:
1.1 Task data acquisition: the scheduling system receives and stores key task information in real time, including:
the time urgency of the task, which provides the basis for real-time demand quantification;
concurrent execution records of the task, extracted from the database for concurrency demand prediction;
the resource call frequency and algorithmic complexity of the task, used to determine how computation-intensive different tasks are;
Service Level Agreement (SLA) terms, which define task priority according to the agreement; e.g., VIP service tasks carry higher weight;
1.2 Computing power host node data acquisition: the performance data of each computing power host node is collected in real time through a distributed monitoring system, including:
CPU occupancy, which represents the usage of computing resources and reflects node load;
memory utilization, which shows the allocation state of node memory and is used to assess the suitability of task placement;
network delay, which describes inter-node communication time and affects the scheduling of cross-node tasks;
energy consumption, which is monitored during node operation and used to optimize the energy-efficiency ratio;
1.3 Data preprocessing:
So that task characteristics and node performance can be used directly for model calculation, normalization processing is required: all indicators are uniformly mapped onto the [0, 1] interval.
the normalized data avoid the influence of dimension difference on the subsequent model, and consistent input is provided for scheduling optimization.
Further, the service characteristics comprise real-time requirements, concurrency requirements, computational complexity and service level agreements;
Carrying out multidimensional quantitative analysis on task business characteristics to determine the scheduling requirements:
2.1 Real-time demand quantification:
To measure the time urgency of a task and guide priority resource allocation, the real-time demand (RTD) is calculated from the ratio of the current time to the task deadline:
if RTD approaches 1, the task priority is high and the task must be scheduled immediately;
if RTD approaches 0, the task is not urgent and can be scheduled later;
2.2 Concurrent demand prediction:
To perceive possible concurrency pressure in advance and optimize the resource allocation strategy, future concurrency demand is predicted by combining the historical maximum concurrency with the historical average:
CD = α·Hmax + (1 - α)·Emean
where α is a user-adjustable factor that balances the importance of the historical maximum and average concurrency;
high-concurrency tasks are allocated more nodes to meet demand;
2.3 Computational complexity evaluation:
To quantify a task's occupancy of computing power resources, the composite complexity is calculated by analyzing the complexity of the subtasks obtained after task decomposition,
where Ci is the complexity index of the i-th subtask (e.g., computation amount, I/O operations);
Wi is a weight representing the importance of the subtask;
2.4 Service level agreement priority calculation:
weights are directly assigned based on user requirements and terms of service (e.g., high priority tasks require quick response).
Further, the dynamic monitoring of the performance state of the node of the power host includes:
3.1 Real-time monitoring indicators:
Indicators including CPU occupancy and memory utilization are continuously collected through monitoring probes deployed on each computing power host node, forming a node performance database with high-frequency dynamic updates;
3.2 Standardization and weight allocation:
The node performance indicators are normalized according to the formula so that they are comparable;
dynamic weights are assigned to the different indicators (e.g., CPU occupancy may matter more than memory utilization), and the weight values can be adjusted according to the actual load.
Further, the dynamic computing power matching model optimizes task allocation rules based on a multi-objective optimization algorithm and a dynamic weight allocation strategy;
the dynamic computing force matching model construction and optimization specifically comprises the following steps:
4.1 Objective function definition:
The dynamic computing power matching model combines the following two main objectives to optimize resource utilization efficiency and response time:
minimizing resource waste, i.e., ensuring that allocated resources are close to actual demand to avoid idle capacity;
minimizing response time, i.e., bringing task completion time as close as possible to user expectations;
4.2 Dynamic weight strategy:
The weights of the two objectives are dynamically adjusted according to the real-time system load:
Wfinal = λ1·WRW + λ2·WRD
where λ1 and λ2 are weight factors that can be adjusted automatically by threshold conditions;
4.3 Genetic algorithm optimization:
generating an initial population, where each individual represents an allocation scheme of tasks to nodes;
calculating the fitness of each scheme based on the objective function;
crossover and mutation, introducing mutation factors into the population to improve scheme diversity and avoid getting trapped in local optima;
iterative optimization, updating the population cyclically until the objective function converges.
Furthermore, the intelligent prediction mechanism adopts a time sequence model to analyze historical task data and perceives the load change trend in advance;
the specific implementation of auxiliary optimization task scheduling based on the intelligent prediction mechanism comprises the following steps:
5.1 Constructing a load prediction model that predicts task load changes through a time series:
Rfuture = β·Rcurrent + (1 - β)·Rhistorical
where β is a weight factor that takes a weighted average of the current value Rcurrent and the historical value Rhistorical; adjusting β balances the influence of the current trend and of the historical data on the prediction;
When β approaches 1, the predicted value Rfuture depends more on the current value Rcurrent, i.e., the current trend is considered to have a greater impact on the future;
When β approaches 0, the predicted value Rfuture depends more on the history value Rhistorical, i.e., the history data is considered to have a greater impact on the future;
When β is equal to 0.5, the predicted value Rfuture is a simple average of the current and historical values, i.e., the effect of both is equal.
Optimizing a resource allocation strategy in advance by predicting a future high load period;
5.2 Resource warm-up and task migration:
for predicted high-priority task loads, resources are allocated in advance to reduce starting delay;
and dynamically migrating the non-critical tasks to the low-load nodes so as to reduce resource occupation conflict.
Further, the task allocation and execution includes:
prioritizing tasks according to real-time demand (RTD), concurrency demand (CD), and SLA priority;
performing resource allocation according to the output of the dynamic computing power matching model and continuously monitoring the effect;
adaptive adjustment, recalculating task priorities and resource allocation strategies as the system load changes.
The invention also claims an intelligent large-scale task scheduling system based on the business characteristics of the power host, which comprises:
The data acquisition and preprocessing module is used for realizing data acquisition and preprocessing of the task and the computing host node;
The task business characteristic quantitative analysis module is used for carrying out multidimensional quantitative analysis on the task business characteristics;
The power host node performance state monitoring module is used for dynamically monitoring the power host node performance state;
The dynamic computing force matching model construction and optimization module is used for realizing the construction and optimization of the dynamic computing force matching model;
the intelligent prediction mechanism auxiliary optimization module is used for auxiliary optimization task scheduling based on the intelligent prediction mechanism;
the task allocation and execution module is used for realizing final task allocation and execution;
The system can realize the method.
The invention also claims an intelligent large-scale task scheduling realization device based on the business characteristics of the power host, which comprises at least one memory and at least one processor;
the at least one memory for storing a machine readable program;
the at least one processor is configured to invoke the machine-readable program to implement the method described above.
The invention also claims a computer readable medium having stored thereon computer instructions which, when executed by a processor, are capable of carrying out the above-described method.
Compared with the prior art, the intelligent large-scale task scheduling method and system based on the business characteristics of the power host have the following beneficial effects:
1. Improved scheduling accuracy:
Through multidimensional quantitative analysis of the service characteristics (such as real time performance, concurrency requirements, computational complexity and service level agreements) of tasks and the performance states (such as CPU occupancy rate, memory utilization rate, network delay and energy consumption) of nodes, accurate matching of the tasks and resources of a computing host is achieved, and resource waste and task accumulation are effectively reduced.
2. Enhanced system adaptability:
the dynamic calculation force matching model and the multi-objective optimization algorithm are introduced, and the task allocation rule is adjusted by combining the real-time load and the service priority, so that the system can flexibly cope with complex task demands and load changes, and the robustness and the reliability of the system are improved.
3. Reduced response delay:
the intelligent prediction mechanism senses the load change trend in advance by analyzing the historical task data, performs resource preheating and task migration optimization, remarkably shortens the response time of the task and improves the user experience.
4. Optimized resource utilization:
The dynamic weight distribution strategy balances the resource utilization rate and the task execution efficiency in multi-objective optimization, maximizes the use efficiency of the nodes of the power host, and is particularly suitable for large-scale concurrent task scenes.
Detailed Description
The invention will be further illustrated with reference to specific examples.
The embodiment of the invention provides an intelligent large-scale task scheduling method based on the business characteristics of computing power hosts. The method performs multidimensional quantitative analysis of business characteristics, dynamically monitors the performance state of computing power host nodes, constructs a dynamic computing power matching model, and optimizes task scheduling based on an intelligent prediction mechanism. The business characteristics include real-time requirements, concurrency requirements, computational complexity, and service level agreements. The dynamic computing power matching model optimizes task allocation rules based on a multi-objective optimization algorithm and a dynamic weight allocation strategy. The intelligent prediction mechanism adopts a time-series model to analyze historical task data and perceive load change trends in advance. Task scheduling optimization includes resource warm-up and task migration strategies. The method comprises the following steps:
S1, data acquisition for tasks and computing power host nodes;
S2, quantitative analysis of task business characteristics;
S3, monitoring of the performance state of computing power host nodes;
S4, construction and optimization of the dynamic computing power matching model;
S5, auxiliary optimization by the intelligent prediction mechanism;
S6, final task allocation and execution.
The above implementation steps are described in detail below.
And the first step is data acquisition and preprocessing.
1. Task data acquisition.
The scheduling system receives and stores key information of tasks in real time:
the time urgency of the task, which provides the basis for real-time demand quantification;
concurrent execution records of the task, extracted from the database for concurrency demand prediction;
the resource call frequency and algorithmic complexity of the task, used to determine how computation-intensive different tasks are;
Service Level Agreement (SLA) terms, which define task priority according to the agreement; e.g., VIP service tasks carry higher weight.
2. Computing power host node data acquisition.
The performance data of each computing power host node is collected in real time through a distributed monitoring system, including:
CPU occupancy, which represents the usage of computing resources and reflects node load;
memory utilization, which shows the allocation state of node memory and is used to assess the suitability of task placement;
network delay, which describes inter-node communication time and affects the scheduling of cross-node tasks;
energy consumption, which is monitored during node operation and used to optimize the energy-efficiency ratio.
3. Data preprocessing.
So that task characteristics and node performance can be used directly for model calculation, normalization processing is required: all indicators are uniformly mapped onto the [0, 1] interval.
the normalized data avoid the influence of dimension difference on the subsequent model, and consistent input is provided for scheduling optimization.
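Since the normalization formula itself is not reproduced in the text, the preprocessing step can be sketched under the assumption that a standard min-max transform is meant (other [0, 1] mappings are possible):

```python
# Min-max normalization sketch: maps raw metric values onto [0, 1].
# The exact formula is an assumption, as the text only states the target interval.
def min_max_normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:  # constant metric: avoid division by zero
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Example: CPU occupancy samples (percent) from three nodes.
print(min_max_normalize([20.0, 50.0, 80.0]))  # -> [0.0, 0.5, 1.0]
```

Normalizing every indicator this way removes dimensional differences before the matching model consumes them.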
And secondly, quantitatively analyzing task business characteristics.
Carrying out multidimensional quantitative analysis on task business characteristics to determine the scheduling requirements:
1. Real-time demand quantification.
To measure the time urgency of a task and guide priority resource allocation, the real-time demand (RTD) is calculated from the ratio of the current time to the task deadline:
if RTD approaches 1, the task priority is high and the task must be scheduled immediately;
if RTD approaches 0, the task is not urgent and can be scheduled later.
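The RTD formula is not reproduced above; one plausible reading of "the ratio of the current time to the task deadline", used here purely as an illustration, is elapsed time over the total scheduling window:

```python
# Hypothetical RTD sketch: elapsed fraction of the task's scheduling window,
# clamped to [0, 1]. The exact formula in the original filing may differ.
def rtd(t_submit, t_now, t_deadline):
    window = t_deadline - t_submit
    if window <= 0:
        return 1.0  # past-due or zero window: maximally urgent
    return min(max((t_now - t_submit) / window, 0.0), 1.0)

print(rtd(0, 90, 100))  # -> 0.9, near the deadline: schedule immediately
print(rtd(0, 10, 100))  # -> 0.1, not urgent: can be scheduled later
```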
2. Concurrency demand prediction.
To perceive possible concurrency pressure in advance and optimize the resource allocation strategy, future concurrency demand is predicted by combining the historical maximum concurrency with the historical average:
CD = α·Hmax + (1 - α)·Emean
where α is a user-adjustable factor that balances the importance of the historical maximum and average concurrency.
High-concurrency tasks are allocated more nodes to meet demand.
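The CD formula above can be exercised directly; the α value and sample concurrency figures below are illustrative only:

```python
# Concurrency demand per the formula CD = alpha * Hmax + (1 - alpha) * Emean.
def concurrency_demand(h_max, e_mean, alpha=0.7):
    return alpha * h_max + (1 - alpha) * e_mean

# Historical peak of 100 concurrent executions, average of 40 (illustrative).
print(round(concurrency_demand(100, 40, alpha=0.7), 2))  # -> 82.0
```

A larger α biases the prediction toward the historical peak, which is the conservative choice for bursty workloads.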
3. Computational complexity evaluation.
To quantify a task's occupancy of computing power resources, the composite complexity is calculated by analyzing the complexity of the subtasks obtained after task decomposition,
where Ci is the complexity index of the i-th subtask (e.g., computation amount, I/O operations);
Wi is a weight representing the importance of the subtask.
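The composite-complexity formula is not reproduced in the text; consistent with the descriptions of Ci and Wi, a weighted sum C = Σ Wi·Ci is assumed here:

```python
# Composite complexity as a weighted sum of subtask complexity indices.
# The summation form is an assumption; Ci and Wi follow the text's definitions.
def composite_complexity(complexities, weights):
    assert len(complexities) == len(weights)
    return sum(w * c for w, c in zip(weights, complexities))

# Three subtasks with illustrative complexity indices and importance weights.
print(round(composite_complexity([3.0, 1.0, 2.0], [0.5, 0.2, 0.3]), 2))  # -> 2.3
```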
4. Service level agreement priority calculation,
Weights are directly assigned based on user requirements and terms of service (e.g., high priority tasks require quick response).
And thirdly, monitoring the performance state of the node of the power host.
1. Real-time monitoring indicators.
And continuously collecting indexes comprising CPU occupancy rate and memory utilization rate through monitoring probes deployed on all the computing host nodes to form a node performance database with high frequency dynamic update.
2. Standardization and weight allocation.
And normalizing the node performance index according to a formula to enable the node performance index to have comparability.
Dynamic weights are assigned to different metrics (e.g., CPU occupancy may be more important than memory usage), and the weight values may be adjusted based on actual load.
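The standardization and weighting step can be sketched as a composite node score; the metric names and weight values below are illustrative assumptions, and the metrics are assumed already normalized to [0, 1] with higher meaning more loaded:

```python
# Weighted node load score; a lower score marks a more suitable node.
# Weights are hypothetical and would be adjusted at runtime per the text.
def node_load_score(metrics, weights):
    return sum(weights[name] * value for name, value in metrics.items())

weights = {"cpu": 0.4, "mem": 0.3, "net": 0.2, "energy": 0.1}
node_a = {"cpu": 0.8, "mem": 0.5, "net": 0.2, "energy": 0.3}
node_b = {"cpu": 0.3, "mem": 0.4, "net": 0.1, "energy": 0.2}
print(round(node_load_score(node_a, weights), 2))  # -> 0.54
print(round(node_load_score(node_b, weights), 2))  # -> 0.28, less loaded
```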
And fourthly, constructing and optimizing a dynamic calculation force matching model.
1. The objective function definition:
The dynamic computing power matching model aims to optimize resource utilization efficiency and response time by combining the following two main objectives:
minimizing resource waste, i.e., ensuring that allocated resources are close to actual demand to avoid idle capacity;
minimizing response time, i.e., bringing task completion time as close as possible to user expectations.
2. Dynamic weight strategy:
The weights of the two objectives are dynamically adjusted according to the real-time system load:
Wfinal = λ1·WRW + λ2·WRD
where λ1 and λ2 are weight factors that can be adjusted automatically by threshold conditions.
3. Genetic algorithm optimization:
generating an initial population, where each individual represents an allocation scheme of tasks to nodes;
calculating the fitness of each scheme based on the objective function;
crossover and mutation, introducing mutation factors into the population to improve scheme diversity and avoid getting trapped in local optima;
iterative optimization, updating the population cyclically until the objective function converges.
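The genetic-algorithm loop above can be sketched as follows. The real objective combines resource waste and response time; here a hypothetical fitness (negative squared load imbalance across nodes) stands in for it, so every constant and name below is illustrative:

```python
import random

def fitness(assignment, task_costs, n_nodes):
    """Higher is better: penalize uneven load across nodes (stand-in objective)."""
    loads = [0.0] * n_nodes
    for task, node in enumerate(assignment):
        loads[node] += task_costs[task]
    mean = sum(loads) / n_nodes
    return -sum((l - mean) ** 2 for l in loads)

def evolve(task_costs, n_nodes, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    n_tasks = len(task_costs)
    # Initial population: each individual is one task-to-node assignment.
    pop = [[rng.randrange(n_nodes) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, task_costs, n_nodes), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_tasks)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if rng.random() < 0.2:               # mutation preserves diversity
                child[rng.randrange(n_tasks)] = rng.randrange(n_nodes)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda a: fitness(a, task_costs, n_nodes))

best = evolve([5, 3, 8, 2, 6, 4], n_nodes=3)
print(best)  # a near-balanced assignment of six tasks to three nodes
```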
And fifthly, the intelligent prediction mechanism assists in optimizing.
1. Load prediction model construction.
Task load changes are predicted through a time series:
Rfuture = β·Rcurrent + (1 - β)·Rhistorical
where β is a weight factor that takes a weighted average of the current value Rcurrent and the historical value Rhistorical; adjusting β balances the influence of the current trend and of the historical data on future predictions, specifically as follows:
When β approaches 1, the predicted value Rfuture depends more on the current value Rcurrent, i.e., the current trend is considered to have a greater impact on the future;
When β approaches 0, the predicted value Rfuture depends more on the history value Rhistorical, i.e., the history data is considered to have a greater impact on the future;
When β is equal to 0.5, the predicted value Rfuture is a simple average of the current and historical values, i.e., the effect of both is equal.
By predicting the future high load period, the resource allocation strategy is optimized in advance.
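The prediction formula above can be exercised directly; the load figures are illustrative:

```python
# Load prediction per R_future = beta * R_current + (1 - beta) * R_historical.
def predict_load(r_current, r_historical, beta=0.6):
    return beta * r_current + (1 - beta) * r_historical

# Current load 80, historical load 50 (illustrative units).
print(predict_load(80.0, 50.0, beta=1.0))  # -> 80.0, trusts only the current trend
print(predict_load(80.0, 50.0, beta=0.0))  # -> 50.0, trusts only history
print(predict_load(80.0, 50.0, beta=0.5))  # -> 65.0, simple average of the two
```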
2. Resource warm-up and task migration.
For predicted high-priority task loads, resources are allocated in advance to reduce starting delay;
and dynamically migrating the non-critical tasks to the low-load nodes so as to reduce resource occupation conflict.
And sixthly, task allocation and execution.
Prioritizing tasks according to real-time Requirements (RTD), concurrency requirements (CD), and SLA priorities;
performing resource allocation according to the output of the dynamic computing power matching model and continuously monitoring the effect;
adaptive adjustment, recalculating task priorities and resource allocation strategies as the system load changes.
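The text names RTD, CD, and SLA priority as the prioritization inputs but does not give the combination rule; a weighted sum with illustrative weights is assumed in this sketch:

```python
# Hypothetical priority score: weighted sum of the three quantified inputs.
# Weight values and task data are illustrative, not from the original filing.
def priority(task, w_rtd=0.5, w_cd=0.2, w_sla=0.3):
    return w_rtd * task["rtd"] + w_cd * task["cd"] + w_sla * task["sla"]

tasks = [
    {"name": "batch-report", "rtd": 0.2, "cd": 0.3, "sla": 0.1},
    {"name": "vip-query",    "rtd": 0.9, "cd": 0.6, "sla": 1.0},
]
queue = sorted(tasks, key=priority, reverse=True)  # highest priority first
print([t["name"] for t in queue])  # -> ['vip-query', 'batch-report']
```

Recomputing these scores as load changes gives the adaptive adjustment described above.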
The method solves the problems of resource waste, task accumulation, and response delay that arise when existing computing power hosts process large-scale concurrent tasks, where scheduling strategies lack accurate identification of and adaptation to different business characteristics. Business characteristics are analyzed through multidimensional quantification, a dynamic computing power matching model is constructed by combining the performance states of computing power host nodes, and task allocation rules are intelligently adjusted with a multi-objective optimization algorithm and a dynamic weight allocation strategy. Meanwhile, an intelligent prediction mechanism is introduced to perceive load change trends in advance and realize resource warm-up and task migration optimization. The method improves the accuracy and adaptability of task scheduling and offers the technical advantages of efficiency, flexibility, and reliability.
The embodiment of the invention also provides an intelligent large-scale task scheduling system based on the business characteristics of the power host, which comprises the following steps:
The data acquisition and preprocessing module is used for realizing data acquisition and preprocessing of the task and the computing host node;
The task business characteristic quantitative analysis module is used for carrying out multidimensional quantitative analysis on the task business characteristics;
The power host node performance state monitoring module is used for dynamically monitoring the power host node performance state;
The dynamic computing force matching model construction and optimization module is used for realizing the construction and optimization of the dynamic computing force matching model;
the intelligent prediction mechanism auxiliary optimization module is used for auxiliary optimization task scheduling based on the intelligent prediction mechanism;
the task allocation and execution module is used for realizing final task allocation and execution;
the system can realize the intelligent large-scale task scheduling method based on the business characteristics of the power host machine, which is described in the embodiment. The specific implementation is as follows:
1. the data acquisition and preprocessing module comprises:
1. Task data acquisition.
The scheduling system receives and stores key information of tasks in real time:
the time urgency of the task, which provides the basis for real-time demand quantification;
concurrent execution records of the task, extracted from the database for concurrency demand prediction;
the resource call frequency and algorithmic complexity of the task, used to determine how computation-intensive different tasks are;
Service Level Agreement (SLA) terms, which define task priority according to the agreement; e.g., VIP service tasks carry higher weight.
2. Computing power host node data acquisition.
The performance data of each computing power host node is collected in real time through a distributed monitoring system, including:
CPU occupancy, which represents the usage of computing resources and reflects node load;
memory utilization, which shows the allocation state of node memory and is used to assess the suitability of task placement;
network delay, which describes inter-node communication time and affects the scheduling of cross-node tasks;
energy consumption, which is monitored during node operation and used to optimize the energy-efficiency ratio.
3. Data preprocessing.
So that task characteristics and node performance can be used directly for model calculation, normalization processing is required: all indicators are uniformly mapped onto the [0, 1] interval.
the normalized data avoid the influence of dimension difference on the subsequent model, and consistent input is provided for scheduling optimization.
2. The task business characteristic quantitative analysis module is used for analyzing the task business characteristics,
Carrying out multidimensional quantitative analysis on task business characteristics to determine the scheduling requirements:
1. Real-time demand quantification.
To measure the time urgency of a task and guide priority resource allocation, the real-time demand (RTD) is calculated from the ratio of the current time to the task deadline:
if RTD approaches 1, the task priority is high and the task must be scheduled immediately;
if RTD approaches 0, the task is not urgent and can be scheduled later.
2. Concurrency demand prediction.
To perceive possible concurrency pressure in advance and optimize the resource allocation strategy, future concurrency demand is predicted by combining the historical maximum concurrency with the historical average:
CD = α·Hmax + (1 - α)·Emean
where α is a user-adjustable factor that balances the importance of the historical maximum and average concurrency.
High-concurrency tasks are allocated more nodes to meet demand.
3. Computational complexity evaluation.
To quantify a task's occupancy of computing power resources, the composite complexity is calculated by analyzing the complexity of the subtasks obtained after task decomposition,
where Ci is the complexity index of the i-th subtask (e.g., computation amount, I/O operations);
Wi is a weight representing the importance of the subtask.
4. Service level agreement priority calculation,
Weights are directly assigned based on user requirements and terms of service (e.g., high priority tasks require quick response).
3. The power host node performance state monitoring module comprises:
1. Real-time monitoring indicators.
And continuously collecting indexes comprising CPU occupancy rate and memory utilization rate through monitoring probes deployed on all the computing host nodes to form a node performance database with high frequency dynamic update.
2. Standardization and weight allocation.
And normalizing the node performance index according to a formula to enable the node performance index to have comparability.
Dynamic weights are assigned to different metrics (e.g., CPU occupancy may be more important than memory usage), and the weight values may be adjusted based on actual load.
4. The dynamic computing force matching model construction and optimization module comprises:
1. The objective function definition:
The dynamic computing power matching model aims to optimize resource utilization efficiency and response time by combining the following two main objectives:
minimizing resource waste, i.e., ensuring that allocated resources are close to actual demand to avoid idle capacity;
minimizing response time, i.e., bringing task completion time as close as possible to user expectations.
2. Dynamic weight strategy:
The weights of the two objectives are dynamically adjusted according to the real-time system load:
Wfinal = λ1·WRW + λ2·WRD
where λ1 and λ2 are weight factors that can be adjusted automatically by threshold conditions.
3. Genetic algorithm optimization:
generating an initial population, where each individual represents an allocation scheme of tasks to nodes;
calculating the fitness of each scheme based on the objective function;
crossover and mutation, introducing mutation factors into the population to improve scheme diversity and avoid getting trapped in local optima;
iterative optimization, updating the population cyclically until the objective function converges.
5. The intelligent prediction mechanism assists the optimization module, including:
1. Load prediction model construction.
Task load changes are predicted through a time series:
Rfuture = β·Rcurrent + (1 - β)·Rhistorical
where β is a weight factor that takes a weighted average of the current value Rcurrent and the historical value Rhistorical; adjusting β balances the influence of the current trend and of the historical data on future predictions, specifically as follows:
When β approaches 1, the predicted value Rfuture depends more on the current value Rcurrent, i.e., the current trend is considered to have a greater impact on the future;
When β approaches 0, the predicted value Rfuture depends more on the history value Rhistorical, i.e., the history data is considered to have a greater impact on the future;
When β is equal to 0.5, the predicted value Rfuture is a simple average of the current and historical values, i.e., the effect of both is equal.
By predicting the future high load period, the resource allocation strategy is optimized in advance.
2. Resource warm-up and task migration.
For predicted high-priority task loads, resources are allocated in advance to reduce starting delay;
and dynamically migrating the non-critical tasks to the low-load nodes so as to reduce resource occupation conflict.
6. A task allocation and execution module comprising:
Prioritizing tasks according to real-time Requirements (RTD), concurrency requirements (CD), and SLA priorities;
performing resource allocation according to the output of the dynamic computing power matching model and continuously monitoring the effect;
adaptive adjustment, recalculating task priorities and resource allocation strategies as the system load changes.
The embodiment of the invention also provides an intelligent large-scale task scheduling realization device based on the business characteristics of the computing host, which comprises at least one memory and at least one processor;
the at least one memory for storing a machine readable program;
The at least one processor is configured to invoke the machine-readable program to implement the intelligent large-scale task scheduling method based on the business characteristics of the computing host described in the foregoing embodiments.
The embodiment of the invention also provides a computer-readable medium storing computer instructions that, when executed by a processor, implement the intelligent large-scale task scheduling method based on the business characteristics of computing power hosts described in the embodiments above. Specifically, a system or apparatus may be provided with a storage medium on which software program code realizing the functions of any of the above embodiments is stored, and a computer (or CPU or MPU) of the system or apparatus may be caused to read out and execute the program code stored in the storage medium.
In this case, the program code itself read from the storage medium may realize the functions of any of the above-described embodiments, and thus the program code and the storage medium storing the program code form part of the present invention.
Examples of storage media for providing program code include floppy disks, hard disks, magneto-optical disks, optical disks (e.g., CD-ROMs, CD-R, CD-RWs, DVD-ROMs, DVD-RAMs, DVD-RWs, DVD+RWs), magnetic tapes, nonvolatile memory cards, and ROMs. Alternatively, the program code may be downloaded from a server computer by a communication network.
Further, it should be apparent that the functions of any of the above-described embodiments may be implemented not only by executing the program code read out by the computer, but also by causing an operating system or the like operating on the computer to perform part or all of the actual operations based on the instructions of the program code.
Further, it is understood that the program code read out from the storage medium may be written into a memory provided in an expansion board inserted into the computer, or into a memory provided in an expansion unit connected to the computer, and a CPU or the like mounted on the expansion board or expansion unit may then be caused to perform part or all of the actual operations based on the instructions of the program code, thereby realizing the functions of any of the above embodiments.
While the invention has been illustrated and described in detail in the drawings and in the preferred embodiments, the invention is not limited to the disclosed embodiments, and it will be appreciated by those skilled in the art that features of the various embodiments described above may be combined to produce further embodiments of the invention, which are also within the scope of the invention.