Disclosure of Invention
The technical problem to be solved by the embodiments of the present application is to provide a computing power network node allocation method and apparatus, an electronic device, and a storage medium, so as to select the most suitable computing power network node for a task to be allocated.
In a first aspect, an embodiment of the present application provides a computing power network node allocation method, where the method includes:
acquiring device parameters of a device corresponding to a task to be allocated and node parameters of computing power network nodes in an area where the task to be allocated is located;
determining a task delay corresponding to the task to be allocated based on the device parameters and the node parameters;
processing task data of the task to be allocated based on a pre-constructed computing power scheduling model to obtain a computing power demand value of the task to be allocated;
determining a target computing power network node among the computing power network nodes according to the task delay and the computing power demand value; and
allocating the task to be allocated to the target computing power network node according to a task weight corresponding to the task to be allocated.
Optionally, the determining, based on the device parameter and the node parameter, a task delay corresponding to the task to be allocated includes:
calculating a first transmission delay between the device and the computing power network node according to a device transmission power, a channel gain parameter between the device and the computing power network node, and a data transmission bandwidth of a communication link between the device and the computing power network node;
calculating a first computation delay of the task to be allocated according to computing power resources allocated to the device by the computing power network node and a proportion of the task to be allocated that is assigned to the computing power network node;
calculating a second transmission delay between the computing power network node and a cloud data center according to a network bandwidth provided by the computing power network node for the task to be allocated;
calculating a second computation delay of the cloud data center according to computing resources allocated by the cloud data center to the task to be allocated; and
determining the task delay corresponding to the task to be allocated based on the first transmission delay, the first computation delay, the second transmission delay, and the second computation delay.
Optionally, the determining, based on the first transmission delay, the first computation delay, the second transmission delay, and the second computation delay, the task delay corresponding to the task to be allocated includes:
calculating a delay sum of the first transmission delay, the first computation delay, the second transmission delay, and the second computation delay, and taking the delay sum as the task delay corresponding to the task to be allocated; or
performing a weighted average of the first transmission delay, the first computation delay, the second transmission delay, and the second computation delay to obtain an average delay, and taking the average delay as the task delay corresponding to the task to be allocated.
Optionally, the processing the task data of the task to be allocated based on the pre-constructed computing power scheduling model to obtain the computing power demand value of the task to be allocated includes:
acquiring a task data volume corresponding to the task to be allocated; and
processing the task data volume, computing chip resources, and a chip computing capability value based on the computing power scheduling model to obtain the computing power demand value of the task to be allocated.
Optionally, the determining a target computing power network node among the computing power network nodes according to the task delay and the computing power demand value includes:
acquiring an idle computing power of each computing power network node;
screening, from the computing power network nodes according to the idle computing power, candidate computing power network nodes whose idle computing power is greater than the computing power demand value; and
screening, from the candidate computing power network nodes according to the task delay and a service demand index, the target computing power network node corresponding to the task to be allocated.
Optionally, the screening the target computing power network node corresponding to the task to be allocated from the candidate computing power network nodes according to the task delay and the service demand index includes:
selecting, in a case where the service demand index indicates that the task is to be processed with optimal resources, the computing power network node with the lowest task delay from the candidate computing power network nodes as the target computing power network node; or
selecting, in a case where the service demand index indicates that the task is to be processed at the lowest cost, the computing power network node with the highest task delay from the candidate computing power network nodes as the target computing power network node.
Optionally, the allocating the task to be allocated to the target computing power network node according to the task weight corresponding to the task to be allocated includes:
storing the task to be allocated and the target computing power network node corresponding to the task to be allocated in a task queue in the form of a key-value pair; and
invoking Flink stream processing to transfer the tasks to be allocated in the task queue to their corresponding target computing power network nodes in descending order of task weight.
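The queue-and-dispatch step above can be sketched as follows. This is an illustrative Python stand-in: the actual scheme stores key-value pairs and hands entries to a Flink streaming job, which are mocked here by a plain dictionary and a returned dispatch order.

```python
def dispatch_by_weight(task_queue):
    """Drain a key-value task queue in descending task-weight order.

    task_queue maps task id -> {"node": target node id, "weight": task weight}.
    Returns (task, node) pairs in the order they would be handed to the
    streaming job, so higher-weight (higher-priority) tasks go first.
    """
    ordered = sorted(task_queue.items(),
                     key=lambda kv: kv[1]["weight"], reverse=True)
    return [(task_id, entry["node"]) for task_id, entry in ordered]
```

Replacing the returned list with a call into the streaming engine preserves the ordering guarantee the text describes: high-priority tasks reach their target nodes first.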
In a second aspect, an embodiment of the present application provides a computing power network node allocation apparatus, the apparatus including:
a parameter acquisition module, configured to acquire device parameters of a device corresponding to a task to be allocated and node parameters of computing power network nodes in an area where the task to be allocated is located;
a delay determining module, configured to determine a task delay corresponding to the task to be allocated based on the device parameters and the node parameters;
a computing power demand acquisition module, configured to process task data of the task to be allocated based on a pre-constructed computing power scheduling model to obtain a computing power demand value of the task to be allocated;
a target node determining module, configured to determine a target computing power network node among the computing power network nodes according to the task delay and the computing power demand value; and
a task allocation module, configured to allocate the task to be allocated to the target computing power network node according to a task weight corresponding to the task to be allocated.
Optionally, the delay determining module includes:
a first transmission delay calculation unit, configured to calculate a first transmission delay between the device and the computing power network node according to a device transmission power, a channel gain parameter between the device and the computing power network node, and a data transmission bandwidth of a communication link between the device and the computing power network node;
a first computation delay calculation unit, configured to calculate a first computation delay of the task to be allocated according to computing power resources allocated to the device by the computing power network node and a proportion of the task to be allocated that is assigned to the computing power network node;
a second transmission delay calculation unit, configured to calculate a second transmission delay between the computing power network node and a cloud data center according to a network bandwidth provided by the computing power network node for the task to be allocated;
a second computation delay calculation unit, configured to calculate a second computation delay of the cloud data center according to computing resources allocated by the cloud data center to the task to be allocated; and
a task delay determining unit, configured to determine the task delay corresponding to the task to be allocated based on the first transmission delay, the first computation delay, the second transmission delay, and the second computation delay.
Optionally, the task delay determining unit includes:
a first task delay obtaining subunit, configured to calculate a delay sum of the first transmission delay, the first computation delay, the second transmission delay, and the second computation delay, and take the delay sum as the task delay corresponding to the task to be allocated; and
a second task delay obtaining subunit, configured to perform a weighted average of the first transmission delay, the first computation delay, the second transmission delay, and the second computation delay to obtain an average delay, and take the average delay as the task delay corresponding to the task to be allocated.
Optionally, the computing power demand acquisition module includes:
a task data volume acquisition unit, configured to acquire a task data volume corresponding to the task to be allocated; and
a computing power demand value acquisition unit, configured to process the task data volume, computing chip resources, and a chip computing capability value based on the computing power scheduling model to obtain the computing power demand value of the task to be allocated.
Optionally, the target node determining module includes:
an idle computing power obtaining unit, configured to obtain an idle computing power of each computing power network node;
a candidate node screening unit, configured to screen, from the computing power network nodes according to the idle computing power, candidate computing power network nodes whose idle computing power is greater than the computing power demand value; and
a target node screening unit, configured to screen, from the candidate computing power network nodes according to the task delay and a service demand index, the target computing power network node corresponding to the task to be allocated.
Optionally, the target node screening unit includes:
a first target node obtaining subunit, configured to screen, in a case where the service demand index indicates that the task is to be processed with optimal resources, the computing power network node with the lowest task delay from the candidate computing power network nodes as the target computing power network node; and
a second target node obtaining subunit, configured to screen, in a case where the service demand index indicates that the task is to be processed at the lowest cost, the computing power network node with the highest task delay from the candidate computing power network nodes as the target computing power network node.
Optionally, the task allocation module includes:
a task storage unit, configured to store the task to be allocated and the target computing power network node corresponding to the task to be allocated in a task queue in the form of a key-value pair; and
a task allocation unit, configured to invoke Flink stream processing to transfer the tasks to be allocated in the task queue to their corresponding target computing power network nodes in descending order of task weight.
In a third aspect, an embodiment of the present application provides an electronic device, including:
a processor, a memory, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements any one of the computing power network node allocation methods described above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a program that, when executed by a processor of an electronic device, causes the electronic device to perform any one of the computing power network node allocation methods described above.
Compared with the prior art, the embodiment of the application has the following advantages:
In the embodiments of the present application, device parameters of the device corresponding to a task to be allocated and node parameters of the computing power network nodes in the area where the task to be allocated is located are obtained. A task delay corresponding to the task to be allocated is determined based on the device parameters and the node parameters. Task data of the task to be allocated is processed based on a pre-constructed computing power scheduling model to obtain a computing power demand value of the task to be allocated. A target computing power network node is determined among the computing power network nodes according to the task delay and the computing power demand value. The task to be allocated is then allocated to the target computing power network node according to the task weight corresponding to the task. By combining the task delay of the task to be allocated with its computing power demand value, the embodiments of the present application can select the optimal computing power network node for the task to be allocated; meanwhile, allocating tasks to their target computing power network nodes according to task weight improves the processing efficiency of high-priority tasks and thus the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will become more readily apparent, a more particular description of the application is rendered below by reference to the appended drawings and the following detailed description.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Referring to fig. 1, a step flow chart of a method for allocating a computing power network node according to an embodiment of the present application is shown, and as shown in fig. 1, the method for allocating a computing power network node may include a step 101, a step 102, a step 103, a step 104, and a step 105.
Step 101, acquiring device parameters of the device corresponding to a task to be allocated and node parameters of the computing power network nodes in the area where the task to be allocated is located.
The embodiments of the present application can be applied to the scenario of allocating an optimal computing power network node to a task to be allocated.
In a specific implementation, a real-time data warehouse based on the Iceberg data lake technology may be deployed at a central server. Formally, Iceberg is a table format, i.e., an intermediate layer between the compute layer (Flink, Spark, Hive, Presto) and the storage layer (ORC, Parquet, Avro), which provides a basic environment for performance-adaptation optimization of big data jobs. The architecture of the deployed real-time data warehouse may be as shown in fig. 8. As shown in fig. 8, the compute layer includes Spark, Flink, Hive, and Presto, and the storage layer may include ORC, Parquet, and Avro. The Table Format layer stores the key-value mapping relationship through Redis to indicate the correspondence between tasks and computing power network nodes.
In actual practice, distributed computing task nodes (i.e., computing power network nodes) may be created. The total delay of a computing task is calculated through a model algorithm, including the transmission delay from the user to the edge computing node, the processing delay on the edge node, the transmission delay between the edge node and the cloud data center computing node, and the computation delay of the cloud computing node, i.e., the total delay of the computing task reaching the edge computing power node in the system, as well as an optimized average delay. The average delay from the network tracking point to the table format is obtained, and a computing power network node is selected to complete the computing task according to the tracking-point weight.
The device parameters refer to parameters of the device that issues the task to be allocated, and in this example, the device parameters may include parameters such as transmission power of the device.
The node parameters refer to parameters of the computing power network nodes in the area where the task to be allocated is located, and in this example, the node parameters may include parameters such as computing power resources allocated for the task, channel gains, and the like.
After the task to be allocated is acquired, the device parameters of the device corresponding to the task to be allocated and the node parameters of the computing power network nodes in the area where the task to be allocated is located can be acquired. Specifically, when the task is allocated to a computing power network node, a task originating in province A can only be allocated to a node in province A, and a task originating in province B can only be allocated to a node in province B; computing power network nodes cannot be selected across provinces.
It will be appreciated that the above examples are only examples listed for better understanding of the technical solution of the embodiments of the present application, and are not to be construed as the only limitation of the present embodiments.
After the device parameters of the device corresponding to the task to be allocated and the node parameters of the computing power network nodes in the area where the task to be allocated is located are acquired, step 102 is executed.
Step 102, determining the task delay corresponding to the task to be allocated based on the device parameters and the node parameters.
After the device parameters of the device corresponding to the task to be allocated and the node parameters of the computing power network nodes are acquired, the task delay corresponding to the task to be allocated can be determined according to these parameters. In this example, the task delay may be calculated from the transmission delay from the user (i.e., the User Equipment (UE)) to the edge computing node, the processing delay of the edge node, the transmission delay between the edge node and the cloud data center computing node, and the computation delay of the cloud data center computing node. The acquisition process of the task delay is described in detail below in conjunction with fig. 2.
Referring to fig. 2, a step flowchart of a task delay determining method provided by an embodiment of the present application is shown, and as shown in fig. 2, the task delay determining method may include a step 201, a step 202, a step 203, a step 204, and a step 205.
Step 201, calculating a first transmission delay between the device and the computing power network node according to the device transmission power, the channel gain parameter between the device and the computing power network node, and the data transmission bandwidth of the communication link between the device and the computing power network node.
In this embodiment, the first transmission delay refers to the delay of transmitting the task between the device and the computing power network node.
The acquired device parameters include the device transmission power, and the node parameters may include the channel gain parameter between the device and the computing power network node and the data transmission bandwidth of the communication link between them.
After the device parameters and the node parameters are acquired, the first transmission delay between the device and the computing power network node can be calculated according to the device transmission power contained in the device parameters, together with the channel gain parameter and the data transmission bandwidth of the communication link contained in the node parameters. The first transmission delay may be calculated as shown in the following formula (1):
$$T^{t1}_{k,i} = \frac{\lambda_i D_i}{B \log_2\!\left(1 + \dfrac{p_i h_{k,i}}{\sigma^2}\right)} \qquad (1)$$
In the above formula (1), $T^{t1}_{k,i}$ is the first transmission delay, $D_i$ is the input data amount of the task, $\lambda_i$ is the proportion of the task allocated to the edge node, $p_i$ is the transmission power of the i-th device, $h_{k,i}$ is the channel gain from the i-th user equipment to the k-th edge node (an independent and identically distributed random variable), $\sigma^2$ is the additive white Gaussian noise power, and $B$ is the data transmission bandwidth of the wireless communication link.
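A minimal numerical sketch of this transmission-delay calculation, where the achievable rate follows the Shannon capacity $B\log_2(1 + p h/\sigma^2)$. The task input size `data_bits` and the edge offloading ratio `lam` are assumptions: formula (1) as printed only lists the power, gain, noise, and bandwidth symbols.

```python
import math

def first_transmission_delay(data_bits, lam, p_i, h_ki, sigma2, bandwidth):
    """Delay of sending the edge-bound share of the task over the wireless link.

    data_bits: task input size in bits (assumed symbol D_i)
    lam: proportion of the task offloaded to the edge node (lambda_i)
    p_i, h_ki, sigma2, bandwidth: as defined alongside formula (1)
    """
    # Shannon rate of the device-to-edge wireless link, in bits/s
    rate = bandwidth * math.log2(1.0 + p_i * h_ki / sigma2)
    return lam * data_bits / rate
```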
Step 202, calculating a first computation delay of the task to be allocated according to the computing power resources allocated to the device by the computing power network node and the proportion of the task to be allocated that is assigned to the computing power network node.
The first computation delay refers to the delay of the computing power network node computing the task to be allocated.
The acquired node parameters include the computing power resources allocated to the device by the computing power network node and the proportion of the task to be allocated that is assigned to the computing power network node.
After the node parameters are acquired, the first computation delay of the task to be allocated can be calculated according to the computing power resources allocated to the device by the computing power network node and the proportion of the task assigned to that node, both contained in the node parameters. The first computation delay may be calculated as shown in the following formula (2):
$$T^{c1}_{k,i} = \frac{\lambda_i C_i}{f_{k,i}} \qquad (2)$$
In the above formula (2), $T^{c1}_{k,i}$ is the first computation delay, $\lambda_i \in [0,1]$ is the proportion of the computing task of the i-th user equipment allocated to its corresponding edge computing node (i.e., the computing power network node), the remaining proportion $1-\lambda_i$ is offloaded to the cloud data center, $C_i$ is the computation amount of the task, and $f_{k,i}$ represents the computing power resources allocated to the i-th user equipment by the k-th edge computing node.
Step 203, calculating a second transmission delay between the computing power network node and the cloud data center according to the network bandwidth provided by the computing power network node for the task to be allocated.
The second transmission delay refers to the delay of transmitting the task to be allocated between the computing power network node and the cloud data center.
The acquired node parameters include the network bandwidth provided by the computing power network node for the task to be allocated. After the node parameters are acquired, the second transmission delay between the computing power network node and the cloud data center can be calculated according to this network bandwidth. The second transmission delay may be calculated as shown in the following formula (3):
$$T^{t2}_{k,i} = \frac{(1-\lambda_i) D_i}{W_{k,i}} \qquad (3)$$
In the above formula (3), $T^{t2}_{k,i}$ is the second transmission delay, $W_{k,i}$ is the network bandwidth provided by the computing power network node for the task to be allocated, $D_i$ is the input data amount of the task, and $\lambda_i$ is the proportion of the computing task of the i-th user equipment allocated to its corresponding edge computing node (i.e., the computing power network node).
Step 204, calculating a second computation delay of the cloud data center according to the computing resources allocated by the cloud data center to the task to be allocated.
The second computation delay refers to the delay of the cloud data center computing the task to be allocated with the computing resources it has allocated.
The acquired node parameters include the computing resources allocated by the cloud data center for the task to be allocated. After the node parameters are acquired, the second computation delay of the cloud data center can be calculated according to these computing resources. The second computation delay may be calculated as shown in the following formula (4):
$$T^{c2}_{i} = \frac{(1-\lambda_i) C_i}{f^{c}_{i}} \qquad (4)$$
In the above formula (4), $T^{c2}_{i}$ is the second computation delay, $f^{c}_{i}$ is the computing resource allocated by the cloud data center for the task to be allocated, $C_i$ is the computation amount of the task, and $\lambda_i$ is the proportion of the computing task of the i-th user equipment allocated to its corresponding edge computing node (i.e., the computing power network node).
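Formulas (2) through (4) can be checked with a small sketch. The task workload `cycles` (a computation amount, symbol $C_i$, e.g. in CPU cycles) and `data_bits` (input size $D_i$) are assumed units that the passage relies on but does not state explicitly.

```python
def edge_computation_delay(cycles, lam, f_edge):
    # formula (2): the edge-bound share of the workload divided by the
    # computing power resources the edge node allocates to the device
    return lam * cycles / f_edge

def cloud_transmission_delay(data_bits, lam, w_ki):
    # formula (3): the (1 - lam) share forwarded to the cloud over the
    # backhaul bandwidth provided by the edge node
    return (1.0 - lam) * data_bits / w_ki

def cloud_computation_delay(cycles, lam, f_cloud):
    # formula (4): the cloud-bound share of the workload divided by the
    # computing resources allocated by the cloud data center
    return (1.0 - lam) * cycles / f_cloud
```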
After the first computation delay, the second computation delay, the first transmission delay, and the second transmission delay are obtained through the above steps, step 205 is performed.
Step 205, determining a task delay corresponding to the task to be allocated based on the first transmission delay, the first calculation delay, the second transmission delay and the second calculation delay.
After the first computation time delay, the second computation time delay, the first transmission time delay and the second transmission time delay are obtained, the task time delay corresponding to the task to be allocated can be determined based on the first computation time delay, the second computation time delay, the first transmission time delay and the second transmission time delay.
In a specific implementation, the sum of the first computation delay, the second computation delay, the first transmission delay, and the second transmission delay can be computed and used as the task delay corresponding to the task to be allocated. Alternatively, the four delays can be weighted and averaged to obtain an average delay, which is then used as the task delay corresponding to the task to be allocated. This implementation is described in detail below in conjunction with fig. 3.
Referring to fig. 3, a step flowchart of a task delay acquiring method according to an embodiment of the present application is shown, and as shown in fig. 3, the task delay acquiring method may include a step 301 and a step 302.
Step 301, calculating a delay sum of the first transmission delay, the first computation delay, the second transmission delay, and the second computation delay, and taking the delay sum as the task delay corresponding to the task to be allocated.
In this embodiment, after the first computation delay, the second computation delay, the first transmission delay and the second transmission delay are obtained, a delay sum of the first computation delay, the second computation delay, the first transmission delay and the second transmission delay may be computed, and the delay sum is used as a task delay corresponding to a task to be allocated. The calculation formula is shown in the following formula (5):
$$T_i = T^{t1}_{k,i} + T^{c1}_{k,i} + T^{t2}_{k,i} + T^{c2}_{i} \qquad (5)$$
In the above formula (5), $T_i$ is the delay sum, $T^{t1}_{k,i}$ is the first transmission delay, $T^{c1}_{k,i}$ is the first computation delay, $T^{t2}_{k,i}$ is the second transmission delay, and $T^{c2}_{i}$ is the second computation delay.
Step 302, performing weighted average on the first transmission delay, the first calculation delay, the second transmission delay and the second calculation delay to obtain an average delay, and taking the average delay as a task delay corresponding to the task to be allocated.
In a specific implementation, delay is one of the important characteristics for measuring system performance, and the delay characteristic of the system can be measured through the sum of the task queue lengths across the cloud, network, edge, and terminal. Considering the dynamic queue characteristics of the edge nodes and the cloud data center nodes, the average delay can be used as the task delay of the task to be allocated.
After the first computation time delay, the second computation time delay, the first transmission time delay and the second transmission time delay are obtained, the first computation time delay, the second computation time delay, the first transmission time delay and the second transmission time delay can be weighted and averaged to obtain an average time delay, and the average time delay is used as a task time delay corresponding to a task to be allocated. The mean delay can be calculated as shown in the following formula (6):
$$\bar{T} = \lim_{t \to \infty} \frac{1}{t} \sum_{\tau=1}^{t} \mathbb{E}\left[ Q_k(\tau) + S_k(\tau) \right] \qquad (6)$$
In the above formula (6), $S_k(t)$ is the queue of computing tasks offloaded to the cloud data center server at time $t$, $Q_k(t)$ is the queue of tasks existing on the computing power network node, and $t$ denotes the t-th decision time.
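Both combining strategies of fig. 3, the delay sum of formula (5) and the weighted-average variant, can be sketched as follows; the weight vector is an operator-chosen assumption (taken to sum to 1 so the result is an average delay).

```python
def task_delay_sum(delays):
    # formula (5): the task delay as the plain sum of the four components
    # (first/second transmission delay, first/second computation delay)
    return sum(delays)

def task_delay_weighted(delays, weights):
    # weighted-average variant: each delay component scaled by its weight;
    # weights are assumed to be supplied by the operator and sum to 1
    return sum(d * w for d, w in zip(delays, weights))
```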
After the task delay corresponding to the task to be allocated is determined based on the device parameters and the node parameters, step 103 is performed.
Step 103, processing the task data of the task to be allocated based on a pre-constructed computing power scheduling model to obtain the computing power demand value of the task to be allocated.
The computing power scheduling model refers to a model constructed in advance for calculating the computing power required by a task to be allocated.
The computing power demand value refers to the computing power required to process the task to be allocated. Computing power (also known as hash rate) is a unit of measure of the processing power of the Bitcoin network, i.e., the speed at which a computer computes hash function outputs. The Bitcoin network must perform intensive mathematical and cryptographic operations for security purposes. For example, when the network reaches a hash rate of 10 Th/s, it can perform 10 trillion calculations per second.
After the task to be allocated is acquired, the task data of the task to be allocated can be processed based on the pre-constructed computing power scheduling model to obtain the computing power demand value of the task to be allocated. The process of obtaining the computing power demand value is described in detail below with reference to fig. 4.
Referring to fig. 4, a flowchart illustrating the steps of a method for obtaining a computing power demand value according to an embodiment of the present application is shown; as shown in fig. 4, the method may include steps 401 and 402.
Step 401, obtaining the task data volume corresponding to the task to be allocated.
In this embodiment, after the task to be allocated is acquired, the task data amount of the task to be allocated may be acquired.
After the task data amount of the task to be allocated is acquired, step 402 is performed.
And step 402, processing the task data volume, the computing chip resources and the chip computing capacity value based on the computing power scheduling model to obtain the computing power requirement value of the task to be distributed.
After the task data volume of the task to be allocated is obtained, the computing power scheduling model can be called to process the task data volume, the computing chip resources and the chip computing capacity value so as to obtain the computing power requirement value of the task to be allocated.
In this embodiment, the computing power scheduling model is shown in equation (7) below:
In the above formula (7), Cbr is the total computing power demand, f(x) is the mapping function, and q is the redundant computing power. Taking parallel computing capability as an example, assuming there are three different types of parallel computing chip resources b1, b2 and b3, f(βj) represents the mapping function of the parallel computing capability available to the j-th parallel computing chip, and q2 represents the redundant computing power of parallel computing.
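Since equation (7) itself is not reproduced in this text, the following is only a hypothetical sketch in its spirit: the total demand Cbr is taken as the sum of a mapping f(βj) over the available chip types plus a redundancy term q. The linear mapping used here is an assumption for illustration, not the patent's actual function.

```python
# Hypothetical sketch of a demand model in the spirit of equation (7):
# C_br = sum of f(beta_j) over chip types + redundant computing power q.
# The mapping f(beta) = task_data_volume / beta is illustrative only.

def demand_value(task_data_volume, chip_capabilities, redundancy):
    """chip_capabilities: capability values beta_j of the available chips
    (e.g. b1, b2, b3 for three parallel-computing chip types)."""
    return sum(task_data_volume / beta for beta in chip_capabilities) + redundancy

# e.g. a 120-unit task over three chip types with capabilities 2, 3 and 4,
# plus a redundant allowance q = 5
print(demand_value(120, [2, 3, 4], 5))  # 60 + 40 + 30 + 5 = 135.0
```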
After the task data of the task to be allocated is processed based on the pre-constructed computing power scheduling model to obtain the computing power demand value of the task to be allocated, step 104 is executed.
Step 104, determining a target computing power network node among the computing power network nodes according to the task time delay and the computing power demand value.
The target computing power network node refers to a computing power network node used for processing tasks to be distributed in the computing power network node.
After the task time delay and the calculation force demand value corresponding to the task to be allocated are obtained, the target calculation force network node in the calculation force network nodes can be determined according to the task time delay and the calculation force demand value.
In a specific implementation, candidate computing power network nodes whose idle computing power is larger than the computing power demand value of the task to be allocated can first be screened out from the computing power network nodes according to the remaining computing power of each node, and then the target computing power network node corresponding to the task to be allocated can be screened out from the candidate nodes according to the task time delay and the business demand index. This implementation is described in detail below in conjunction with fig. 5.
Referring to fig. 5, a step flow chart of a target power network node screening method according to an embodiment of the present application is shown, and as shown in fig. 5, the target power network node screening method may include a step 501, a step 502, and a step 503.
Step 501, acquiring idle computing power of the computing power network node.
In this embodiment, the idle computing power refers to the computing power remaining available for the computing power network node in the area where the task to be allocated is located.
After the computing power network node in the area where the task to be allocated is located is obtained, the idle computing power of that node can be acquired. In a specific implementation, the remaining available idle computing power can be reported by the computing power network node itself, or the idle computing power can be calculated from the computing power currently in use and the node's total computing power. The specific acquisition mode of the idle computing power may be determined according to the business requirement, which is not limited in this embodiment.
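The two acquisition modes just described can be sketched as follows; the field names (`idle`, `total`, `used`) are illustrative assumptions, not names from the embodiment.

```python
# Two acquisition modes for idle computing power: the node either reports its
# remaining idle computing power directly, or it is derived from the node's
# total computing power minus the computing power currently in use.

def idle_computing_power(node):
    if "idle" in node:                       # mode 1: reported by the node
        return node["idle"]
    return node["total"] - node["used"]      # mode 2: derived

print(idle_computing_power({"idle": 18}))               # 18
print(idle_computing_power({"total": 50, "used": 21}))  # 29
```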
After the idle computing power of the computing power network node is acquired, step 502 is performed.
Step 502, according to the idle computing power, selecting candidate computing power network nodes with idle computing power larger than the computing power requirement value from the computing power network nodes.
The candidate computing power network node refers to a network node with idle computing power larger than a computing power demand value required by the task to be distributed in the computing power network node in the area where the task to be distributed is located.
After the idle computing power of the computing power network node is obtained, candidate computing power network nodes with the idle computing power larger than the computing power requirement value can be screened out from the computing power network nodes according to the idle computing power of the computing power network node. For example, the computing power network nodes in the area where the tasks to be distributed are located include a node 1, a node 2, a node 3, a node 4 and a node 5, the computing power requirement value of the tasks to be distributed is 15, the idle computing power of the node 1 is 12, the idle computing power of the node 2 is 18, the idle computing power of the node 3 is 19, the idle computing power of the node 4 is 29, and the idle computing power of the node 5 is 44, and at this time, the node 2, the node 3, the node 4 and the node 5 can be used as candidate computing power network nodes.
It will be appreciated that the above examples are only examples listed for better understanding of the technical solution of the embodiments of the present application, and are not to be construed as the only limitation of the present embodiments.
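The screening of step 502 can be sketched directly with the example values above: nodes whose idle computing power exceeds the demand value of 15 become candidates.

```python
# Step 502 in miniature: keep only nodes whose idle computing power is
# larger than the task's computing power demand value.

def candidate_nodes(nodes, demand):
    return [name for name, idle in nodes.items() if idle > demand]

nodes = {"node1": 12, "node2": 18, "node3": 19, "node4": 29, "node5": 44}
print(candidate_nodes(nodes, 15))  # ['node2', 'node3', 'node4', 'node5']
```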
After screening candidate computing power network nodes with idle computing power greater than the computing power requirement value from the computing power network nodes according to the idle computing power, step 503 is executed.
And 503, screening out the target power calculation network node corresponding to the task to be distributed from the candidate power calculation network nodes according to the task time delay and the business demand index.
The business demand index may be used to indicate a demand index of the device corresponding to the task to be allocated. In this example, the business demand index may include an index indicating that tasks are to be processed with optimal resources, and an index indicating that tasks are to be processed at minimum cost.
After candidate computing power network nodes with idle computing power larger than computing power demand value are screened out of computing power network nodes according to idle computing power, target computing power network nodes corresponding to tasks to be distributed can be screened out of the candidate computing power network nodes according to task time delay and service demand indexes. Specifically, the screening of the target power network node may be performed according to the difference of the business requirement indexes, and the implementation process may be described in detail below in conjunction with fig. 6.
Referring to fig. 6, a flowchart illustrating steps of a target computing power network node obtaining method according to an embodiment of the present application is shown, where, as shown in fig. 6, the target computing power network node obtaining method may include a step 601 and a step 602.
Step 601, screening out the computing power network node with the lowest task time delay from the candidate computing power network nodes as the target computing power network node, in the case that the business demand index indicates that tasks are to be processed with optimal resources.
In this embodiment, when the business demand index indicates that the task is to be processed with the optimal resource, the computing power network node with the lowest task time delay may be selected from the candidate computing power network nodes as the target computing power network node, i.e. the task is processed at the fastest speed.
Step 602, in the case that the business demand index indicates that the task is processed at the lowest cost, selecting the computing power network node with the highest task time delay from the candidate computing power network nodes as the target computing power network node.
When the business requirement index indicates that the task is processed at the lowest cost, the computing network node with the highest task time delay can be screened from the candidate computing network nodes to serve as a target computing network node. I.e. to handle tasks in the most cost-efficient way.
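Steps 601 and 602 can be sketched as a single selection rule: the lowest-delay candidate wins under the "optimal resource" index, while the highest-delay candidate wins under the "lowest cost" index (the embodiment equates a higher tolerated delay with lower cost). The index names used here are illustrative.

```python
# Target node selection under the two business demand indexes described in
# steps 601 and 602.

def select_target(candidates, delays, index):
    """candidates: node names; delays: node -> task delay; index: demand index."""
    if index == "optimal_resource":
        return min(candidates, key=lambda n: delays[n])   # fastest processing
    if index == "lowest_cost":
        return max(candidates, key=lambda n: delays[n])   # cheapest processing
    raise ValueError("unknown business demand index")

delays = {"node2": 8, "node3": 5, "node4": 11, "node5": 9}
candidates = ["node2", "node3", "node4", "node5"]
print(select_target(candidates, delays, "optimal_resource"))  # node3
print(select_target(candidates, delays, "lowest_cost"))       # node4
```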
After the target power network node corresponding to the task to be allocated is selected from the candidate power network nodes according to the task delay and the business demand index, step 105 is executed.
Step 105, allocating the task to be allocated to the target computing power network node according to the task weight corresponding to the task to be allocated.
The task weight may be used to indicate the importance of the task to be allocated, in this example, the higher the task weight, the more important the task to be allocated, and conversely, the lower the task weight, the less important the task to be allocated.
After the target power network node corresponding to the task to be allocated is screened out from the candidate power network nodes according to the task time delay and the service demand index, the task to be allocated can be allocated to the target power network node according to the task weight corresponding to the task to be allocated. In this embodiment, the link data stream may be invoked to transfer the task to be allocated according to the task weight, and the implementation process may be described in detail with reference to fig. 7.
Referring to fig. 7, a step flowchart of a method for allocating the task to be allocated provided in an embodiment of the present application is shown; as shown in fig. 7, the method may include step 701 and step 702.
Step 701, storing the task to be allocated and the target computing power network node corresponding to the task to be allocated in a task queue in the form of a key-value pair.
After determining the target computing power network node corresponding to the task to be allocated, the task to be allocated and the target computing power network node corresponding to the task to be allocated may be stored in the task queue in the form of a key value pair. I.e. key-value.
After storing the task to be allocated and the target computing power network node corresponding to the task to be allocated in the task queue in the form of a key value pair, step 702 is performed.
Step 702, calling the Flink stream data, and transmitting the tasks to be distributed in the task queue to a target computing power network node corresponding to the tasks to be distributed according to the order of the task weights from high to low.
After the tasks to be allocated and the target computing power network nodes corresponding to the tasks to be allocated are stored in the task queue in the form of key value pairs, the Flink stream data can be called to transfer the tasks to be allocated in the task queue to the target computing power network nodes corresponding to the tasks to be allocated according to the order of the task weights from high to low.
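Steps 701 and 702 can be sketched in miniature as follows. In the embodiment the key-value store is Redis and the transfer is a Flink data stream; here a plain dict and a sorted list stand in for both, and the task names and weights are invented for illustration.

```python
# Steps 701-702 in miniature: tasks are stored as key-value pairs
# (task -> target node) together with a weight, then dispatched to their
# target nodes in descending order of task weight.

task_queue = {
    "taskA": {"node": "node3", "weight": 2},
    "taskB": {"node": "node5", "weight": 9},
    "taskC": {"node": "node2", "weight": 5},
}

def dispatch(queue):
    order = sorted(queue, key=lambda t: queue[t]["weight"], reverse=True)
    return [(t, queue[t]["node"]) for t in order]

print(dispatch(task_queue))
# [('taskB', 'node5'), ('taskC', 'node2'), ('taskA', 'node3')]
```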
In a specific implementation, the key-value storage form of Redis may be used to interface with Flink, forming a flink-connector-redis. A session table is formed by establishing a key-to-value mapping relationship, for Flink SQL to call. The workflow of the Flink SQL engine may be as shown in fig. 9.
As shown in fig. 9, compiling SQL text/Table API code from input into an executable JobGraph mainly involves the following steps:
1. Convert the SQL text/Table API code into a logical execution plan (Logical Plan);
2. Convert the logical execution plan into a physical execution plan (Physical Plan) via the optimizer;
3. Generate Transformations through code-generation technology, then further compile them into an executable JobGraph and submit it for running.
The JobGraph is an optimization of the StreamGraph, for example determining which operators can be chained, so as to reduce network overhead. In the graph structure of a Flink job, some operators are chained together (reducing serialization and network overhead and improving efficiency).
SQL implementation details:
S1. Convert the SQL text/Table API code into a logical execution plan;
S2. SQL/Table API parses the SQL into an AST (abstract syntax tree) through the Calcite framework;
S3. The SQL Validator obtains metadata from the Catalog, checks the expressions, table information and the like, and converts them into a relational algebra expression (RelNode);
S4. The Optimizer converts the relational algebra expression into an initial logical execution plan.
The embodiment of the application uses the Redis key-value storage form combined with Flink SQL to complete data statistics. One of the key links in a data lake solution is the adaptation between the data storage and the compute engine. Meanwhile, after the business requirement of AI analysis is received, the network tracking-point information and the weights are transmitted to the most suitable computing power network node in the form of a Flink data stream through Redis.
According to the method for allocating computing power network nodes provided by the embodiment of the application, the device parameters of the device corresponding to the task to be allocated and the node parameters of the computing power network nodes in the area where the task to be allocated is located are acquired. The task time delay corresponding to the task to be allocated is determined based on the device parameters and the node parameters. The task data of the task to be allocated is processed based on a pre-constructed computing power scheduling model to obtain the computing power demand value of the task to be allocated. The target computing power network node among the computing power network nodes is determined according to the task time delay and the computing power demand value. The task to be allocated is then allocated to the target computing power network node according to the task weight corresponding to the task. By combining the task time delay of the task to be allocated with the computing power demand value it requires to determine the target computing power network node, the embodiment of the application can select the most suitable computing power network node for the task to be allocated; meanwhile, since the task is allocated to the target computing power network node according to its task weight, the processing efficiency of high-priority tasks can be improved and the user experience enhanced.
Referring to fig. 10, a schematic structural diagram of a computing power network node allocation apparatus provided by an embodiment of the present application is shown, and as shown in fig. 10, the computing power network node allocation apparatus 1000 may include the following modules:
The parameter obtaining module 1001 is configured to obtain an equipment parameter of an equipment corresponding to a task to be allocated, and a node parameter of an computing power network node in an area where the task to be allocated is located;
A time delay determining module 1002, configured to determine a task time delay corresponding to the task to be allocated based on the device parameter and the node parameter;
the calculation power demand acquisition module 1003 is configured to process task data of the task to be allocated based on a pre-constructed calculation power scheduling model, so as to obtain a calculation power demand value of the task to be allocated;
a target node determining module 1004, configured to determine a target computing power network node of the computing power network nodes according to the task delay and the computing power requirement value;
The task allocation module 1005 is configured to allocate the task to be allocated to the target computing power network node according to a task weight corresponding to the task to be allocated.
Optionally, the delay determining module 1002 includes:
A first transmission delay calculation unit, configured to calculate a first transmission delay between the device and the power calculation network node according to a device transmission power, a channel gain parameter between the device and the power calculation network node, and a data transmission bandwidth of a communication link between the device and the power calculation network node;
The first calculation time delay calculation unit is used for calculating and obtaining the first calculation time delay of the task to be distributed according to the calculation force resources distributed to the equipment by the calculation force network node and the proportion of the task to be distributed to the calculation force network node;
The second transmission delay calculation unit is used for calculating and obtaining the second transmission delay between the computing network node and the cloud data center according to the network bandwidth provided by the computing network node for the task to be distributed;
the second calculation time delay calculation unit is used for calculating the second calculation time delay of the cloud data center according to the calculation resources distributed by the cloud data center for the task to be distributed;
And the task time delay determining unit is used for determining the task time delay corresponding to the task to be allocated based on the first transmission time delay, the first calculation time delay, the second transmission time delay and the second calculation time delay.
Optionally, the task delay determining unit includes:
A first task delay obtaining subunit, configured to calculate a delay sum of the first transmission delay, the first calculation delay, the second transmission delay, and the second calculation delay, and use the delay sum as a task delay corresponding to the task to be allocated;
and the second task time delay acquisition subunit is used for carrying out weighted average on the first transmission time delay, the first calculation time delay, the second transmission time delay and the second calculation time delay to obtain average time delay, and taking the average time delay as the task time delay corresponding to the task to be allocated.
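The two aggregation options above can be sketched as follows: the task delay is either the plain sum of the four component delays, or their weighted average. The weight values used here are illustrative; the embodiment does not fix particular weights.

```python
# The two task-delay determination subunits: sum of the first transmission
# delay, first calculation delay, second transmission delay and second
# calculation delay, or their weighted average.

def task_delay_sum(t_tx1, t_c1, t_tx2, t_c2):
    return t_tx1 + t_c1 + t_tx2 + t_c2

def task_delay_weighted(delays, weights):
    return sum(d * w for d, w in zip(delays, weights)) / sum(weights)

print(task_delay_sum(2.0, 3.0, 1.0, 4.0))                       # 10.0
print(task_delay_weighted([2.0, 3.0, 1.0, 4.0], [1, 2, 1, 2]))  # 17/6 ≈ 2.833
```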
Optionally, the computing power demand acquisition module 1003 includes:
The task data volume acquisition unit is used for acquiring task data volume corresponding to the task to be allocated;
And the computing power demand value acquisition unit is used for processing the task data quantity, the computing chip resources and the chip computing capacity value based on the computing power scheduling model to obtain the computing power demand value of the task to be distributed.
Optionally, the target node determining module 1004 includes:
An idle computing power obtaining unit, configured to obtain an idle computing power of the computing power network node;
a candidate node screening unit, configured to screen candidate computing power network nodes with idle computing power greater than the computing power requirement value from the computing power network nodes according to the idle computing power;
and the target node screening unit is used for screening the target computing network node corresponding to the task to be distributed from the candidate computing network nodes according to the task time delay and the business demand index.
Optionally, the target node screening unit includes:
A first target node obtaining subunit, configured to screen, when the service demand index indicates that a task is processed with an optimal resource, from the candidate power network nodes, a power network node with the lowest task delay as the target power network node;
and the second target node obtaining subunit is used for screening out the computing power network node with the highest task time delay from the candidate computing power network nodes as the target computing power network node under the condition that the business demand index indicates that the task is processed at the lowest cost.
Optionally, the task allocation module 1005 includes:
The task storage unit is used for storing the task to be allocated and the target computing power network node corresponding to the task to be allocated in a task queue in the form of key value pairs;
The task allocation unit is used for calling the Flink stream data and transmitting the tasks to be allocated in the task queue to the target computing power network nodes corresponding to the tasks to be allocated according to the sequence of the task weights from high to low.
According to the computing power network node allocation apparatus provided by the embodiment of the application, the device parameters of the device corresponding to the task to be allocated and the node parameters of the computing power network nodes in the area where the task to be allocated is located are acquired. The task time delay corresponding to the task to be allocated is determined based on the device parameters and the node parameters. The task data of the task to be allocated is processed based on a pre-constructed computing power scheduling model to obtain the computing power demand value of the task to be allocated. The target computing power network node among the computing power network nodes is determined according to the task time delay and the computing power demand value. The task to be allocated is then allocated to the target computing power network node according to the task weight corresponding to the task. By combining the task time delay of the task to be allocated with the computing power demand value it requires to determine the target computing power network node, the embodiment of the application can select the most suitable computing power network node for the task to be allocated; meanwhile, since the task is allocated to the target computing power network node according to its task weight, the processing efficiency of high-priority tasks can be improved and the user experience enhanced.
The embodiment of the application also provides electronic equipment, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the computer program realizes the method for distributing the computing power network nodes when being executed by the processor.
Fig. 11 shows a schematic structural diagram of an electronic device 1100 according to an embodiment of the present application. As shown in fig. 11, the electronic device 1100 includes a central processing unit (CPU) 1101 that can perform various suitable actions and processes according to computer program instructions stored in a read-only memory (ROM) 1102 or loaded from a storage unit 1108 into a random access memory (RAM) 1103. The RAM 1103 can also store various programs and data required for the operation of the electronic device 1100. The CPU 1101, the ROM 1102 and the RAM 1103 are connected to one another by a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
Various components in the electronic device 1100 are connected to the I/O interface 1105, including an input unit 1106, e.g., keyboard, mouse, microphone, etc., an output unit 1107, e.g., various types of displays, speakers, etc., a storage unit 1108, e.g., magnetic disk, optical disk, etc., and a communication unit 1109, e.g., network card, modem, wireless communication transceiver, etc. The communication unit 1109 allows the electronic device 1100 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunications networks.
The various processes and procedures described above may be performed by the CPU 1101. For example, the method of any of the embodiments described above may be implemented as a computer software program tangibly embodied on a computer-readable medium, such as the storage unit 1108. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1100 via the ROM 1102 and/or the communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the CPU 1101, one or more steps of the methods described above may be performed.
Additionally, the embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor, implements the above-mentioned method for distributing the computing network nodes.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described by differences from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminals (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the application.
Finally, it is further noted that relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or terminal that comprises that element.
The foregoing has described in detail a computing power network node allocation method, a computing power network node allocation apparatus, an electronic device and a computer-readable storage medium provided by the present application. Specific examples have been applied herein to explain the principles and embodiments of the present application; the above embodiments are provided only to aid understanding of the method and core concept of the present application. Meanwhile, for those skilled in the art, based on the concept of the present application, there will be changes in the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.