Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
In the technical scheme of the application, the related processes such as collection, storage, use, processing, transmission, provision, disclosure and the like of the personal information of the user accord with the regulations of related laws and regulations, and the public order is not violated.
Fig. 1 shows a flowchart of an example of a dynamic load balancing method of an optical communication apparatus according to an embodiment of the present application.
The execution subject of the method of the embodiment of the application can be any controller or processor with computing or processing capability. By introducing artificial intelligence technologies such as a graph neural network and a traffic prediction model, dynamic load balancing of each device in the optical communication network can be realized, effectively improving the resource utilization rate, stability and reliability of the network.
In some examples, the execution subject may be an optical communication load balancing platform, and it may be integrally configured in an electronic device, a terminal or a server by means of software, hardware or a combination of both; the terminal, electronic device or server may be of various types, such as a mobile phone, a tablet computer or a desktop computer.
As shown in fig. 1, in step S110, multi-dimensional time series data of each optical communication device in the optical communication network to be managed is obtained, where the multi-dimensional time series data includes a plurality of historical time steps and corresponding multi-dimensional data.
In some embodiments, the multidimensional data includes traffic load data, device status data, and external environment data. The traffic load data includes bandwidth usage, total traffic, upload rate, download rate, packet loss rate, and transmission delay. The device status data includes at least one of device failure information, computing resource consumption information, and memory resource consumption information. The external environment data includes at least one of temperature, weather forecast information, and voltage fluctuation information.
More specifically, the multi-dimensional time-series data of each optical communication device is collected in real time by deploying a plurality of sensors and monitoring systems in the optical communication network: the traffic load data is collected by a traffic monitoring module, the device status data by a device management system, and the external environment data by an environment monitoring module. Further, all acquired data are stored in units of time steps to form the multi-dimensional time-series data. Thus, by collecting and storing multi-dimensional time-series data, a comprehensive and detailed historical data base is established, providing accurate data support for subsequent traffic prediction and load balancing.
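As a minimal illustration of how such per-time-step records could be organized (the field and structure names below are assumptions made for the sketch, not part of the embodiment):

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class DeviceSample:
    """Multi-dimensional data of one optical communication device at one time step."""
    bandwidth_usage: float       # traffic load data
    total_traffic: float
    upload_rate: float
    download_rate: float
    packet_loss_rate: float
    transmission_delay: float
    fault_flag: int              # device status data
    cpu_usage: float
    memory_usage: float
    temperature: float           # external environment data
    voltage_fluctuation: float

# time-step-indexed store: time_step -> device_id -> sample
timeseries: Dict[int, Dict[str, DeviceSample]] = {}

def record(time_step: int, device_id: str, sample: DeviceSample) -> None:
    """Store one collected sample keyed by time step, as described above."""
    timeseries.setdefault(time_step, {})[device_id] = sample
```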
It should be noted that the device status data and the external environment data in the multi-dimensional data can effectively improve the accuracy of the future traffic prediction result and the robustness of the risk-coping capability. With respect to the device status data, by monitoring device failure information, computing resource utilization and storage resource utilization in real time, performance degradation caused by device failure or resource pressure can be effectively identified and localized, and abnormal changes of network traffic caused by abnormal working conditions can be accurately analyzed, thereby improving the accuracy of the traffic prediction result. In addition, with respect to the external environment data, interference of abnormal environmental events with the stable operation of the devices is rapidly captured by monitoring the ambient temperature, weather forecast information and voltage fluctuation, which further improves the accuracy of the traffic prediction result.
According to the embodiment, multi-dimensional time-series data covering traffic load data, device status data and external environment data is introduced, so that the operating state of each optical communication device in different time periods can be reflected more comprehensively and accurately. By analyzing and predicting the multi-dimensional time-series data, the traffic load condition of each optical communication device in a future period of time can be accurately predicted.
In step S120, for each historical time step, multi-dimensional data of each optical communication device corresponding to the historical time step in the multi-dimensional time sequence data is extracted, a corresponding historical graph structure is constructed according to each extracted multi-dimensional data, and then node characteristics of each graph node in the historical graph structure are updated through the graph neural network.
Here, the history graph structure includes a plurality of graph nodes and edge connections. The node features of each graph node are respectively defined by the multi-dimensional data of the corresponding optical communication device, including the traffic load, device status and external environment data of that optical communication device at the historical time step. Each edge connects the graph nodes corresponding to optical communication devices having a link transmission relationship, and the edge connections in the graph structure are constructed according to the actual link topology.
Here, the types of the graph neural network (Graph Neural Network, GNN) may be diverse, such as GAT (Graph Attention Network) or GCN (Graph Convolutional Network). More preferably, the graph neural network employs a deep graph convolution model, more details of which will be developed below in connection with other examples.
In some embodiments, the history graph structure is input into a trained graph neural network, and the features of each node are updated through multi-layer propagation via a message passing mechanism, so that the interactions and associations between the devices in the network can be reflected.
By constructing and updating the history graph structure of each historical time step, the embodiment can capture the topological relations and data transmission characteristics of the optical communication devices at different time steps, and can comprehensively capture the link relations between the devices. Updating the node features with the graph neural network fully considers the mutual influence among the devices, fully mines the hidden relations in the multi-dimensional data, and improves the accuracy and expressiveness of the feature representation.
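As an illustrative sketch (device names, feature layout and helper names are assumptions), one history graph structure for a single time step could be assembled from the collected feature vectors and the actual link topology as follows:

```python
import numpy as np

def build_history_graph(features, links):
    """features: device_id -> 1-D feature vector (traffic load, device status,
    external environment) at one historical time step;
    links: (device_a, device_b) pairs taken from the actual link topology."""
    nodes = sorted(features)
    index = {d: i for i, d in enumerate(nodes)}
    x = np.stack([np.asarray(features[d], dtype=float) for d in nodes])  # node feature matrix
    adj = np.zeros((len(nodes), len(nodes)))
    for a, b in links:                          # one undirected edge per optical link
        adj[index[a], index[b]] = adj[index[b], index[a]] = 1.0
    return nodes, x, adj
```

The resulting node feature matrix and adjacency matrix can then be fed to the graph neural network (for example the graph convolution layer sketched further below) to update the node features.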
In step S130, for each optical communication apparatus, a time-series fusion feature is determined based on the node features of the corresponding graph nodes of each updated history graph structure, and the time-series fusion feature is input to the traffic prediction model to determine predicted traffic load data corresponding to a preset time period in the future.
In some embodiments, for each optical communication device, the corresponding graph node is extracted from each updated history graph structure, and the node features of each such graph node are spliced and fused with the corresponding time step, so as to obtain the corresponding time-series fusion feature. Here, the time-series fusion feature comprehensively considers the multi-dimensional data, the temporal changes and the global relations of the network, so that the future traffic load trend of the device can be reflected more accurately. Then, the time-series fusion feature is input to a pre-trained traffic prediction model to predict the traffic load data in a preset future time period. The traffic prediction model can adopt various time-series models, such as LSTM or GRU; extracting the time-series fusion feature through such a model effectively integrates the temporal dynamics of the node features and improves the timeliness and accuracy of the traffic prediction result. In some examples of embodiments of the application, the traffic prediction model may employ an LSTM model based on a multi-head attention mechanism, and more details will be developed below in connection with other examples.
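A minimal sketch of assembling the time-series fusion feature for one device from its updated node features; splicing each node feature with its time-step index is one plausible reading of the fusion described above, and the exact splicing scheme is an assumption:

```python
import numpy as np

def fuse_time_series(node_feats_per_step):
    """node_feats_per_step: list, over historical time steps, of the updated node
    feature vector of one device taken from the corresponding history graph.
    Each vector is spliced with its normalised time-step index, and the rows are
    stacked into the sequence fed to the traffic prediction model."""
    steps = len(node_feats_per_step)
    rows = [np.concatenate([np.asarray(f, dtype=float), [t / max(steps - 1, 1)]])
            for t, f in enumerate(node_feats_per_step)]
    return np.stack(rows)                      # shape: (steps, feature_dim + 1)
```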
In step S140, a dynamic load balancing operation is performed based on the predicted traffic load data corresponding to each optical communication apparatus.
It should be noted that various non-limiting dynamic load balancing operations may be used herein, for example, an optimization algorithm (such as linear programming, genetic algorithm, deep reinforcement learning, etc.) may be used to make dynamic load balancing decisions based on the predicted traffic load data, to determine an optimal resource allocation scheme for each optical communication device. Therefore, according to the optimization decision result, specific load balancing operation is executed by adjusting the load distribution parameters of each optical communication device in real time, so that the balance of network flow distribution is ensured, and the utilization efficiency of network resources can be improved.
With respect to implementation details of the above step S140, in some embodiments, the predicted traffic load data corresponding to each optical communication device is input to the load balancing model to determine the load optimization adjustment instruction for at least one target optical communication device. Here, the load balancing model employs a meta-learning model and the load optimization adjustment instructions include at least one of adjusting traffic routing, enabling standby nodes, dynamically adjusting bandwidth allocation, and adjusting transmission paths.
It should be noted that the core of the meta-learning model (Meta-Learning Model) is to train on a plurality of tasks so that the model can quickly adapt to new tasks. In the embodiment of the application, the dynamic load balancing operation is implemented with the meta-learning model, which provides adaptability and generalization capability over a plurality of tasks. More specifically, the plurality of tasks are defined as load balancing requirements under different network load scenarios or network load types. Accordingly, the data sample set of the meta-learning model includes a plurality of data subsets, each data subset having a respective load type. Specifically, the plurality of data subsets includes a peak load data subset, a valley load data subset, a regular load data subset, and a gradual load data subset. A peak load scenario may represent a scenario of high bandwidth usage, high latency and rising packet loss rate, e.g., where the network carries a large amount of data traffic during peak hours (such as the morning and evening peaks of weekdays) and requires fast response and load distribution. A valley load scenario may represent a low bandwidth usage, low latency scenario, such as where network traffic is light during off-peak periods (e.g., late night or holidays) but the devices still need to operate efficiently. A regular load scenario may represent a scenario where traffic is smooth and bandwidth demand is ordinary, such as network traffic on a common workday. A gradual load scenario may represent a gradual change in traffic, either a gradual increase or decrease, such as the change in traffic before a peak or near the end of a workday.
It should be appreciated that the requirements for load balancing control differ across load scenarios, and adaptive regulation for the different scenarios is therefore needed. Here, the meta-learning model is adopted so that good performance can be achieved over a plurality of tasks, and fine-tuning on each task produces an optimal load distribution strategy, thereby realizing scenario-adaptive dynamic load balancing. Thus, by training on multiple scenario data sets (e.g., peak load, valley load, regular load, and gradual load), the meta-learning model can quickly adapt to new tasks and new environments, ensuring efficient execution of the load balancing policy. In addition, based on its multi-task learning capability, the model can anticipate and cope with various complex network environments and emergencies, improving the overall adaptability and robustness of the network.
FIG. 2 illustrates a schematic diagram of structural connections of an example of a meta learning model according to an embodiment of the present application.
As shown in fig. 2, meta-learning model 200 includes random forest classifier 210, meta-learner 220, and a plurality of base model modules (231, 233, ..., 23n), each having a respective load type, e.g., base model module 231 for processing predicted peak load scenarios, base model module 233 for processing predicted valley load scenarios, and so forth.
FIG. 3 illustrates an example operational flow diagram for determining load optimization adjustment instructions via a load balancing model according to an example of an embodiment of the application.
As shown in fig. 3, in step S310, a target load type matching each of the input predicted traffic load data is determined by a random forest classifier.
In some embodiments, the random forest classifier performs feature extraction and analysis on the predicted traffic load data in each optical communication device in the input network through the combination of a plurality of decision trees, and determines a target load type matched with the input data from peak load type, valley load type, regular load type and gradual load type.
By integrating the judgment results of a plurality of decision trees, the random forest classifier can effectively improve the accuracy of load type classification, reduce misclassification, rapidly process the input data and determine the target load type in a timely manner.
In step S320, each predicted traffic load data is processed according to a target base model module that matches the target load type among the plurality of base model modules, so as to obtain a corresponding load optimization adjustment instruction.
According to the embodiment, the predicted traffic load data of each optical communication device in the network is analyzed through the random forest classifier, so that the corresponding load type is obtained, and the matched basic model module is called to output the corresponding load optimization adjustment instruction, so that the load optimization adjustment strategy is ensured to be suitable for the current network load condition or type, and the optimization effect of the network global load balance is improved. In addition, along with the real-time change of the network flow, the system can rapidly identify the load type and adjust the load distribution strategy, and the dynamic response capability of the network balance control is improved.
In some examples of embodiments of the application, the construction of the meta-learning model includes two successive stages: a first-stage construction operation for constructing the random forest classifier and a second-stage construction operation for updating the respective base model modules.
For specific implementation details of the first-stage construction operation, in some embodiments, the data set is divided into a training set and a test set. A random forest classifier model is trained using the training data set: specifically, a plurality of samples are randomly drawn from the training data set with replacement to generate a plurality of subsets. For each subset, a decision tree is built separately; specifically, feature subsets are randomly selected and the features are recursively partitioned until a stop condition (e.g., maximum depth or minimum number of samples) is met. Further, all decision trees are combined into a random forest. After training is completed, the random forest classifier is applied to actual traffic prediction data to identify the corresponding load type (such as peak load, valley load, regular load or gradual load), so that the load type corresponding to future traffic data can be accurately predicted.
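A hedged sketch of this first-stage construction using a standard random forest implementation; the feature dimensionality, hyper-parameters and the placeholder data are assumptions, and scikit-learn's bootstrap sampling plays the role of the sampling-with-replacement step described above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: feature vectors derived from predicted traffic load data; y: load-type labels
# (0 = peak, 1 = valley, 2 = regular, 3 = gradual -- label encoding is illustrative)
X, y = np.random.rand(1000, 8), np.random.randint(0, 4, 1000)   # placeholder data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, max_depth=10, min_samples_leaf=5,
                             bootstrap=True, random_state=0)    # trees built on bootstrap subsets
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
target_load_type = clf.predict(X_test[:1])[0]   # load type for new predicted traffic data
```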
For specific implementation details of the second-stage construction operation, the corresponding base model modules are selected and trained according to the output result (load type) of the classifier, so that each base model module is optimized for a specific load type, achieving dynamic adjustment of the load balancing strategy.
In the embodiment of the application, the meta-learning model is trained and constructed in stages, and the meta-learner and the base model modules are trained after the training and construction of the random forest classifier are finished. Therefore, by training and optimizing the classifier in advance, different load types can be accurately identified, providing an accurate basis for the selection of the base model module; the base model module is then dynamically selected according to the output result of the classifier, so that the system can flexibly cope with different load conditions and the overall adaptability of the system is improved. In addition, the first stage only needs to focus on training the classifier, while the second stage focuses on optimizing the base model modules, so the tasks of each stage are clearer and more concentrated; the staged construction reduces the coupling between the classifier and the base model modules and lowers the implementation difficulty and training complexity.
FIG. 4 illustrates an operational flow diagram of an example of a second stage build operation in the construction of a meta-learning model in accordance with an embodiment of the present application.
As shown in fig. 4, in step S410, model parameters of each basic model module are initialized based on the shared parameters of the meta learner.
Here, by initializing each base model module with the shared parameters of the meta-learner, common optimization over a plurality of tasks is realized and good initial parameters are provided, so that the base models can quickly adapt to different load types.
In step S420, for each subset of data, a plurality of gradient descent optimizations are performed on a base model module having a matching load type using the subset of data to update model parameters of the base model module.
More specifically, each base model module is optimized on the data subset of its corresponding load type, and its model parameters are updated. By way of example, the corresponding base model modules are respectively trained and optimized on the peak load data subset, the valley load data subset and so on, and their parameters are updated so that they achieve good performance on the data subsets of their respective tasks.
Here, through the initialization with the shared parameters, each base model module has good adaptability in the initial stage and can quickly adapt to different load types. In addition, the shared parameters are jointly optimized over a plurality of tasks, which improves the quality of the model's initial parameters and enables the base models to reach an optimal state in fewer iterations.
In step S430, the meta-loss over all the base model modules is calculated, and in a case where it is detected that the meta-loss does not satisfy the meta-loss convergence condition, the shared parameters are updated based on the calculated meta-loss, and the model parameters of the respective base model modules are reset using the updated shared parameters so as to iteratively update the model parameters of the respective base model modules.
In some embodiments, after training of each base model module is completed, the model parameters and losses of each base model module are passed to a meta-learner update module, the shared parameters are updated by a meta-learning algorithm (e.g., Model-Agnostic Meta-Learning, MAML), and the optimized meta-learner parameters are used for the next round of base model training. In this way, the shared parameters are updated through gradient descent on the meta-loss until the meta-loss convergence condition is satisfied, for example, until the change of the meta-loss between two successive iterations is smaller than a threshold value.
Here, through the calculation of the meta-loss and gradient descent optimization, the global performance of the model over multiple tasks is ensured to be optimal. In addition, the optimization of each basic model module on each load type obviously improves the accuracy and effect of the load balancing strategy.
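A compact sketch of this staged update loop, written with a toy quadratic loss and a simpler first-order (Reptile-style) outer update instead of full second-order MAML; all names, the loss and the convergence tolerance are assumptions:

```python
import numpy as np

def inner_update(theta, subset, lr=0.01, steps=3):
    """A few gradient-descent steps of one base model module on its load-type
    data subset, starting from the shared parameters theta (toy squared loss)."""
    phi = theta.copy()
    X, y = subset
    for _ in range(steps):
        grad = 2 * X.T @ (X @ phi - y) / len(y)        # gradient of mean squared error
        phi -= lr * grad
    return phi

def meta_train(subsets, dim, outer_lr=0.1, rounds=100, tol=1e-4):
    """Outer loop: re-initialise every base module from the shared parameters,
    adapt each on its own data subset, then move the shared parameters and
    repeat until the meta-loss change falls below the tolerance."""
    theta, prev_loss = np.zeros(dim), np.inf
    for _ in range(rounds):
        phis = [inner_update(theta, s) for s in subsets]
        meta_loss = np.mean([np.mean((X @ p - y) ** 2) for p, (X, y) in zip(phis, subsets)])
        if abs(prev_loss - meta_loss) < tol:           # meta-loss convergence condition
            break
        theta += outer_lr * (np.mean(phis, axis=0) - theta)   # first-order outer update
        prev_loss = meta_loss
    return theta
```

Here each data subset (peak, valley, regular, gradual) would be one (X, y) pair; a full MAML implementation would back-propagate the meta-loss through the inner updates rather than use the averaging step shown here.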
In some examples of embodiments of the present application, the loss of all base model modules may be comprehensively considered by means of weighted summation, resulting in meta-loss. More specifically, the meta-loss for all base model modules is calculated by:
$\mathcal{L}_{meta} = \sum_{i=1}^{K} w_i \, \mathcal{L}_{T_i}(\theta_i)$ Formula (1);
$\mathcal{L}_{T_i}(\theta_i) = \frac{1}{N_i} \sum_{j=1}^{N_i} \ell\big(f_{\theta_i}(x_j^{(i)}), y_j^{(i)}\big)$ Formula (2);
$w_i = \frac{\|\nabla_{\theta_i} \mathcal{L}_{T_i}(\theta_i)\|}{\sum_{k=1}^{K} \|\nabla_{\theta_k} \mathcal{L}_{T_k}(\theta_k)\|}$ Formula (3);
In the formulas, $\mathcal{L}_{meta}$ represents the meta-loss over all base model modules, and $K$ represents the total number of base model modules contained in the meta-learning model; $T_i$ represents the $i$-th load type, $w_i$ represents the weight corresponding to $T_i$, and $\mathcal{L}_{T_i}$ represents the loss function of the base model module corresponding to $T_i$; $f_{\theta_i}$ is the base model module trained on the data subset corresponding to $T_i$, with parameters $\theta_i$; $\mathcal{L}_{T_i}(\theta_i)$ represents the task loss of the load type corresponding to $T_i$; $N_i$ represents the total number of samples in the data subset corresponding to $T_i$; $(x_j^{(i)}, y_j^{(i)})$ represents the $j$-th data sample and its corresponding label in the data subset corresponding to $T_i$; $\ell(\cdot)$ represents the loss function for a single sample; $\|\nabla_{\theta_i} \mathcal{L}_{T_i}(\theta_i)\|$ represents the gradient norm of the loss function of the base model module corresponding to $T_i$, and the sum of the gradient norms of the loss functions of all base model modules serves as a normalization factor.
In the meta-loss function used in the embodiment of the application, model performance is optimized from a global perspective by comprehensively considering the losses of all base model modules. Furthermore, the dynamically adjustable weight $w_i$ of each task or base model module ensures that the influence of different tasks is balanced during optimization, improving the overall optimization effect. More specifically, the weight $w_i$ is dynamically adjusted according to the gradient of each task, so that the model adaptively attends to important tasks during training and its performance under different load types is improved.
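A short sketch of this weighted-summation meta-loss, with each weight taken as the module's loss-gradient norm normalised by the sum of all gradient norms; the per-module losses and gradients are assumed to be supplied by the training loop:

```python
import numpy as np

def meta_loss(task_losses, task_grads):
    """Weighted sum of the base-module losses; weight_i = ||grad_i|| / sum_k ||grad_k||."""
    norms = np.array([np.linalg.norm(np.asarray(g)) for g in task_grads])
    weights = norms / norms.sum()
    return float(np.dot(weights, np.asarray(task_losses, dtype=float)))

# example with three base modules (e.g. peak, valley and regular load types)
print(meta_loss([0.8, 0.3, 0.5], [[0.2, -0.1], [0.05, 0.02], [0.1, 0.1]]))
```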
In some examples of embodiments of the application, the base model module employs a reinforcement learning model. In this case, the loss function of the reinforcement learning model may be the corresponding policy gradient loss (Policy Gradient Loss), which optimizes the parameters of the policy network so that the actions taken maximize the expected reward, and which supports the iterative computation of the meta-loss.
More specifically, the state of the reinforcement learning model is defined by the predicted traffic load data of each optical communication device in the optical communication network and the overall load balancing index, overall network delay, and overall bandwidth utilization of the optical communication network, the actions of the reinforcement learning model are defined by the load optimization adjustment instructions for one or more optical communication devices, and the rewards of the reinforcement learning model are defined by the overall load balancing index, overall network delay, and the degree of optimization of the overall bandwidth utilization of the optical communication network after the actions are performed.
According to the embodiment, the load balancing regulation and control scheme is integrated into the reinforcement learning model, and through interactive learning with the environment, the load balancing strategy can be adjusted in real time, the load balancing strategy is suitable for the network load condition which changes rapidly, and the dynamic response capability of the system is improved. Through continuous interactive learning and optimization, the basic model module can show good robustness in the face of different load changes and network conditions, and through selection and execution of an optimal strategy, the bandwidth utilization rate can be improved, network delay is reduced, resource allocation is optimized, and overall network performance is improved. For different load types, each corresponding basic model module can adaptively learn an optimal strategy to ensure optimization of system performance under various load conditions. Therefore, through detailed state, action and rewarding design, the self-adaptive reinforcement learning model is adopted as a basic model module, and the dynamic load balancing of the high-efficiency and stable optical communication device can be realized through real-time learning and dynamic optimization.
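A small sketch of how the state and action spaces described above could be encoded; the field names, device identifiers and instruction labels are purely illustrative:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class NetworkState:
    """State observed by the reinforcement learning model."""
    predicted_loads: List[float]   # predicted traffic load per optical communication device
    balance_index: float           # overall load balancing index
    network_delay: float           # overall network delay
    bandwidth_util: float          # overall bandwidth utilisation

# An action is one or more load optimisation adjustment instructions, for example:
Action = Tuple[str, ...]
example_actions: List[Action] = [
    ("adjust_traffic_routing", "device-3", "path-B"),
    ("enable_standby_node", "device-17"),
    ("set_bandwidth_allocation", "device-3", "2.5G"),
]
```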
In some examples of embodiments of the application, the reward function of the reinforcement learning model is:
$R(s_t, a_t, s_{t+1}) = \omega_1 \cdot \dfrac{B_{s_t} - B_{s_{t+1}}}{B_{ref}} + \omega_2 \cdot \dfrac{D_{s_t} - D_{s_{t+1}}}{D_{ref}} + \omega_3 \cdot \dfrac{U_{s_{t+1}} - U_{s_t}}{U_{ref}}$ Formula (4);
$B_{s} = \sqrt{\dfrac{1}{N} \sum_{i=1}^{N} \big(L_{i,s} - \bar{L}_{s}\big)^2}$ Formula (5);
$\bar{L}_{s} = \dfrac{1}{N} \sum_{i=1}^{N} L_{i,s}$ Formula (6);
$D_{s} = \dfrac{1}{N} \sum_{i=1}^{N} d_{i,s}$ Formula (7);
$U_{s} = \dfrac{1}{N} \sum_{i=1}^{N} u_{i,s}$ Formula (8);
$\omega_1 = \omega_1^{0} + \beta_1 \cdot \dfrac{B_{s_t}}{B_{ref}}$ Formula (9);
$\omega_2 = \omega_2^{0} + \beta_2 \cdot \dfrac{D_{s_t}}{D_{ref}}$ Formula (10);
$\omega_3 = \omega_3^{0} + \beta_3 \cdot \Big(1 - \dfrac{U_{s_t}}{U_{ref}}\Big)$ Formula (11);
In the formulas, $R(s_t, a_t, s_{t+1})$ represents the reward generated after executing the action in the previous state $s_t$ and reaching the new state $s_{t+1}$; $B_{s_t}$ is the overall load balancing index corresponding to the state before the action is executed, and $B_{s_{t+1}}$ is the overall load balancing index of the new state after the action is executed; $D_{s_t}$ is the overall network delay of the state before the action is executed, and $D_{s_{t+1}}$ is the overall network delay of the new state after the action is executed; $U_{s_t}$ is the overall bandwidth utilization of the state before the action is executed, and $U_{s_{t+1}}$ is the overall bandwidth utilization of the new state after the action is executed; $\omega_1$, $\omega_2$ and $\omega_3$ are the dynamic weight coefficients for reaching the new state $s_{t+1}$ after the action is executed in the previous state $s_t$; $\omega_1^{0}$, $\omega_2^{0}$ and $\omega_3^{0}$ are the initial weights, $\beta_1$, $\beta_2$ and $\beta_3$ are the adjusting coefficients, and $B_{ref}$, $D_{ref}$ and $U_{ref}$ are respectively the overall load balancing index reference value, the network delay reference value and the bandwidth utilization reference value; $N$ indicates the total number of optical communication devices in the optical communication network, $L_{i,s}$ is the traffic load of the $i$-th optical communication device in state $s$, and $\bar{L}_{s}$ is the average traffic load in state $s$; $d_{i,s}$ is the network delay of the $i$-th optical communication device in state $s$; $u_{i,s}$ is the bandwidth utilization of the $i$-th optical communication device in state $s$.
In the reward function provided in this embodiment, the overall load balance index, network delay and improvement degree of bandwidth utilization rate are comprehensively considered in the reward function, so that the load balance strategy can more effectively and comprehensively optimize each network performance index, and excessive fluctuation of the reward value can be avoided. The reward function reflects the system state change after the action is executed in real time, guides the reinforcement learning model to learn the optimal strategy, improves the dynamic response capability of the system, enables the system to adapt to the load change quickly, adjusts the load balancing strategy in time, and avoids network congestion and resource waste. In addition, through the design of each dynamic weight coefficient, the reward function can adaptively adjust the optimization target, so that the system can be excellent under different load conditions, for example, under high load conditions, the system can pay more attention to the improvement of the load balance index and the network delay, and under low load conditions, the system can pay more attention to the improvement of the bandwidth utilization rate, and the dynamic load balance capacity of the network system is optimized.
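A minimal sketch of such a reward computation, under the assumption that the overall load balancing index is the standard deviation of per-device loads and that the delay and utilisation indices are per-device means; the weight and reference values are placeholders:

```python
import numpy as np

def overall_metrics(loads, delays, utils):
    """Overall load balancing index (std of per-device loads, lower is more
    balanced), overall network delay and overall bandwidth utilisation."""
    return float(np.std(loads)), float(np.mean(delays)), float(np.mean(utils))

def reward(prev, new, w0=(1.0, 1.0, 1.0), beta=(0.5, 0.5, 0.5), refs=(1.0, 1.0, 1.0)):
    """Reward for moving from state `prev` to state `new`, each a
    (loads, delays, utils) triple; the weights grow with current imbalance and
    delay and with unused bandwidth, mirroring the dynamic-weight idea above."""
    b0, d0, u0 = overall_metrics(*prev)
    b1, d1, u1 = overall_metrics(*new)
    b_ref, d_ref, u_ref = refs
    w1 = w0[0] + beta[0] * b0 / b_ref            # favour balancing under high imbalance
    w2 = w0[1] + beta[1] * d0 / d_ref            # favour delay reduction when delay is high
    w3 = w0[2] + beta[2] * (1.0 - u0 / u_ref)    # favour utilisation when load is low
    return (w1 * (b0 - b1) / b_ref
            + w2 * (d0 - d1) / d_ref
            + w3 * (u1 - u0) / u_ref)
```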
In some examples of embodiments of the present application, the graph neural network employs a deep graph convolution model that includes a plurality of graph convolution layers to update the node features of each graph node in the history graph structure by:
$m_{v}^{(l)} = \mathrm{AGG}\Big(\Big\{\dfrac{h_{u}^{(l)}}{\sqrt{d_{v}\, d_{u}}} : u \in \mathcal{N}(v)\Big\}\Big)$ Formula (12);
$h_{v}^{(l+1)} = \sigma\big(W^{(l)} m_{v}^{(l)} + b^{(l)}\big)$ Formula (13);
In the formulas, $h_{v}^{(l)}$ represents the node feature of graph node $v$ at the $l$-th graph convolution layer (the features of all nodes form the node feature matrix $H^{(l)}$ of that layer), $\sigma$ represents the activation function, and $\mathcal{N}(v)$ represents the set of neighbor graph nodes of graph node $v$; $d_{v}$ and $d_{u}$ are respectively the degrees of graph nodes $v$ and $u$; $W^{(l)}$ and $b^{(l)}$ are respectively the weights and bias of the $l$-th graph convolution layer; $h_{u}^{(l)}$ represents the node feature of graph node $u$ at the $l$-th layer, and $\mathrm{AGG}(\cdot)$ represents the mean aggregation function.
In the embodiment of the application, the multi-dimensional data are fused through the graph convolution layers, so that the workload condition of each optical communication device in the optical communication network can be comprehensively reflected, and each updated node feature not only contains the device's own load information but also fuses the features of its neighbor nodes. Specifically, the graph convolution layer can effectively aggregate the information of neighbor nodes through the mean aggregation function, so that the features of each node depend not only on the node's own data but are also influenced by its neighbors, better reflecting the relevance between nodes in the network. By considering the load conditions of neighbor nodes, the load balancing strategy has higher integrity and globality, local optima are avoided, load balancing is performed from a global view, and the utilization efficiency of network resources is improved.
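A compact sketch of one such graph convolution layer with degree-normalised mean aggregation, corresponding to formulas (12) and (13); the ReLU activation and the array layout are assumptions:

```python
import numpy as np

def gcn_layer(h, adj, W, b):
    """h: (N, F) node features, adj: (N, N) link adjacency, W: (F, F'), b: (F',).
    Mean-aggregate degree-normalised neighbour features, then apply the layer
    weights, bias and activation function."""
    deg = adj.sum(axis=1)                        # node degrees d_v
    m = np.zeros_like(h, dtype=float)
    for v in range(len(h)):
        neigh = np.nonzero(adj[v])[0]
        if len(neigh) == 0:
            continue
        norm = h[neigh] / np.sqrt(deg[v] * deg[neigh])[:, None]
        m[v] = norm.mean(axis=0)                 # AGG: mean aggregation
    return np.maximum(0.0, m @ W + b)            # sigma: ReLU activation
```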
In some examples of embodiments of the application, the traffic prediction model employs an LSTM model based on a multi-head attention mechanism. Fig. 5 shows a schematic structural connection diagram of an example of a traffic prediction model according to an embodiment of the present application.
As shown in fig. 5, the traffic prediction model 500 includes an input layer 510, an LSTM layer 520, a multi-headed attention layer 530, and an output layer 540.
The input layer 510 is for receiving the timing fusion feature.
Here, the time-series fusion feature carries a large amount of information: it reflects the historical traffic features, device status features, external environment features and corresponding temporal changes of the optical communication device, as well as the features of other related optical communication devices in the network, and can therefore comprehensively reflect the network state.
The LSTM layer 520 is configured to process the time-series fusion feature and capture the dependencies between time steps:
$h_{t} = \mathrm{LSTM}(x_{t}, h_{t-1})$ Formula (14);
In the formula, $x_{t}$ represents the fused feature of the corresponding time step in the time-series fusion feature, $h_{t-1}$ is the hidden state of time step $t-1$, and $h_{t}$ is the hidden state of time step $t$.
The LSTM layer can capture the dependency relationship in the long-time sequence, ensure the effective extraction and utilization of the time sequence characteristics, and improve the prediction capability of future flow trend.
The multi-headed attention layer 530 is used to capture timing characteristics from different angles and levels:
$H = [h_{1}, h_{2}, \ldots, h_{T}]$ Formula (15);
$Q = K = V = H$ Formula (16);
$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\Big(\dfrac{Q K^{\top}}{\sqrt{d_{k}}}\Big) V$ Formula (17);
$\mathrm{head}_{i} = \mathrm{Attention}\big(Q W_{i}^{Q}, K W_{i}^{K}, V W_{i}^{V}\big)$ Formula (18);
$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_{1}, \ldots, \mathrm{head}_{h})\, W^{O}$ Formula (19);
$H_{att} = \mathrm{MultiHead}(Q, K, V)$ Formula (20);
In the formulas, $H_{att}$ is the multi-head attention feature output of the multi-head attention layer; $\mathrm{head}_{i}$ and $\mathrm{head}_{h}$ respectively represent the output features of the $i$-th and $h$-th attention heads, $h$ represents the total number of attention heads, $\mathrm{Concat}(\cdot)$ represents the feature splicing operation, and $\mathrm{MultiHead}(\cdot)$ represents the multi-head attention mechanism function; $\mathrm{Attention}(\cdot)$ represents the attention mechanism function, and $W^{O}$ is the output weight matrix of the multi-head attention; $W_{i}^{Q}$ is the query matrix weight of the $i$-th attention head, $W_{i}^{K}$ is the key matrix weight of the $i$-th attention head, and $W_{i}^{V}$ is the value matrix weight of the $i$-th attention head; $Q$, $K$ and $V$ respectively represent the query matrix, key matrix and value matrix, here all taken from the sequence $H$ of LSTM hidden states; $d_{k}$ represents the dimension of the key vector; $\mathrm{softmax}(\cdot)$ represents the softmax activation function.
By a multi-head attention mechanism, time sequence characteristics can be captured from different angles and levels, and the expression capability of the model is enhanced. Therefore, the multi-head attention mechanism enables the model to understand and process time sequence data from different perspectives, the model can capture the characteristics which have important influence on the prediction result more finely, and the comprehensiveness and the prediction reliability of the characteristic extraction are improved.
The output layer 540 is configured to map the output of the multi-head attention mechanism through a fully connected layer, so as to obtain the final predicted traffic load data:
$\hat{y}_{t} = \mathrm{LeakyReLU}\big(W_{o} H_{att} + b_{o}\big)$
In the formula, $\hat{y}_{t}$ is the traffic prediction result for time step $t$, $W_{o}$ and $b_{o}$ are respectively the weights and bias of the fully connected layer, and $\mathrm{LeakyReLU}(\cdot)$ represents the LeakyReLU activation function.
In addition, in order to handle the non-negative nature of traffic data while retaining useful gradients, a LeakyReLU activation function is used. LeakyReLU allows a small negative slope, alleviating the problem that the output of the traditional ReLU is zero in the negative region, so neurons remain active even when the input is negative; this effectively alleviates the dead-neuron problem and improves the stability of the model. Moreover, because neurons cannot be completely deactivated by negative inputs, the model can better adapt to different input data, improving overall robustness. On the other hand, the LeakyReLU activation function introduces nonlinearity, enabling the model to better fit complex data distributions, predict complex traffic load conditions more accurately, and optimize the load balancing strategy. Therefore, accurate traffic prediction is realized, the system can respond quickly to network load changes, and by dynamically adjusting the load balancing strategy, the system keeps running efficiently under different load conditions, improving overall network performance.
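A hedged end-to-end sketch of the traffic prediction model (input layer, LSTM layer, multi-head self-attention and fully connected output with LeakyReLU), written in PyTorch; the dimensions, head count and prediction horizon are illustrative choices, not values fixed by the embodiment:

```python
import torch
import torch.nn as nn

class TrafficPredictor(nn.Module):
    """LSTM + multi-head self-attention + fully connected output with LeakyReLU,
    mirroring layers 510-540 of the traffic prediction model 500."""
    def __init__(self, feat_dim, hidden_dim=64, num_heads=4, horizon=1):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.fc = nn.Linear(hidden_dim, horizon)
        self.act = nn.LeakyReLU(negative_slope=0.01)

    def forward(self, x):                   # x: (batch, time_steps, feat_dim)
        h, _ = self.lstm(x)                 # formula (14): hidden state per time step
        a, _ = self.attn(h, h, h)           # self-attention with Q = K = V = h
        return self.act(self.fc(a[:, -1]))  # predicted traffic load for the horizon

# usage sketch: 8 devices, 24 historical time steps, 12-dimensional fused features
model = TrafficPredictor(feat_dim=12)
pred = model(torch.randn(8, 24, 12))
```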
It should be noted that, for simplicity of description, the foregoing method embodiments are all illustrated as a series of acts combined, but it should be understood and appreciated by those skilled in the art that the present application is not limited by the order of acts, as some steps may be performed in other orders or concurrently in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present application. In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
Fig. 6 shows a block diagram of an example of a dynamic load balancing system of an optical communication apparatus according to an embodiment of the present application.
As shown in fig. 6, the dynamic load balancing system 600 of the optical communication apparatus includes a data acquisition unit 610, a feature extraction unit 620, a traffic prediction unit 630, and a load balancing unit 640.
The data obtaining unit 610 is configured to obtain multi-dimensional time sequence data of each optical communication device in the optical communication network to be managed, where the multi-dimensional time sequence data includes a plurality of historical time steps and corresponding multi-dimensional data, the multi-dimensional data includes traffic load data, device state data and external environment data, and the traffic load data includes bandwidth usage rate, total traffic, upload rate, download rate, packet loss rate and transmission delay.
The feature extraction unit 620 is configured to extract, for each of the historical time steps, multi-dimensional data of each optical communication device corresponding to the historical time step in the multi-dimensional time sequence data, construct a corresponding historical graph structure according to each of the extracted multi-dimensional data, and then update node features of each graph node in the historical graph structure through a graph neural network, where the historical graph structure includes a plurality of graph nodes and edge connections, and node features of each graph node are respectively defined by the multi-dimensional data of the corresponding optical communication device, and each edge connection is used to connect the graph nodes corresponding to the optical communication devices having a link transmission relationship.
The traffic prediction unit 630 is configured to determine, for each optical communication device, a time-series fusion feature based on node features of corresponding graph nodes of each updated historical graph structure, and input the time-series fusion feature to a traffic prediction model to determine predicted traffic load data corresponding to a future preset time period.
The load balancing unit 640 is configured to perform a dynamic load balancing operation based on the predicted traffic load data corresponding to each of the optical communication devices.
In some embodiments, embodiments of the present application provide a non-transitory computer readable storage medium having stored therein one or more programs including execution instructions that are readable and executable by an electronic device (including, but not limited to, a computer, a server, or a network device, etc.) for performing the steps of the dynamic load balancing method of any of the above-described optical communication apparatuses of the present application.
In some embodiments, embodiments of the present application also provide a computer program product comprising a computer program stored on a non-volatile computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the steps of the dynamic load balancing method of any of the above-mentioned optical communication apparatuses.
In some embodiments, the present application further provides an electronic device comprising at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of a dynamic load balancing method of an optical communication apparatus.
Fig. 7 is a schematic hardware structure of an electronic device for performing a dynamic load balancing method of an optical communication apparatus according to another embodiment of the present application, where, as shown in fig. 7, the device includes:
one or more processors 710, and a memory 720, one processor 710 being illustrated in fig. 7.
The apparatus for performing the dynamic load balancing method of the optical communication device may further include an input device 730 and an output device 740.
Processor 710, memory 720, input device 730, and output device 740 may be connected by a bus or other means, for example in fig. 7.
The memory 720 is used as a non-volatile computer readable storage medium for storing a non-volatile software program, a non-volatile computer executable program, and modules, such as program instructions/modules corresponding to the dynamic load balancing method of the optical communication device in the embodiment of the present application. The processor 710 executes various functional applications of the server and data processing by running non-volatile software programs, instructions and modules stored in the memory 720, i.e., implements the dynamic load balancing method of the optical communication device of the above-described method embodiment.
The memory 720 may include a storage program area that may store an operating system, application programs required for at least one function, and a storage data area that may store data created according to the use of the electronic device, etc. In addition, memory 720 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 720 may optionally include memory located remotely from processor 710, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 730 may receive input digital or character information and generate signals related to user settings and function control of the electronic device. The output device 740 may include a display device such as a display screen.
The one or more modules are stored in the memory 720 that, when executed by the one or more processors 710, perform the dynamic load balancing method of the optical communication device in any of the method embodiments described above.
The product can execute the method provided by the embodiment of the application, and has the corresponding functional modules and beneficial effects of the execution method. Technical details not described in detail in this embodiment may be found in the methods provided in the embodiments of the present application.
The electronic device of the embodiments of the present application exists in a variety of forms including, but not limited to:
(1) Mobile communication devices, which are characterized by mobile communication functionality and are aimed at providing voice, data communication. Such terminals include smart phones, multimedia phones, functional phones, low-end phones, and the like.
(2) Ultra mobile personal computer equipment, which belongs to the category of personal computers, has the functions of calculation and processing and generally has the characteristic of mobile internet surfing. Such terminals include PDA, MID, and UMPC devices, etc.
(3) Portable entertainment devices such devices can display and play multimedia content. The device comprises an audio player, a video player, a palm game machine, an electronic book, an intelligent toy and a portable vehicle navigation device.
(4) Other on-board electronic devices with data interaction functions, such as on-board devices mounted on vehicles.
The apparatus embodiments described above are merely illustrative, wherein the elements illustrated as separate elements may or may not be physically separate, and the elements shown as elements may or may not be physical elements, may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or may be implemented by hardware. Based on such understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the related art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the method described in the respective embodiments or some parts of the embodiments.
It should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present application, and not for limiting the same, and although the present application has been described in detail with reference to the above-mentioned embodiments, it should be understood by those skilled in the art that the technical solution described in the above-mentioned embodiments may be modified or some technical features may be equivalently replaced, and these modifications or substitutions do not make the essence of the corresponding technical solution deviate from the spirit and scope of the technical solution of the embodiments of the present application.