Detailed Description
The following description of embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. The embodiments described are evidently only some, and not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the disclosed embodiments without inventive effort fall within the scope of the invention.
The flow diagrams depicted in the figures are merely illustrative; not all of the illustrated elements and operations/steps are necessarily included, nor must they be performed in the order described. For example, some operations/steps may be split, combined, or partially combined, so the actual execution order may vary with circumstances.
It is also to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, which is a flowchart of a TSN-based deterministic network delay scheduling method according to an embodiment of the present application, the method includes steps S100 to S600.
S100, performing graph theory analysis on a TSN network topological structure to obtain a network node connection relation and link bandwidth information;
It can be understood that the execution body of the present invention may be a TSN-based deterministic network delay scheduling device, or a terminal or a server, which is not limited herein. The embodiment of the invention is described by taking a server as the execution body as an example.
Specifically, topology scanning is performed on the TSN network to obtain the number of nodes and the node identification information in the network. All switching nodes and end nodes are identified by the topology scan and assigned unique identifiers, which are organized into an initial node set. The connection relations among the nodes in the initial node set are then analyzed, and an adjacency matrix between the nodes is established. The adjacency matrix is a standard graph-theoretic structure for representing direct connections between nodes in a network: its rows and columns correspond to node identities, and each element indicates whether a direct link exists between the corresponding pair of nodes. The basic topology of the TSN network is thus described by the adjacency matrix, and the network is abstracted into a network connection topology graph in the form of an undirected graph. A minimum spanning tree algorithm is applied to the network connection topology graph to obtain a backbone link distribution diagram of the network. The purpose of the minimum spanning tree is to identify the critical links that form the lowest-cost set of paths ensuring network connectivity, while providing the backbone structure for subsequent sub-network partitioning. Based on the generated backbone link distribution diagram, the whole network is divided into a plurality of independent sub-network units to facilitate management and resource scheduling. During sub-network division, the boundary nodes of each sub-network unit are identified. Boundary nodes are the key nodes through which a sub-network communicates with other sub-networks; identifying them makes the interconnection relationships among sub-networks explicit, so that an inter-sub-network interconnection relation matrix can be generated to describe the topology by which different sub-networks interconnect through their boundary nodes. The bandwidth capacity of each link is then measured according to the inter-sub-network interconnection relation matrix to obtain a link bandwidth distribution diagram. The bandwidth capacity measurement evaluates the physical and logical bandwidth of each link and records its maximum bandwidth and actual available bandwidth. This step yields a bandwidth distribution map covering the whole network and provides the necessary bandwidth constraints for resource allocation during scheduling. A reference time period is calculated based on the link bandwidth distribution diagram. The reference time period is the basis of periodic scheduling in TSN and determines the scheduling period and time synchronization of time-sensitive traffic. On this basis, the period is divided into a preset number of time slice units to obtain a time slice division scheme, which provides a global time resource framework defining the transmission time window of each traffic flow in the network. After the time slice division is completed, initial time slice resources are allocated to each sub-network unit according to the division scheme to form a concrete scheduling plan, and finally the network node connection relation and the link bandwidth information are generated.
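As an illustration of the graph-theory analysis in step S100, the following is a minimal Python sketch (assuming the networkx library and a hypothetical link list; node names, bandwidth values, and the degree threshold used to pick boundary nodes are illustrative assumptions, not values taken from the embodiment):

```python
import networkx as nx

# Hypothetical TSN topology: (node_a, node_b, bandwidth in Mbit/s)
links = [("sw1", "sw2", 1000), ("sw1", "sw3", 1000),
         ("sw2", "sw4", 100), ("sw3", "sw4", 1000), ("sw4", "es1", 100)]

G = nx.Graph()
for a, b, bw in links:
    # Use the inverse of the bandwidth as the link weight so that the
    # minimum spanning tree prefers high-bandwidth backbone links.
    G.add_edge(a, b, bandwidth=bw, weight=1.0 / bw)

adjacency = nx.to_numpy_array(G, weight=None)             # node adjacency matrix (0/1 entries)
backbone = nx.minimum_spanning_tree(G, weight="weight")   # backbone link distribution

# Boundary-node heuristic (illustrative): nodes with a high backbone degree are
# treated as connection points between sub-network units.
boundary_nodes = [n for n, d in backbone.degree() if d >= 3]
print(adjacency, list(backbone.edges()), boundary_nodes)
```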
Step S200, inputting the network node connection relation and the link bandwidth information into a multi-level priority perception network, extracting the delay requirement and bandwidth demand characteristics of each data stream, and generating a data stream priority queue and a resource demand index matrix;
Specifically, the network node connection relation is subjected to matrix transformation: through structured processing of the node adjacency matrix, the interrelations and connection patterns among the nodes are extracted to form a network topology feature matrix. At the same time, the link bandwidth information is normalized, mapping the bandwidth value of each link into the standardized range [0, 1] so that the subsequent feature extraction module can uniformly process data of different scales, yielding a bandwidth feature vector. The network topology feature matrix and the bandwidth feature vector are input into the feature extraction layer of the multi-level priority perception network. The feature extraction layer adopts a three-layer convolutional structure; each convolutional layer uses a ReLU activation function to enhance the nonlinear expression capability of the network, while batch normalization improves training stability and convergence speed. The data stream basic feature map output by this process is a preliminary description of key characteristics of the input network, such as delay and bandwidth. The data stream basic feature map is then input into the traffic classification layer of the multi-level priority perception network, which comprises two fully connected layers and one Softmax layer. Classifying the data streams through this deep structure effectively identifies their service characteristics and divides them into critical service flows, periodic service flows, and burst service flows. The classification result is encoded as a traffic class identification matrix describing the class to which each data stream belongs and its characteristics. Feature vectors are then constructed from the traffic class identification matrix, extracting parameters in four dimensions: end-to-end delay requirement, bandwidth requirement, burst characteristic, and periodic characteristic of each data stream. These parameters are obtained by jointly analyzing the traffic classification result and the initial network characteristics, and finally a traffic feature vector set is generated that comprehensively expresses the key attributes of each data stream to be considered during resource scheduling. The traffic feature vector set is input into the priority mapping layer of the multi-level priority perception network, which consists of a fully connected layer and a Sigmoid activation layer. This structure refines the traffic features and calculates the delay sensitivity weight, bandwidth demand weight, and priority level weight of each data stream; these weight parameters reflect the different sensitivities of the data streams to delay, bandwidth, and priority. A resource demand index is then calculated for each data stream from these priority quantization parameters by weighted summation, combining the delay sensitivity weight, bandwidth demand weight, and priority level weight in proportion so as to quantify the total demand of the data stream for network resources. The resource demand indexes are sorted in descending order of value to construct the data stream priority queue.
In the data stream priority queue, high-demand data streams are scheduled preferentially, while low-demand data streams are processed in subsequent time slices. The priority queue is grouped according to a preset time window size to form a grouping matrix, which facilitates reasonable resource allocation to data streams of different priorities during time slice scheduling. Each element of the grouping matrix is then normalized, mapping the resource demand indexes into the [0, 1] interval to eliminate differences in resource demand magnitude among data streams, and finally the resource demand index matrix is generated.
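A minimal numerical sketch of the priority-queue construction described above is given below (pure Python/NumPy; the weight values, window size, and example streams are illustrative assumptions):

```python
import numpy as np

# Per-stream priority quantization parameters: [delay sensitivity, bandwidth demand, priority level]
weights = np.array([[0.9, 0.4, 0.8],   # stream 0: latency-critical
                    [0.3, 0.7, 0.5],   # stream 1: bandwidth-heavy periodic
                    [0.6, 0.2, 0.3]])  # stream 2: bursty, lower priority

coeff = np.array([0.5, 0.3, 0.2])      # assumed relative importance of the three weights
demand_index = weights @ coeff          # weighted summation -> resource demand index

priority_queue = np.argsort(-demand_index)        # descending order of demand
window = 2                                         # preset time window size (streams per group)
groups = [priority_queue[i:i + window] for i in range(0, len(priority_queue), window)]

# Normalize the demand indexes into [0, 1] to build the resource demand index matrix
normalized = (demand_index - demand_index.min()) / (demand_index.max() - demand_index.min())
print(priority_queue, groups, normalized)
```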
Step S300, inputting a data flow priority queue and a resource demand index matrix into a deep Q network, establishing a state-action mapping relation comprising a network load state and a time slice allocation action, and generating an initial time delay scheduling strategy model;
Specifically, feature fusion is performed on the data stream priority queue and the resource demand index matrix: the priority information of each data stream is fused with its corresponding resource demand index to form a high-dimensional feature representation, which constitutes the state space of the deep Q network. The fused features are expressed as a network state feature matrix containing the priority weight, bandwidth demand, and corresponding resource occupation information of every data stream in the current network, thereby comprehensively describing the current network load state. To realize discrete scheduling of time slices, the network state feature matrix is encoded with time slice allocation actions. The time slice allocation actions are quantized into discrete levels representing the resource allocation proportion of each time slice. This encoding yields an action space matrix in which each row corresponds to a specific time slice allocation action; all possible time slice allocation schemes together form the complete action space supporting action selection and optimization in the deep Q network. The constructed network state feature matrix is input into the value evaluation module of the deep Q network, where the data passes through three fully connected layers in sequence, each using a ReLU activation function to enhance nonlinear expression capability and ensure stable gradient propagation. The value evaluation module outputs a state value evaluation result, which represents the potential scheduling benefit of the current network state under different time slice allocation schemes. Based on the state value evaluation result and the action space matrix, an action value function is established, composed of an immediate reward term, a discount factor, and a future reward term. The immediate reward term reflects the direct contribution of the current time slice allocation action to delay optimization; the discount factor balances the influence of current and future rewards so that the model accounts for long-term scheduling effects; and the future reward term evaluates the cumulative benefit expected in subsequent time steps. By considering these three parts together, the action value function can accurately evaluate the value of different time slice allocation actions. After the action value function is built, the deep Q network is trained through a policy iteration mechanism. During training, actions are selected with an epsilon-greedy strategy: the time slice allocation action with the highest value is selected with high probability, while other actions are explored with a smaller probability to avoid falling into a local optimum. The epsilon-greedy strategy yields a selection probability distribution over time slice allocation actions, which guides the parameter optimization of the network. According to the generated action selection probability distribution, the weight parameters of the deep Q network are optimized by stochastic gradient descent.
During optimization, the action value function of each sample forms part of the loss function, and the network parameters are continuously adjusted through the backpropagation algorithm to better fit the relation between states and action values, yielding the state-action mapping model of the deep Q network. To enhance the generalization ability of the model, the outputs of the state-action mapping model are stored in an experience replay buffer. The experience replay buffer records the state transition sample of each time step, including the current state, the action taken, the immediate reward, and the next state, thereby forming a complete transition probability matrix. Based on the transition probability matrix, online learning is performed on the deep Q network; by continuously resampling data from the buffer and updating the network parameters, the model can adapt to dynamically changing network load conditions. After multiple rounds of online learning and optimization, the resulting state-action mapping model effectively reflects the relation between the network load state and time slice allocation actions, forming the initial delay scheduling policy model.
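The following PyTorch sketch illustrates the kind of deep Q network, epsilon-greedy selection, and experience replay buffer described above (layer sizes, epsilon, discount factor, and learning rate are illustrative assumptions; the embodiment does not specify them):

```python
import random
from collections import deque
import torch
import torch.nn as nn

STATE_DIM, NUM_ACTIONS = 32, 5     # assumed sizes of the fused state vector and action levels

# Value evaluation module: three fully connected layers with ReLU activations
q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, NUM_ACTIONS),
)
optimizer = torch.optim.SGD(q_net.parameters(), lr=1e-3)   # stochastic gradient descent
replay = deque(maxlen=10_000)   # stores (state, action, reward, next_state) tuples of tensors
gamma, epsilon = 0.95, 0.1      # discount factor, exploration rate

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy selection over time slice allocation actions."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(batch_size: int = 32) -> None:
    """One SGD update on a random minibatch of stored transitions."""
    if len(replay) < batch_size:
        return
    s, a, r, s_next = map(torch.stack, zip(*random.sample(replay, batch_size)))
    target = r + gamma * q_net(s_next).max(dim=1).values.detach()   # immediate + discounted future reward
    q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```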
Step S400, according to an initial time delay scheduling strategy model, calculating queuing time delay, transmission time delay and processing time delay of each data stream, and generating a time slice resource reallocation scheme;
Specifically, the key delay-related parameters are extracted from the initial delay scheduling policy model, including the path hop count, link bandwidth, and node processing capacity of each data stream. Integrating this information yields a delay calculation parameter set that describes the performance constraints and resource characteristics along the entire transmission path of a data stream from its source node to its destination node. The queue cache data in the delay calculation parameter set is analyzed, focusing on the buffer queue length and service processing rate of each node in the network; by modeling the traffic queuing condition at each node, a queuing delay distribution matrix of the data streams is obtained. Queuing delay is an important component of the total delay: its value depends on the degree of queue backlog and the service capacity of a node in a given time period, and the queuing delay distribution matrix comprehensively reflects the traffic queuing characteristics of all nodes in the network. After the queuing delay distribution matrix is obtained, the transmission of data packets over each link is modeled to calculate the transmission delay. The transmission delay is determined by the size of the data stream and the bandwidth of the link; analyzing the links one by one with a calculation model based on the physical transmission rate yields a link transmission delay matrix, which describes the transmission time consumed by a data stream on each link segment of its path. Load analysis is then performed on the processing capacity of the network nodes, according to the link transmission delay matrix, to calculate the processing delay. The processing delay is directly related to a node's processing capacity and current load; analyzing the performance of each node when processing traffic yields a node processing delay matrix, which reflects the time cost of each node in the network when handling data streams. Combining it with the queuing delay distribution matrix and the link transmission delay matrix allows the delay performance of each data stream across the whole network to be estimated more accurately. The queuing delay distribution matrix, link transmission delay matrix, and node processing delay matrix are superimposed to obtain an end-to-end total delay matrix that reflects the total delay of every data stream from source to destination; data streams whose total delay exceeds a threshold are marked, generating a delay-overrun traffic set. Based on the delay-overrun traffic set, a time slice reallocation coefficient is calculated for each sub-network. The reallocation coefficient is designed to be inversely proportional to the end-to-end total delay, namely, the traffic with larger total delay preferentially obtains more resources during time slice resource allocation, while traffic with smaller delay has its allocation reduced, thereby achieving dynamic optimization of global resources. A time slice adjustment factor matrix is generated from the reallocation coefficients and used to guide the update of the time slice allocation scheme.
Based on the time slice adjustment factor matrix, the initial time slice allocation scheme is optimized and adjusted to obtain the time slice resource reallocation scheme.
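A minimal sketch of the end-to-end delay decomposition used in step S400 is shown below (Python; the M/M/1 queuing approximation, packet sizes, per-node rates, and the threshold are illustrative assumptions consistent with the formulas given later in this description):

```python
def queuing_delay(service_rate: float, arrival_rate: float) -> float:
    """M/M/1 approximation of the per-node queuing delay (seconds)."""
    assert service_rate > arrival_rate, "node would be unstable"
    return 1.0 / (service_rate - arrival_rate)

def transmission_delay(packet_bits: float, link_bps: float) -> float:
    return packet_bits / link_bps

def processing_delay(packet_bits: float, node_capacity_bps: float) -> float:
    return packet_bits / node_capacity_bps

# One hypothetical flow crossing three hops: (service_rate, arrival_rate, link_bps, node_capacity_bps)
path = [(2000.0, 1500.0, 1e9, 5e9),
        (1800.0, 1200.0, 1e8, 5e9),
        (2500.0, 1000.0, 1e9, 5e9)]
packet_bits = 1500 * 8

total = sum(queuing_delay(mu, lam) + transmission_delay(packet_bits, bw)
            + processing_delay(packet_bits, cap)
            for mu, lam, bw, cap in path)
delay_threshold = 5e-3                       # assumed end-to-end budget (5 ms)
overrun = total > delay_threshold            # flows marked here feed the reallocation step
print(total, overrun)
```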
Step S500, generating a global resource allocation matrix based on a time slice resource reallocation scheme;
Specifically, resource utilization statistics are calculated for each sub-network unit in the time slice resource reallocation scheme; the time slice allocation and the corresponding bandwidth occupation ratio within each sub-network unit are analyzed to form a sub-network resource utilization state matrix. Each row of the matrix corresponds to a sub-network unit, and each column describes a specific resource utilization characteristic, such as bandwidth occupation or time slice distribution ratio. A traffic load balancing index is constructed based on the sub-network resource utilization state matrix. The traffic load balancing index is an important measure of how evenly resources are distributed among the sub-networks; it is calculated through a comprehensive analysis of the resource utilization and traffic demand of each sub-network. Using this index, the degree of imbalance in resource allocation across the network is identified and a sub-network load distribution table is generated, recording the load pressure and resource utilization of each sub-network. After the sub-network load distribution table is generated, sub-network units whose traffic load balancing index exceeds a preset threshold are selected for analysis. These sub-network units face resource shortages or unreasonable allocation, and their specific needs are quantified by calculating their bandwidth resource gaps. The bandwidth resource gap analysis models the difference between the current traffic demand and the existing bandwidth resources; the result is presented as a bandwidth resource demand matrix describing the amount of additional bandwidth each sub-network requires to meet its traffic demand. Multi-objective optimization is then performed on the bandwidth resource demand matrix: the bandwidth borrowing cost weight, resource utilization weight, and network risk weight are input into an optimization function that jointly considers the allocation cost of bandwidth resources, the utilization efficiency of network-wide resources, and potential network risk, generating an optimal bandwidth borrowing scheme. The objective of the optimization function is to minimize the cost of resource borrowing while balancing the load pressure of each sub-network, thereby achieving a reasonable distribution of global resources. The resource allocation parameters of the bandwidth borrowing optimization scheme are input into a resource allocator, which reallocates bandwidth resources across sub-networks and generates a bandwidth multiplexing matrix recording the specific borrowing and multiplexing relations among sub-networks. To fit the scheduling mechanism of the time-sensitive network, the bandwidth multiplexing matrix undergoes time slice mapping conversion, mapping the bandwidth resource allocation onto time slice resources to form a time slice mapping table. The time slice mapping table describes the distribution of bandwidth resources in the time dimension, so that the resource allocation satisfies not only the bandwidth requirements but also the periodic scheduling requirements of TSN.
Based on the time slice mapping table, a spatial arrangement calculation is performed on the resources of each sub-network and a three-dimensional resource allocation tensor is constructed, yielding a resource spatial distribution map that integrates the time, space, and resource dimensions and describes the distribution of resources across the whole network. The resource spatial distribution map is then reduced in dimension: during the dimension reduction, the information of the time, space, and resource dimensions is integrated, redundant data is eliminated, and the key features are retained to form the global resource allocation matrix.
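The three-dimensional allocation tensor and its reduction to a global resource allocation matrix can be sketched as follows (NumPy; the tensor shape and the choice of aggregating over the time dimension are illustrative assumptions):

```python
import numpy as np

T, S, R = 8, 4, 3           # assumed: 8 time slices, 4 sub-networks, 3 resource types
rng = np.random.default_rng(0)

# Three-dimensional resource allocation tensor: allocation[t, s, r] is the share of
# resource type r granted to sub-network s in time slice t.
allocation = rng.random((T, S, R))
allocation /= allocation.sum(axis=1, keepdims=True)   # each slice's resources sum to 1 per type

# Dimension reduction: aggregate over the time dimension while keeping the
# space (sub-network) and resource dimensions, giving the global allocation matrix.
global_matrix = allocation.sum(axis=0) / T
print(global_matrix.shape)   # (4, 3): one row per sub-network, one column per resource type
```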
And step S600, performing performance evaluation on the global resource allocation matrix to generate a network performance score, and performing parameter feedback update on the initial time delay scheduling policy model according to the network performance score to obtain a target time delay scheduling policy model.
Specifically, performance indexes are calculated from the global resource allocation matrix: by analyzing the resource utilization and scheduling effect of the network, three core indexes are computed, namely the end-to-end delay compliance rate, the link utilization, and the priority satisfaction. Together these metrics describe how well the network supports low-latency transmission, utilizes link bandwidth efficiently, and meets the priority requirements of different data streams, forming a performance data set. The performance data set is input into a multi-index comprehensive evaluation module, in which the network performance score is obtained by weighted summation, assigning weight coefficient alpha to the end-to-end delay compliance rate, weight coefficient beta to the link utilization, and weight coefficient gamma to the priority satisfaction. The network performance score is a normalized scalar reflecting the performance of the current scheduling policy in each dimension. Threshold detection is then performed on the network performance score: the calculated score is compared with a preset performance benchmark threshold to generate a performance difference matrix that records the deviation between actual and target performance, each entry reflecting the remaining optimization margin of a specific index. By analyzing the performance difference matrix, the shortcomings of the current scheduling policy in delay control, resource utilization, priority satisfaction, and so on are located. Based on the performance difference matrix, the key parameters of the initial delay scheduling policy model are adjusted through feedback. The backward error is calculated through the immediate reward term, discount factor, and future reward term of the deep Q network; the performance difference, taken as the target value, is compared with the model's prediction to generate a parameter gradient matrix describing the direction and magnitude of the required optimization. Following the update rule of stochastic gradient descent, the parameter gradient matrix is used to update the weights of the deep Q network. Stochastic gradient descent allows the model to gradually approach the global optimum in the multidimensional parameter space, effectively improving its prediction capability and decision efficiency. The optimized deep Q network weights describe the state-action mapping relation more accurately and generate a better policy in the next scheduling calculation. Using the updated network weights, delay scheduling is recomputed for the data streams and a new state-action mapping table is generated. The state-action mapping table records the optimal time slice allocation action for each network state and provides the execution scheme for actual scheduling. The new mapping table is verified online by applying it to verification samples and calculating the average reward value, so as to evaluate whether the optimized policy significantly improves network performance.
During online verification, the improvement brought by the policy is judged by comparing the average reward values before and after optimization. If the verification result shows that the optimized policy meets the preset performance target, the optimized scheduling policy parameters are fixed into the deep Q network to form the final target delay scheduling policy model. The resulting model achieves more efficient scheduling decisions, adapts to dynamically changing network environments, and provides consistent delay guarantees and resource utilization efficiency under different load conditions.
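A minimal sketch of the weighted performance score and threshold check in step S600 is given below (the weight values alpha, beta, gamma and the benchmark thresholds are illustrative assumptions):

```python
def network_performance_score(delay_compliance: float, link_utilization: float,
                              priority_satisfaction: float,
                              alpha: float = 0.5, beta: float = 0.3,
                              gamma: float = 0.2) -> float:
    """Weighted summation of the three core indexes, each expected in [0, 1]."""
    return alpha * delay_compliance + beta * link_utilization + gamma * priority_satisfaction

score = network_performance_score(0.92, 0.75, 0.88)
benchmarks = {"delay": 0.95, "utilization": 0.80, "priority": 0.90}     # assumed targets
performance_difference = {"delay": benchmarks["delay"] - 0.92,
                          "utilization": benchmarks["utilization"] - 0.75,
                          "priority": benchmarks["priority"] - 0.88}
print(score, performance_difference)   # positive entries indicate remaining optimization margin
```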
In the embodiment of the invention, the TSN network topology is modeled by introducing a graph theory analysis method, and sub-network division is performed with a minimum spanning tree algorithm, which effectively reduces network complexity and improves the accuracy of resource allocation. A multi-level priority perception network is adopted to intelligently classify the data streams; through the three layers of feature extraction, traffic classification, and priority mapping, the delay demand characteristics of different data streams are accurately identified. A state-action mapping relation is established based on a deep Q network, and the scheduling policy is dynamically optimized by reinforcement learning, overcoming the limitations of traditional static scheduling methods. An end-to-end delay assessment mechanism is designed that jointly considers queuing delay, transmission delay, and processing delay, enabling accurate assessment and dynamic adjustment of network performance. A cross-sub-network bandwidth resource multiplexing mechanism is provided, which remarkably improves network resource utilization efficiency through bandwidth borrowing and time slice reallocation. Finally, a multi-index performance evaluation system is constructed so that, through the parameter feedback update mechanism, the scheduling policy can be continuously optimized and the stability of network performance maintained.
In a specific embodiment, the process of executing step S100 may specifically include the following steps:
performing topology scanning on the TSN network to acquire the number of nodes and node identification information in the network, obtaining an initial node set;
Analyzing the connection relation of the nodes in the initial node set, establishing a node adjacency matrix to obtain a network connection topological graph, and performing minimum spanning tree calculation on the network connection topological graph to obtain a network backbone link distribution diagram;
Dividing the sub-network based on the network backbone link distribution diagram to obtain a sub-network unit set, and identifying boundary nodes of each sub-network unit in the sub-network unit set to obtain an interconnection relation matrix between the sub-networks;
According to the interconnection relation matrix among the sub-networks, measuring the bandwidth capacity of each link to obtain a link bandwidth distribution diagram;
Calculating a reference time period based on a link bandwidth distribution diagram, dividing the reference time period into a preset number of time slice units to obtain a time slice division scheme, and distributing initial time slice resources to each sub-network unit according to the time slice division scheme to obtain a network node connection relation and link bandwidth information.
Specifically, topology scanning is performed on the TSN network to identify the number of nodes and the node identification information. Network devices are queried step by step via a network protocol (such as LLDP, the Link Layer Discovery Protocol) to obtain the identification information (such as MAC or IP addresses) of all nodes, which are organized into an initial node set $N = \{n_1, n_2, \dots, n_k\}$, where $n_i$ denotes the $i$-th node in the network and $k$ is the total number of nodes. The connection relations of the nodes in the initial node set are analyzed to establish a node adjacency matrix $A$, a $k \times k$ two-dimensional matrix whose element $A_{ij}$ represents the connection state between nodes $n_i$ and $n_j$: if there is a direct physical link between $n_i$ and $n_j$, then $A_{ij} = 1$, otherwise $A_{ij} = 0$. For example, in a network with 5 nodes, $A$ is the symmetric $5 \times 5$ zero-one matrix whose off-diagonal entries mark exactly the directly linked node pairs.
The network connection topology graph constructed from the adjacency matrix clearly describes the network structure. To extract the critical link structure of the network, its minimum spanning tree is computed. The minimum spanning tree is a connected subgraph that contains all nodes and has the smallest total edge weight. Taking the weight of a link as the inverse of its bandwidth, $w_{ij} = 1 / B_{ij}$, where $B_{ij}$ is the bandwidth between nodes $n_i$ and $n_j$, a minimum spanning tree algorithm (e.g., Prim's or Kruskal's algorithm) is used to find the lowest-cost network backbone link distribution diagram. After the backbone link distribution diagram is obtained, the network is divided into sub-networks based on the backbone links. By analyzing the key nodes in the minimum spanning tree (i.e., highly connected central nodes), a set of sub-network units $S = \{S_1, S_2, \dots, S_M\}$ is partitioned, where $S_m$ denotes the $m$-th sub-network unit. Each sub-network unit in the set is analyzed, and its boundary node set is identified by detecting the nodes that connect it to other sub-networks. Boundary nodes are the bridges between different sub-networks; through these nodes an inter-sub-network interconnection relation matrix $E$ is constructed, where $E_{mn}$ denotes the number of boundary-node links between sub-networks $S_m$ and $S_n$. Based on the interconnection relation matrix $E$, the bandwidth capacity of each link is measured to obtain the link bandwidth distribution diagram. The available bandwidth capacity is obtained by querying the bandwidth parameters of the devices via a network management protocol (e.g., SNMP) or by monitoring the link utilization in real time and then calculating:
$B_{\mathrm{avail},ij} = B_{\max,ij} \times (1 - U_{ij})$
where $U_{ij}$ is the current utilization of the link between nodes $n_i$ and $n_j$. Based on the link bandwidth distribution diagram, the reference time period $T$ is calculated. The reference time period is the basis of global scheduling in the network, and its value is determined by the link with the minimum bandwidth:
$T = \mathrm{MTU} / B_{\min}$
where $\mathrm{MTU}$ is the maximum transmission unit of the data streams and $B_{\min}$ is the minimum link bandwidth in the network. The reference time period $T$ is divided into $K$ time slice units to obtain the time slice division scheme:
$\Delta t = T / K$
where $\Delta t$ is the length of each time slice. According to the time slice division scheme, initial time slice resources are allocated to each sub-network. Assuming that the total number of time slices required by sub-network $S_m$ is $d_m$, its time slice resource ratio is:
$r_m = d_m / \sum_{n=1}^{M} d_n$
Through this allocation, the network node connection relation and the link bandwidth information are obtained.
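As a small numerical illustration of the reference-period and time-slice computations above (Python; MTU, bandwidths, slice count, and per-sub-network demands are assumed values):

```python
MTU_BITS = 1522 * 8                      # assumed largest frame size of the data streams
link_bandwidths_bps = [1e9, 1e9, 1e8]    # hypothetical link capacities
b_min = min(link_bandwidths_bps)

T = MTU_BITS / b_min                     # reference time period, seconds
K = 16                                   # preset number of time slice units
delta_t = T / K                          # length of each time slice

demands = {"S1": 6, "S2": 4, "S3": 6}    # time slices required by each sub-network
total = sum(demands.values())
ratios = {s: d / total for s, d in demands.items()}   # initial time slice resource ratios
print(T, delta_t, ratios)
```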
In a specific embodiment, the process of executing step S200 may specifically include the following steps:
Performing matrix transformation on the network node connection relation to generate a network topology feature matrix, and performing normalization processing on link bandwidth information to obtain a bandwidth feature vector;
inputting the network topology feature matrix and the bandwidth feature vector into a feature extraction layer of a multi-level priority perception network, wherein the feature extraction layer comprises three convolution layers, and each convolution layer uses a ReLU activation function and batch normalization processing to obtain a data flow basic feature map;
Inputting the data flow basic feature map into a flow classification layer of the multi-level priority perception network, wherein the flow classification layer comprises two full-connection layers and one Softmax layer, and classifying the data flows into key service flows, periodic service flows, and burst service flows to obtain a flow category identification matrix;
constructing a feature vector of the flow category identification matrix, extracting parameters of four dimensions of end-to-end delay requirement, bandwidth requirement, burst characteristic and periodic characteristic, and obtaining a flow feature vector set;
Inputting the flow characteristic vector set into a priority mapping layer of a multi-level priority perception network, wherein the priority mapping layer comprises a full-connection layer and a Sigmoid layer, and calculating delay sensitivity weight, bandwidth demand weight and priority level weight to obtain priority quantization parameters;
Calculating a resource demand index for each data stream according to the priority quantization parameter, wherein the resource demand index is obtained by weighted summation of delay sensitivity weight, bandwidth demand weight and priority class weight;
the resource demand indexes are ordered in descending order according to the numerical value, a data stream priority queue is constructed, and the data stream priority queue is grouped according to the preset time window size, so that a grouping matrix is obtained;
and carrying out normalization processing on each matrix element in the grouping matrix to obtain a resource demand index matrix.
Specifically, the network node connection relation is subjected to matrix transformation to generate the network topology feature matrix. Let the set of nodes in the network be $N = \{n_1, \dots, n_k\}$ and let the connection relation be represented by the adjacency matrix $A$, where $A_{ij}$ denotes the connection state between nodes $n_i$ and $n_j$ (1 indicates connected, 0 indicates disconnected). To capture the topological characteristics of the network, the centrality of each node is calculated through matrix transformation:
$C_i = \frac{1}{k-1} \sum_{j=1}^{k} A_{ij}$
Combining the node centralities with the adjacency matrix, the network topology feature matrix $X$ is constructed, in which each row represents the local topological characteristics of one node, such as its centrality and link weights. The link bandwidth information is normalized: assuming the bandwidth set is $B = \{B_{ij}\}$, where $B_{ij}$ denotes the bandwidth of the link between nodes $n_i$ and $n_j$, the normalized bandwidth feature vector $b'$ is calculated as
$b'_{ij} = \frac{B_{ij} - B_{\min}}{B_{\max} - B_{\min}}$
where $B_{\min}$ and $B_{\max}$ are the minimum and maximum bandwidth values in the network, respectively. The normalized bandwidths are mapped into the interval [0, 1] for the feature extraction process. The network topology feature matrix $X$ and the bandwidth feature vector $b'$ are input into the feature extraction layer of the multi-level priority perception network. The feature extraction layer comprises three convolutional layers, each using the ReLU activation function $\mathrm{ReLU}(x) = \max(0, x)$ to improve the nonlinear expression capability of the model, while batch normalization standardizes the intermediate outputs to accelerate training and improve stability. Assuming the convolution kernel has size $k_c \times k_c$ and the input matrix is $X$, the first convolutional layer computes
$F_1 = \mathrm{ReLU}(\mathrm{BN}(W_1 * X + b_1))$
where $W_1$ is the weight matrix of the first convolutional layer, $b_1$ is a bias term, and $*$ denotes the convolution operation. The three stacked convolutional layers output the data stream basic feature map $F$, of shape $n \times d$, where $n$ is the number of samples in the feature map and $d$ is the feature dimension. The data stream basic feature map $F$ is input into the traffic classification layer of the multi-level priority perception network, which comprises two fully connected layers and one Softmax layer. The first fully connected layer maps the features into an intermediate feature space through the weight matrix $W_{f1}$:
$Z_1 = \mathrm{ReLU}(W_{f1} F + b_{f1})$
The second fully connected layer further compresses the feature dimension, and the Softmax layer normalizes the output into classification probabilities:
$p_c = \frac{\exp(z_c)}{\sum_{c'=1}^{C} \exp(z_{c'})}$
where $z_c$ is the activation value of class $c$ and $C$ is the number of classification categories (critical traffic, periodic traffic, and bursty traffic). The classification results are organized into the traffic class identification matrix $P$, in which each row represents the category distribution of one data stream. Feature vectors are constructed from the traffic class identification matrix $P$, and the traffic feature vector set is generated by analyzing parameters in four dimensions: end-to-end delay requirement, bandwidth requirement, burst characteristic, and periodic characteristic. For example, the end-to-end delay requirement is calculated as
$D_{\mathrm{req}} = H \cdot \frac{P_{\mathrm{pkt}}}{B}$
where $H$ is the number of path hops, $P_{\mathrm{pkt}}$ is the size of the data packet, and $B$ is the link bandwidth. The traffic feature vector set is input into the priority mapping layer, which comprises a fully connected layer and a Sigmoid activation function and calculates the delay sensitivity weight $w_d$, the bandwidth demand weight $w_b$, and the priority level weight $w_p$:
$w = \sigma(z) = \frac{1}{1 + e^{-z}}$
where $z$ is the activation value of the corresponding dimension. Based on the priority quantization parameters $(w_d, w_b, w_p)$, the resource demand index of each data stream is calculated:
$R = \alpha w_d + \beta w_b + \gamma w_p$
where $\alpha$, $\beta$, and $\gamma$ are weight coefficients representing the relative importance of delay, bandwidth, and priority. The resource demand indexes of all data streams are sorted in descending order to construct the data stream priority queue, and the data streams are grouped according to the preset time window size $W$ to form the grouping matrix $G$. Each element of the grouping matrix $G$ is normalized according to
$G'_{ij} = \frac{G_{ij} - G_{\min}}{G_{\max} - G_{\min}}$
The normalized matrix $G'$ is the resource demand index matrix, which completely describes the resource demand of each data stream in the network within the different time windows.
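The following PyTorch sketch shows one possible realization of the multi-level priority perception network described above (layer widths, channel counts, and the use of 1-D convolutions are illustrative assumptions; the embodiment only fixes the layer types and counts):

```python
import torch
import torch.nn as nn

class PriorityPerceptionNet(nn.Module):
    """Three conv layers (feature extraction), two FC + Softmax (traffic classification),
    one FC + Sigmoid (priority mapping), mirroring the described architecture."""

    def __init__(self, in_channels: int = 2, num_classes: int = 3, num_weights: int = 3):
        super().__init__()
        self.features = nn.Sequential(                      # feature extraction layer
            nn.Conv1d(in_channels, 16, kernel_size=3, padding=1), nn.BatchNorm1d(16), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.BatchNorm1d(32), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.classifier = nn.Sequential(                    # traffic classification layer
            nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, num_classes), nn.Softmax(dim=-1),
        )
        self.priority_mapping = nn.Sequential(              # priority mapping layer
            nn.Linear(32 + num_classes, num_weights), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        feat = self.features(x)                             # data stream basic features
        cls = self.classifier(feat)                         # traffic class probabilities
        weights = self.priority_mapping(torch.cat([feat, cls], dim=-1))
        return cls, weights                                 # (class distribution, [w_d, w_b, w_p])

# Usage: a batch of 4 streams, each with 2 input channels (topology feature, bandwidth) of length 8
cls, w = PriorityPerceptionNet()(torch.rand(4, 2, 8))
```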
In a specific embodiment, the process of executing step S300 may specifically include the following steps:
feature fusion is carried out on the data stream priority queue and the resource demand index matrix, a state space of the deep Q network is constructed, and a network state feature matrix is obtained;
performing time slice allocation action coding on the network state feature matrix, and dividing the allocation proportion of each time slice into discrete levels to obtain an action space matrix;
inputting a network state feature matrix into a value evaluation module of a deep Q network, wherein the value evaluation module comprises three full-connection layers, and each full-connection layer uses a ReLU activation function to obtain a state value evaluation result;
constructing an action value function according to the state value evaluation result and the action space matrix, wherein the action value function comprises an immediate reward term, a discount factor, and a future reward term;
Performing policy iteration calculation on the action value function, and selecting time slice allocation actions based on an epsilon-greedy strategy to obtain the action selection probability distribution of the deep Q network;
According to the action selection probability distribution, performing parameter optimization on the deep Q network and updating the network weights by stochastic gradient descent to obtain a state-action mapping model;
And inputting the state-action mapping model into an experience playback buffer, storing a state transition sample of each time step to obtain a transition probability matrix, and carrying out online learning on the state-action mapping model based on the transition probability matrix to obtain an initial time delay scheduling strategy model.
Specifically, the data stream priority queue $P$ and the resource demand index matrix $R$ are fused to generate the state space features of the deep Q network. The data stream priority $P_i$ describes the scheduling priority of data stream $i$, and the resource demand index $R_{ij}$ represents the resource demand of the $i$-th data stream in the $j$-th time slice. Combining the two into a state vector $s_i = [P_i, R_{i1}, \dots, R_{iJ}]$ yields the state feature matrix $S$, where $I$ is the number of data streams and $J$ is the number of time slices. Time slice allocation action coding is then performed on the state feature matrix $S$. The allocation proportion of each time slice is divided into $L$ discrete levels $\{a_1, \dots, a_L\}$, each level representing a different resource allocation strategy; for example, the allocation proportion can be quantized into a small number of levels, each corresponding to a specific share of the bandwidth resources. The action space matrix records all possible action combinations for every data stream and has shape $I \times L$, representing the allocation strategies available to each data stream in each time slice. The constructed state feature matrix $S$ is input into the value evaluation module of the deep Q network, which comprises three fully connected layers, each using the ReLU activation function $\mathrm{ReLU}(x) = \max(0, x)$ to enhance nonlinear expression capability while avoiding the vanishing-gradient problem. The first fully connected layer computes
$h_1 = \mathrm{ReLU}(W_1 s + b_1)$
where $W_1$ is the weight matrix of the first layer, $b_1$ is a bias term, and $h_1$ is the output of the first layer. After the three fully connected layers, the state value evaluation result $V(s)$ is output, indicating the potential performance of the network in the current state $s$. Based on the state value evaluation result $V(s)$ and the action space matrix, the action value function is constructed. The action value function is composed of an immediate reward term $r(s, a)$, a discount factor $\gamma$, and a future reward term:
$Q(s, a) = r(s, a) + \gamma \max_{a'} Q(s', a')$
where $r(s, a)$ denotes the immediate reward obtained after executing action $a$ at the current time step, such as the effect of reducing delay or improving resource utilization; $\gamma$ is the discount factor balancing current and future rewards; and $\max_{a'} Q(s', a')$ is the maximum expected value of the next state $s'$. Through policy iteration, time slice allocation actions are selected with an $\epsilon$-greedy strategy: in each iteration, an action is chosen at random with exploration probability $\epsilon$, and the action with the highest current value is selected with probability $1 - \epsilon$, so that the search for the global optimum avoids getting stuck in a local optimum. The action selection probability distribution $\pi(a \mid s)$ is described by
$\pi(a \mid s) = \begin{cases} 1 - \epsilon + \epsilon / |A|, & a = \arg\max_{a'} Q(s, a') \\ \epsilon / |A|, & \text{otherwise} \end{cases}$
According to the action selection probability distribution, the parameters of the deep Q network are optimized and the network weights are updated by stochastic gradient descent. Assuming the target value is $y = r + \gamma \max_{a'} Q(s', a'; \theta)$ and the current value is $Q(s, a; \theta)$, the optimization objective is to minimize the loss function
$L(\theta) = \mathbb{E}\big[(y - Q(s, a; \theta))^2\big]$
where $\theta$ denotes the parameters of the deep Q network. The gradient $\nabla_\theta L(\theta)$ is computed by the backpropagation algorithm, and the parameters are adjusted according to the update rule
$\theta \leftarrow \theta - \eta \nabla_\theta L(\theta)$
where $\eta$ is the learning rate, which determines the step size of the parameter update. At each time step, the transition produced by the state-action mapping model is stored in the experience replay buffer, recording the current state $s$, the action $a$, the immediate reward $r$, and the next state $s'$. Based on experience replay, state transition samples are resampled to compute the transition probability matrix used for further optimization of the model. Through iterative optimization, the state-action mapping model generated by the deep Q network effectively reflects the mapping between network states and time slice allocations, yielding the optimized initial delay scheduling policy model.
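As a worked numerical illustration of the update above (all numbers are assumed for illustration; the last line uses the equivalent tabular form of the Q-update):

```latex
% Assumed values: r(s,a)=0.6, \gamma=0.9, \max_{a'} Q(s',a')=2.0,
% current estimate Q(s,a;\theta)=2.1, learning rate \eta=0.1.
\begin{aligned}
y &= r + \gamma \max_{a'} Q(s',a') = 0.6 + 0.9 \times 2.0 = 2.4,\\
L(\theta) &= \bigl(y - Q(s,a;\theta)\bigr)^2 = (2.4 - 2.1)^2 = 0.09,\\
Q_{\text{new}} &= Q + \eta\,(y - Q) = 2.1 + 0.1 \times 0.3 = 2.13 .
\end{aligned}
```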
In a specific embodiment, the process of executing step S400 may specifically include the following steps:
extracting parameters of path hops, link bandwidths and node processing capacity of the data stream according to the initial time delay scheduling strategy model to obtain a time delay calculation parameter set;
Analyzing the queue cache data in the time delay calculation parameter set, and obtaining a data stream queuing time delay distribution matrix according to the cache queue length and the service processing rate of each node;
modeling the transmission process of the data packet on each link based on the data stream queuing delay distribution matrix, and calculating the transmission delay to obtain a link transmission delay matrix;
According to the link transmission delay matrix, carrying out load analysis on the data processing capacity of the network node, and calculating processing delay to obtain a node processing delay matrix;
performing superposition operation on the data stream queuing delay distribution matrix, the link transmission delay matrix and the node processing delay matrix to obtain an end-to-end total delay matrix, marking the data stream exceeding a delay threshold in the end-to-end total delay matrix, and generating a delay overrun flow set;
and calculating the time slice reallocation coefficient of each sub-network according to the delay-overrun traffic set, wherein the time slice reallocation coefficient is inversely proportional to the end-to-end total delay, obtaining a time slice adjustment factor matrix, and updating the time slice allocation scheme based on the time slice adjustment factor matrix to obtain a time slice resource reallocation scheme.
Specifically, the key network parameters of each data stream are extracted according to the initial delay scheduling policy model, including the number of path hops $H$, the link bandwidth $B_{ij}$, and the node processing capacity $C_i$. The number of path hops $H$ is the number of network nodes a data stream traverses from its source to its destination, the link bandwidth $B_{ij}$ is the maximum transmission rate of the link between nodes $n_i$ and $n_j$, and the node processing capacity $C_i$ reflects the amount of traffic node $n_i$ can process per unit time. These parameters are organized into the delay calculation parameter set, which describes the transmission path characteristics and resource allocation of each data stream. The queue cache data in the delay calculation parameter set is analyzed, and the queuing delay of a data stream at each node is calculated by combining the buffer queue length and the service processing rate of the node. The queuing delay at node $n_i$ is estimated with an M/M/1 queuing model:
$D_{\mathrm{queue},i} = \frac{1}{\mu_i - \lambda_i}$
where $\mu_i$ is the service rate of node $n_i$ and $\lambda_i$ is its arrival traffic rate. Computing the queuing delay of every node yields the queuing delay distribution matrix $D_{\mathrm{queue}}$ of the data streams, in which each element $D_{\mathrm{queue},fi}$ represents the queuing delay of data stream $f$ at the $i$-th node on its path. With the queuing delay distribution matrix in hand, the transmission of data packets over each link is modeled to calculate the transmission delay, which is determined by the packet size and the link bandwidth:
$D_{\mathrm{trans},l} = \frac{P_{\mathrm{pkt}}}{B_l}$
where $P_{\mathrm{pkt}}$ is the size of the data packet and $B_l$ is the bandwidth of link $l$. Calculating the transmission delay of every link yields the link transmission delay matrix $D_{\mathrm{trans}}$, in which each element $D_{\mathrm{trans},fl}$ indicates the transmission time of the data packet on link $l$. After the link transmission delay matrix is obtained, a load analysis of the data processing capacity of the network nodes is performed to calculate the processing delay, which reflects the time a node needs to process a data packet:
$D_{\mathrm{proc},i} = \frac{P_{\mathrm{pkt}}}{C_i}$
where $C_i$ is the processing capacity of node $n_i$. Calculating the processing delay of all nodes forms the node processing delay matrix $D_{\mathrm{proc}}$. The queuing delay distribution matrix $D_{\mathrm{queue}}$, the link transmission delay matrix $D_{\mathrm{trans}}$, and the node processing delay matrix $D_{\mathrm{proc}}$ are superimposed to obtain the end-to-end total delay matrix $D_{\mathrm{total}}$:
$D_{\mathrm{total},f} = \sum_{i=1}^{H} \left( D_{\mathrm{queue},i} + D_{\mathrm{trans},i} + D_{\mathrm{proc},i} \right)$
where $H$ is the number of path hops and $D_{\mathrm{total},f}$ represents the total delay of data stream $f$ from source to destination. The data streams in $D_{\mathrm{total}}$ whose delay exceeds the preset delay threshold are marked, generating the delay-overrun traffic set $F_{\mathrm{over}}$, in which each element records a timed-out data stream and the amount by which its delay exceeds the threshold. According to the delay-overrun traffic set $F_{\mathrm{over}}$, the time slice reallocation coefficient of each sub-network is calculated. The time slice reallocation coefficient $\phi_m$ is inversely proportional to the end-to-end total delay:
$\phi_m = \frac{1}{\sum_{f \in F_m} D_{\mathrm{total},f}}$
where $F_m$ is the set of total delays of the data streams of sub-network $S_m$. Calculating the time slice reallocation coefficients of all sub-networks generates the time slice adjustment factor matrix $\Phi$, which records the time slice resource adjustment ratio of each sub-network. The time slice allocation scheme is updated based on $\Phi$; the new time slice resource allocation is calculated by
$T'_m = T_m \times \phi_m$
where $T_m$ is the initial time slice allocation of sub-network $S_m$ and $\phi_m$ is the corresponding adjustment factor. Updating the time slice allocations of all sub-networks yields the final time slice resource reallocation scheme.
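A small Python sketch of the reallocation step, following the inverse-delay coefficient defined above (sub-network delays and initial allocations are assumed values, and the coefficients are rescaled here so the adjusted allocations keep the original slice budget, which is one reasonable convention the description leaves open):

```python
# Total end-to-end delay accumulated by the flows of each sub-network (seconds, assumed)
subnet_delay = {"S1": 0.012, "S2": 0.004, "S3": 0.008}
initial_slices = {"S1": 6, "S2": 4, "S3": 6}            # initial time slice allocation

phi = {s: 1.0 / d for s, d in subnet_delay.items()}     # reallocation coefficient, inverse of delay
scale = sum(initial_slices.values()) / sum(initial_slices[s] * phi[s] for s in phi)

# Adjusted allocation T'_m = T_m * phi_m, rescaled to keep the total slice budget constant
new_slices = {s: initial_slices[s] * phi[s] * scale for s in phi}
print(new_slices)
```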
In a specific embodiment, the process of performing step S500 may specifically include the following steps:
Carrying out resource utilization rate statistical calculation on each sub-network unit in the time slice resource reallocation scheme to obtain a sub-network resource utilization state matrix;
constructing a traffic load balance index based on the sub-network resource utilization state matrix, and analyzing the bandwidth resource distribution of each sub-network unit to obtain a sub-network load distribution table;
Selecting a sub-network unit with a flow load balancing index exceeding a preset threshold according to the sub-network load distribution table, and analyzing a bandwidth resource gap to obtain a bandwidth resource demand matrix;
Performing multi-objective optimization calculation on the bandwidth resource demand matrix, and inputting bandwidth borrowing cost weight, resource utilization rate weight and network risk weight into an optimization function to obtain a bandwidth borrowing optimization scheme;
Inputting resource allocation parameters in the bandwidth borrowing optimization scheme into a resource allocator, carrying out bandwidth resource reallocation across sub-networks to obtain a bandwidth multiplexing matrix, carrying out time slice mapping conversion on the bandwidth multiplexing matrix, and converting the bandwidth resources into a time slice resource allocation relation to obtain a time slice mapping table;
and carrying out space arrangement calculation on the resources of each sub-network according to the time slice mapping table, constructing a three-dimensional resource allocation tensor to obtain a resource space distribution diagram, carrying out dimension reduction treatment on the resource space distribution diagram, and integrating the information of the time dimension, the space dimension and the resource dimension to obtain a global resource allocation matrix.
Specifically, the resource utilization rate statistical calculation is performed on each sub-network unit in the time slice resource reallocation scheme. Let the sub-network set beThe resource utilization of each sub-network is defined as:
;
Wherein the method comprises the steps ofIs a sub-networkIs used for the resource utilization of the (a),Representing allocation to data streamsIs used for the amount of resources of (a),Representing a sub-networkIs a time slice of the total time slice resources of the mobile device. By calculation ofValue, generating a sub-network resource utilization state matrixThe matrix describes the resource usage of each sub-network. Based on sub-network resource utilization state matrixConstructing a flow load balancing indexThe method is used for measuring the balance degree of the resource load among the subnetworks. The calculation formula of the load balancing index is as follows:
;
Wherein the method comprises the steps ofIs the average resource utilization of all sub-networks, defined as:
;
By calculation ofValue, generating a sub-network load distribution tableThe load deviation degree of each sub-network is recorded. Selecting a flow load balancing index according to the sub-network load distribution tableExceeding a preset thresholdIs marked as an overloaded subnetwork and is subjected to bandwidth resource gap analysis. Setting the actual required bandwidth of the overload sub-network asThe allocated bandwidth isBandwidth resource gapThe method comprises the following steps:
;
If it isThen explain the sub-networkThere are cases where resources are insufficient. Generating a bandwidth resource demand matrix by carrying out bandwidth resource gap analysis on all overload sub-networks. And performing multi-objective optimization calculation on the bandwidth resource demand matrix to determine an optimal allocation scheme of bandwidth borrowing. The optimization objective is to balance bandwidth borrowing costs, resource utilization efficiency, and network risks. Let bandwidth borrowing cost weight beThe weight of the resource utilization rate isThe network risk weight isThe optimization objective function is:
;
where $C$ represents the bandwidth borrowing cost, with total cost $C = \sum_i c_i \, \Delta W_i$, and $c_i$ is the per-unit-bandwidth borrowing cost of sub-network $s_i$; $D$ indicates the resource utilization variability, $D = \sum_i \left( U_i - \bar{U} \right)^2$; and $R$ represents the network risk arising from potential link bottlenecks created during resource allocation. Under the weight constraint $w_1 + w_2 + w_3 = 1$ and a bandwidth allocation upper-limit constraint, the problem is solved with an optimization algorithm (such as a genetic algorithm or gradient descent), generating a bandwidth borrowing optimization scheme. The resource allocation parameters in the bandwidth borrowing optimization scheme are input into the resource allocator, which reallocates bandwidth resources across sub-networks. After bandwidth reallocation, a bandwidth multiplexing matrix $\mathbf{M}$ is generated, where $M_{ij}$ represents the amount of bandwidth borrowed by sub-network $s_i$ from sub-network $s_j$. To map the bandwidth resource allocation to the time slice resource allocation, a time slice mapping conversion is performed with the conversion formula:

$$T_{ij} = \frac{M_{ij}}{W_{\min}} \cdot T_0;$$
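As a further non-limiting sketch, the code below evaluates the weighted objective $F = w_1 C + w_2 D + w_3 R$ for one candidate borrowing allocation and converts borrowed bandwidth into time slice resources via $T_{ij} = M_{ij} / W_{\min} \cdot T_0$. The risk proxy, the weight values, and all numeric inputs are assumptions; a genetic algorithm or gradient-based solver would search over candidate allocations as described above.

```python
# Illustrative sketch: weighted objective for a candidate bandwidth-borrowing
# allocation and bandwidth-to-time-slice conversion. Values are hypothetical.

def objective(borrow, unit_cost, utilization, w1=0.4, w2=0.4, w3=0.2):
    """F = w1 * borrowing cost + w2 * utilization variability + w3 * risk."""
    cost = sum(unit_cost[i] * amount for i, amount in borrow.items())
    mean_u = sum(utilization.values()) / len(utilization)
    variability = sum((u - mean_u) ** 2 for u in utilization.values())
    # Simple risk proxy (assumed): total borrowed bandwidth, since every
    # borrowed unit may create a bottleneck on the lending sub-network.
    risk = sum(borrow.values())
    return w1 * cost + w2 * variability + w3 * risk

def to_time_slices(borrowed_bw, w_min, t_ref):
    """T_ij = M_ij / W_min * T_0: map borrowed bandwidth to time slices."""
    return borrowed_bw / w_min * t_ref

if __name__ == "__main__":
    borrow = {"s1": 30.0}                 # s1 borrows 30 Mbit/s in total
    unit_cost = {"s1": 1.5}
    utilization = {"s1": 0.8, "s2": 0.3, "s3": 0.55}
    print("objective:", objective(borrow, unit_cost, utilization))
    print("time slices:", to_time_slices(30.0, w_min=5.0, t_ref=1.0))
```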
where $T_{ij}$ is the amount of time slice resources corresponding to the bandwidth $M_{ij}$, $W_{\min}$ is the minimum bandwidth unit, and $T_0$ is the reference time period. According to the time slice mapping table, spatial arrangement calculation is performed on the resources of each sub-network to construct a three-dimensional resource allocation tensor $\mathcal{A}$. The three dimensions of the tensor $\mathcal{A}$ are the time dimension $t$, the space dimension $s$, and the resource dimension $r$, which together describe the distribution of resources throughout the network. The resource allocation tensor $\mathcal{A}$ is subjected to dimension reduction, integrating the information of the time, space, and resource dimensions, with the dimension reduction formula:

$$G_{ts} = \sum_{r} \mathcal{A}_{tsr};$$

The global resource allocation matrix $\mathbf{G}$ obtained through dimension reduction is the result of the optimized allocation of the network's global resources.
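For illustration only, the following NumPy sketch arranges hypothetical time slice resources into a (time, space, resource) tensor and reduces it to a global resource allocation matrix by summing over the resource dimension; the tensor shape, the random contents, and the choice of summation as the reduction are assumptions of the sketch.

```python
# Illustrative sketch: build a (time, space, resource) allocation tensor and
# reduce it to a global resource allocation matrix. Shapes are hypothetical.
import numpy as np

T, S, R = 8, 3, 2          # time slices, sub-networks, resource types
rng = np.random.default_rng(0)

# A[t, s, r]: resources of type r allocated to sub-network s in time slice t.
A = rng.integers(0, 5, size=(T, S, R)).astype(float)

# Dimension reduction: integrate the resource dimension (assumed: summation),
# yielding the global resource allocation matrix G of shape (time, space).
G = A.sum(axis=2)

print(G.shape)   # (8, 3)
print(G)
```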
In a specific embodiment, the process of executing step S600 may specifically include the following steps:
Performing performance index calculation on the global resource allocation matrix to obtain a performance data set covering three indicators: end-to-end delay compliance rate, link utilization, and priority satisfaction;
Inputting the performance data set into the multi-index comprehensive evaluation module and performing a weighted summation with the weight coefficient α of the end-to-end delay compliance rate, the weight coefficient β of the link utilization, and the weight coefficient γ of the priority satisfaction to obtain a network performance score;
Performing threshold detection on the network performance score by comparing it with a preset performance benchmark threshold to obtain a performance difference matrix;
Based on the performance difference matrix, performing backward error calculation on the immediate reward term, the discount factor, and the future reward term in the initial delay scheduling policy model to obtain a parameter gradient matrix;
Updating the deep Q network weights of the initial delay scheduling policy model with the parameter gradient matrix according to the stochastic gradient descent update rule to obtain updated network weights;
Performing delay scheduling calculation on the data streams again according to the updated network weights to obtain a new state-action mapping table;
And performing online verification on the new state-action mapping table, obtaining a policy optimization result by calculating the average reward value of the verification samples, confirming the scheduling policy based on the policy optimization result, and solidifying the verified scheduling policy parameters into the deep Q network to obtain the target delay scheduling policy model.
Specifically, network performance data are extracted from the global resource allocation matrix $\mathbf{G}$, and three key performance indicators are calculated to form a performance data set: the end-to-end delay compliance rate, the link utilization, and the priority satisfaction. The end-to-end delay compliance rate $P_d$ reflects the proportion of data streams that meet the delay requirement, and is calculated as:

$$P_d = \frac{1}{N} \sum_{k=1}^{N} \mathbb{1}\left( D_k \le D_{th} \right);$$

where $D_k$ represents the total end-to-end delay of data stream $k$, $D_{th}$ is the delay threshold, $N$ is the total number of data streams, and $\mathbb{1}(\cdot)$ is the indicator function, taking the value 1 when the condition is satisfied and 0 otherwise. The link utilization $P_u$ measures the use of link bandwidth resources and is calculated as:

$$P_u = \frac{1}{L} \sum_{l=1}^{L} \frac{b_l^{used}}{b_l^{total}};$$

where the sum runs over all $L$ links in the network, $b_l^{used}$ is the bandwidth used by link $l$, and $b_l^{total}$ is the total bandwidth of link $l$. The priority satisfaction $P_p$ describes how well the network schedules data streams of different priorities, and is defined as:

$$P_p = \frac{\sum_{k=1}^{N} w_k \cdot \dfrac{p_k^{alloc}}{p_k^{req}}}{\sum_{k=1}^{N} w_k};$$
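By way of non-limiting illustration, the following Python sketch computes the three indicators; the averaging form used for the link utilization and the weight-normalized ratio used for the priority satisfaction follow the formulas reconstructed above and, together with the sample inputs, are assumptions of the sketch.

```python
# Illustrative sketch: the three performance indicators. Inputs hypothetical.

def delay_compliance_rate(delays, threshold):
    """P_d: fraction of data streams whose end-to-end delay meets the threshold."""
    return sum(1 for d in delays if d <= threshold) / len(delays)

def link_utilization(used, total):
    """P_u: average per-link ratio of used to total bandwidth (assumed form)."""
    return sum(u / t for u, t in zip(used, total)) / len(used)

def priority_satisfaction(weights, assigned, required):
    """P_p: weight-normalized ratio of assigned to required priority (assumed)."""
    num = sum(w * (a / r) for w, a, r in zip(weights, assigned, required))
    return num / sum(weights)

if __name__ == "__main__":
    print(delay_compliance_rate([1.2, 0.8, 2.5], threshold=2.0))   # 2/3
    print(link_utilization([40, 10], [100, 100]))                  # 0.25
    print(priority_satisfaction([2, 1], [3, 2], [3, 4]))           # (2*1 + 1*0.5) / 3
```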
where $w_k$ is the weight of data stream $k$, $p_k^{alloc}$ is the priority actually assigned, and $p_k^{req}$ is the priority requirement of data stream $k$. The calculated performance data set $\{P_d, P_u, P_p\}$ is input into the multi-index comprehensive evaluation module, and the three performance indicators are weighted and summed with the weight coefficients $\alpha$, $\beta$, and $\gamma$ to obtain the network performance score $Q$:

$$Q = \alpha P_d + \beta P_u + \gamma P_p;$$

where $\alpha + \beta + \gamma = 1$ ensures weight normalization, and specific values are assigned according to actual requirements (for example, increasing the weight $\alpha$ of the delay compliance rate in delay-sensitive scenarios). Calculating $Q$ yields a comprehensive performance evaluation result. The network performance score $Q$ then undergoes threshold detection: it is compared with the preset performance benchmark threshold $Q_{th}$ to generate the performance difference matrix $\Delta Q$:

$$\Delta Q = Q - Q_{th};$$
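A minimal sketch of the weighted scoring and threshold detection follows; the weight values and the benchmark threshold are illustrative assumptions.

```python
# Illustrative sketch: weighted performance score and threshold detection.

def performance_score(p_d, p_u, p_p, alpha=0.5, beta=0.3, gamma=0.2):
    """Q = alpha*P_d + beta*P_u + gamma*P_p with alpha + beta + gamma = 1."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return alpha * p_d + beta * p_u + gamma * p_p

def performance_gap(score, benchmark):
    """Delta Q = Q - Q_th; a negative value triggers policy optimization."""
    return score - benchmark

if __name__ == "__main__":
    q = performance_score(0.9, 0.6, 0.8)
    print(q, performance_gap(q, benchmark=0.85))   # 0.79, -0.06 -> optimize
```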
If $\Delta Q < 0$, the current scheduling policy does not reach the performance target, and the initial delay scheduling policy model needs further optimization. Starting from the performance difference matrix $\Delta Q$, backward error calculation is performed on the immediate reward term $r_t$, the discount factor $\gamma_d$, and the future reward term $\max_{a'} Q(s_{t+1}, a')$ in the model to obtain the parameter gradient matrix:

$$\nabla_\theta L = \frac{\partial L(\theta)}{\partial \theta}, \quad L(\theta) = \left( r_t + \gamma_d \max_{a'} Q(s_{t+1}, a'; \theta) - Q(s_t, a_t; \theta) \right)^2;$$

where $\theta$ denotes the parameters of the deep Q network and $L(\theta)$ is the error loss function. The network weights are updated by gradient descent according to:

$$\theta \leftarrow \theta - \eta \nabla_\theta L;$$
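For illustration only, the sketch below performs a single stochastic gradient step on a deliberately simplified linear Q approximation, using the TD error formed from the immediate reward, the discount factor, and the future reward term; in the embodiment a deep Q network replaces the linear model, and the feature construction, learning rate, and numeric inputs are assumptions of the sketch.

```python
# Illustrative sketch: one SGD step on a simplified *linear* Q approximation
# driven by the TD error (immediate reward + discounted future reward term).
import numpy as np

def td_sgd_step(theta, state, action_onehot, reward, next_q_values,
                discount=0.9, lr=0.01):
    """theta: weight vector of a linear Q model, Q(s, a) = theta . phi(s, a)."""
    phi = np.concatenate([state, action_onehot])          # feature vector
    q_sa = theta @ phi                                     # current estimate
    target = reward + discount * np.max(next_q_values)     # TD target
    error = q_sa - target                                  # TD error
    grad = error * phi                                     # dL/dtheta for L = 0.5 * error**2
    return theta - lr * grad                               # gradient descent update

if __name__ == "__main__":
    theta = np.zeros(5)
    state = np.array([0.8, 0.3, 0.55])                     # e.g. sub-network loads
    action = np.array([1.0, 0.0])                          # chosen time-slice action
    theta = td_sgd_step(theta, state, action, reward=1.0,
                        next_q_values=np.array([0.2, 0.5]))
    print(theta)
```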
where $\eta$ is the learning rate, which determines the step size of the parameter update. After the network weights are updated, the delay scheduling calculation of the data streams is performed again based on the optimized deep Q network, generating a new state-action mapping table. The state-action mapping table describes the optimal time slice allocation action for each network state and is a direct representation of the scheduling policy. The state-action mapping table is evaluated through online verification, using the average reward value of the verification samples $\bar{R}$ as the indicator of the optimization effect, calculated as:

$$\bar{R} = \frac{1}{M} \sum_{m=1}^{M} r_m;$$

where $M$ is the total number of verification samples and $r_m$ is the immediate reward of sample $m$. If $\bar{R}$ is higher than the previous verification result, the current scheduling policy parameters are solidified into the deep Q network to form the final target delay scheduling policy model.
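A minimal sketch of the online verification step follows: the average reward over the verification samples is compared with the previous verification result to decide whether the new policy parameters are retained; the parameter dictionaries and reward values are illustrative assumptions.

```python
# Illustrative sketch: online verification by averaging rewards over samples
# and keeping the new policy parameters only if the average improves.

def average_reward(rewards):
    """R_bar = (1/M) * sum of immediate rewards over M verification samples."""
    return sum(rewards) / len(rewards)

def select_policy(new_params, old_params, new_rewards, previous_avg):
    new_avg = average_reward(new_rewards)
    # Solidify the new parameters only when verification improves on the
    # previous result; otherwise keep the existing policy.
    return (new_params, new_avg) if new_avg > previous_avg else (old_params, previous_avg)

if __name__ == "__main__":
    params, avg = select_policy(new_params={"lr": 0.01}, old_params={"lr": 0.02},
                                new_rewards=[0.9, 0.7, 0.8], previous_avg=0.75)
    print(params, avg)   # new policy kept: average 0.8 > 0.75
```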
Referring to fig. 2, fig. 2 is a schematic block diagram of a deterministic network delay scheduling device 200 based on TSN according to an embodiment of the present application, and as shown in fig. 2, the deterministic network delay scheduling device 200 based on TSN includes:
the graph theory analysis module 210 is configured to perform graph theory analysis on the TSN network topology structure to obtain a network node connection relationship and link bandwidth information;
The extracting module 220 is configured to input the network node connection relationship and the link bandwidth information into the multi-level priority sensing network, extract the delay requirement and the bandwidth requirement characteristic of the data stream, and generate a data stream priority queue and a resource requirement index matrix;
the establishing module 230 is configured to input the data flow priority queue and the resource demand index matrix into the deep Q network, establish a state-action mapping relationship including a network load state and a time slice allocation action, and generate an initial delay scheduling policy model;
A calculation module 240, configured to calculate queuing delay, transmission delay and processing delay of each data stream according to the initial delay scheduling policy model, and generate a time slice resource reallocation scheme;
a generating module 250, configured to generate a global resource allocation matrix based on the time slice resource reallocation scheme;
and the updating module 260 is configured to perform performance evaluation on the global resource allocation matrix, generate a network performance score, and perform parameter feedback updating on the initial delay scheduling policy model according to the network performance score to obtain the target delay scheduling policy model.
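Purely as a non-limiting sketch of how the modules 210 to 260 could be wired together in software, the following Python skeleton mirrors the module responsibilities listed above; the class name and method signatures are assumptions, and the method bodies are omitted because the underlying algorithms are described in the method embodiments.

```python
# Illustrative skeleton mirroring the modules of device 200. Method names and
# signatures are assumptions; the internal algorithms are described above.

class TsnDelaySchedulingDevice:
    def schedule(self, topology):
        relation, bandwidth = self.graph_analysis(topology)           # module 210
        queue, demand = self.extract_priorities(relation, bandwidth)  # module 220
        model = self.build_policy_model(queue, demand)                # module 230
        realloc = self.compute_delays(model)                          # module 240
        global_matrix = self.generate_allocation(realloc)             # module 250
        return self.update_policy(model, global_matrix)               # module 260

    # Each method below stands in for the corresponding module.
    def graph_analysis(self, topology): ...
    def extract_priorities(self, relation, bandwidth): ...
    def build_policy_model(self, queue, demand): ...
    def compute_delays(self, model): ...
    def generate_allocation(self, realloc): ...
    def update_policy(self, model, global_matrix): ...
```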
Through the coordinated operation of the above components, the TSN network topology is modeled, and sub-network division is performed by introducing graph theory analysis combined with a minimum spanning tree algorithm, which effectively reduces network complexity and improves the accuracy of resource allocation. A multi-level priority sensing network is adopted to intelligently classify data streams, and the delay requirement characteristics of different data streams are accurately identified through the three layers of feature extraction, traffic classification, and priority mapping. A state-action mapping relationship is established based on the deep Q network, and the scheduling policy is dynamically optimized through reinforcement learning, overcoming the limitations of traditional static scheduling methods. An end-to-end delay assessment mechanism is designed that comprehensively considers queuing delay, transmission delay, and processing delay, realizing accurate assessment and dynamic adjustment of network performance. A cross-sub-network bandwidth resource multiplexing mechanism is provided, which significantly improves the utilization efficiency of network resources through bandwidth borrowing and time slice reallocation. A multi-indicator performance evaluation system is constructed so that the scheduling policy is continuously optimized through the parameter feedback updating mechanism, maintaining the stability of network performance.
Referring to fig. 3, fig. 3 is a schematic block diagram of a deterministic network latency scheduling apparatus 300 based on TSN according to an embodiment of the present application, where the deterministic network latency scheduling apparatus 300 based on TSN includes a processor 301 and a memory 302, and the processor 301 and the memory 302 are connected by a device bus 303, where the memory 302 may include a nonvolatile storage medium and an internal memory.
The non-volatile storage medium may store a computer program. The computer program comprises program instructions that when executed by the processor 301 cause the processor 301 to perform any of the TSN-based deterministic network latency scheduling methods described above.
The processor 301 is configured to provide computing and control capabilities to support the operation of the overall TSN-based deterministic network latency scheduling apparatus 300.
The internal memory provides an environment for the execution of a computer program in a non-volatile storage medium that, when executed by the processor 301, causes the processor 301 to perform any of the TSN-based deterministic network latency scheduling methods described above.
It will be appreciated by those skilled in the art that the structure shown in fig. 3 is merely a block diagram of a portion of the structure related to the present application and is not limiting of the TSN-based deterministic network delay scheduling apparatus 300 related to the present application, and that a particular TSN-based deterministic network delay scheduling apparatus 300 may include more or fewer components than those shown, or may combine some components, or may have a different arrangement of components.
It should be appreciated that the processor 301 may be a central processing unit (Central Processing Unit, CPU); the processor 301 may also be another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It should be noted that, for convenience and brevity of description, the specific working process of the TSN-based deterministic network delay scheduling apparatus 300 described above may refer to the corresponding process of the TSN-based deterministic network delay scheduling method described above, and will not be described herein.
Embodiments of the present application also provide a computer readable storage medium storing a computer program, which when executed by one or more processors, causes the one or more processors to implement a deterministic network latency scheduling method based on TSN as provided by the embodiments of the present application.
The computer readable storage medium may be an internal storage unit of the TSN-based deterministic network delay scheduling apparatus 300 of the foregoing embodiment, for example, a hard disk or a memory of the TSN-based deterministic network delay scheduling apparatus 300. The computer readable storage medium may also be an external storage device of the TSN-based deterministic network delay scheduling apparatus 300, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash card (Flash Card) with which the TSN-based deterministic network delay scheduling apparatus 300 is equipped.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses, and units may refer to the corresponding processes in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may, in essence, or in whole or in part, be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
While the application has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the foregoing embodiments may be modified or equivalents may be substituted for some of the features thereof, and that the modifications or substitutions do not depart from the spirit and scope of the embodiments of the application.