Disclosure of Invention
Aiming at the limitations of prior proposals, the invention provides an innovative hierarchical multimedia exchange system. The system combines the advantages of circuit switching and packet switching, and divides multimedia data into different priorities for transmission and switching through a flexible time slice division mechanism. Compared with the traditional scheme, the method mainly has the following innovation points:
(1) A transmission mode fusing time slicing, slice groups and data packetization, which can meet the performance requirements of differentiated multimedia packets;
(2) Switching with deterministic delay and quality, which is compatible with efficient, multi-type packet switching;
(3) Scalability based on time recovery, enabling lossless expansion across networks with deterministic delay.
The hierarchical multimedia switching system provided by the invention aims to solve, from the perspective of network switching, the technical challenges of delay, jitter and packet loss faced by multimedia data transmission. The system is based on a flexible time slice division mechanism: multimedia data of different priorities are loaded into corresponding time slices for transmission, and different switching strategies are adopted for time slices of different grades, so that hierarchical service quality of the multimedia data is guaranteed.
As shown in figure 1, the system mainly comprises three modules: a slicing transmission module for multimedia data, a switching module, and a capacity expansion module.
In the slicing transmission module for multimedia data, the multimedia data is first divided into time slices of fixed length, and time slices of different priorities form slice groups. Within each slice group, high-priority slices may preempt transmission, while low-priority slices compete for the remaining bandwidth resources. The hierarchical transmission of time slices can not only satisfy demanding multimedia services but also guarantee the basic communication quality of ordinary data.
The switching module takes high performance and low delay as its design targets. Time slices with high priority are switched directly according to the connection matrix, while other time slices are cached in a queue and packet-switched through a scheduling algorithm. Thus, the real-time performance of high-quality traffic is ensured, while the switching efficiency of the system for various kinds of traffic is also taken into account.
In the capacity expansion module, the system adopts a lossless expansion scheme based on time recovery. When connection to an external network is needed, the high-quality time slices are extracted and converted into IP packets, which are sent to the backbone network for transmission; after the IP packets reach the destination node, they are restored to the original time slice form according to the timestamp information. Normal data traffic is forwarded directly in packet form. In this way, the system can realize seamless expansion among different network systems.
Specifically:
The hierarchical multimedia exchange system mainly comprises the following three modules:
(1) A slicing transmission module of multimedia data:
The module is responsible for slicing the multimedia data stream, and concretely comprises the steps of dividing fixed time slices, forming multi-priority slice groups, preempting time slices within a slice group, and packetizing and filling data.
Let the length of a time slice be τ; then the time interval corresponding to the n-th time slice is [nτ, (n+1)τ). N time slices form a slice group, and the length of the slice group is T = Nτ. The system divides the multimedia data into performance-type data with strong real-time requirements and general capacity-type data, corresponding to high priority and common priority respectively.
For performance-type data, the system marks a timestamp and loads the data preferentially at the beginning of each slice group to ensure high-quality transmission, while capacity-type data is filled into the remaining time slices and transmitted on a best-effort basis. Under the preemption mechanism, performance-type data obtains stable bandwidth and delay guarantees.
(2) Switching module:
The core of the switching system is to realize efficient and low-delay switching of multimedia data. For different levels of time slices, two different switching modes are employed:
For performance-type time slices, a direct exchange based on a connection matrix is performed. The control plane calculates the connection matrix of each switching node in advance according to the network topology and the service requirement, and sends the connection matrix to the data plane. The high priority time slices arriving at the switching node will directly select the appropriate output port through the connection matrix without queuing, thereby minimizing switching delay.
For the capacity-type time slices, a packet-based store-and-forward scheme is used for switching. The valid data is first extracted from the arriving time slices and buffered, and the packets are then sent to the scheduling module for queuing. The scheduling module comprehensively considers factors such as fairness and delay, and selects a suitable packet for forwarding. Compared with direct switching, this approach improves the system's support for multiple services.
(3) Capacity expansion module:
The invention utilizes the capacity expansion module to realize smooth capacity expansion and interconnection of the network; its key is the conversion between time slices and IP packets.
When the system needs to interface with an external IP network, the data is extracted from the high-priority time slices, timestamp information is added, and the data is repackaged into IP packets for transmission in the backbone network. After being forwarded over a number of hops, a packet reaches the destination node, is restored into time slice form, and is inserted into a new slice group according to the timestamp information to complete the end-to-end transmission.
Normal data packets may be encapsulated directly into IP packets and sent, and the receiving node likewise delivers them as packets after arrival. In this way, seamless communication between heterogeneous networks can be achieved without introducing additional delay.
The specific operation of each module is further described below.
The working process of the slicing transmission module of the multimedia data is as follows:
(1) Fixed time slicing and time slice groups:
The multimedia data is divided at a fixed time interval τ into time slices of equal length. The number n of the time slice containing absolute time t satisfies:
n = ⌊t / τ⌋;
where t is the absolute time of the system. It can be seen that the time slice sequence {S_n} is a discrete linear mapping of the time axis.
N consecutive time slices form a slice group:
G_k = {S_{kN}, S_{kN+1}, …, S_{kN+N−1}};
The length T of a slice group satisfies: T = Nτ;
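The fixed time slicing above can be sketched with a short illustrative program (not part of the claimed embodiment; the symbol names and the numeric values of the slice length and group size are assumptions chosen for the example):

```python
# Illustrative sketch of fixed time slicing and slice groups.
# TAU (slice length) and N (slices per group) are assumed example values.

TAU = 0.125e-3   # slice length tau in seconds (hypothetical value)
N = 8            # slices per slice group (hypothetical value)

def slice_index(t: float) -> int:
    """Time slice number containing absolute time t: n = floor(t / tau)."""
    return int(t // TAU)

def group_index(n: int) -> int:
    """Slice group containing slice n; the group length is T = N * tau."""
    return n // N

def offset_in_group(n: int) -> int:
    """Position of slice n inside its slice group (0 .. N-1)."""
    return n % N
```

For example, with N = 8, slice number 9 belongs to slice group 1 at offset 1 within the group.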
(2) Prioritizing and preempting within a time slice:
Each time slice S_n is divided into two parts, a performance time slice S_P and a capacity time slice S_C, for transmitting performance-type data and capacity-type data respectively.
The performance time slice S_P has a higher priority and is allowed to interrupt the transmission of capacity time slices. Therefore, performance-type services can occupy bandwidth resources at the beginning stage of the slice group and obtain a guaranteed transmission channel, whereas capacity-type traffic can only transmit data packets using the remaining bandwidth.
A special scheduling time slice is arranged at the beginning of each slice group. The scheduling time slices are responsible for collecting resource applications of the performance type service and distributing the time slices, and the whole preemption process is shown in the following algorithm 1.
Algorithm 1, the time slice preemption algorithm, takes as input the performance service set P, the capacity service set C and the slice group length N; its output is the time slice allocation result.
The flow chart of algorithm 1 is shown in FIG. 2, and the specific process is shown as pseudo-code:
1. Initialize the time slice allocation vector A;
2. foreach p ∈ P:
3.   k ← ⌈demand(p) / B⌉ // compute the number of time slices required by service p;
4.   if assigned + k ≤ N: // the remaining performance time slices are sufficient
5.     mark k performance time slices in A and update the total number of assigned performance time slices;
6.   else:
7.     mark all remaining time slices as performance time slices, and exit the loop;
8. foreach c ∈ C:
9.   pad the packets of c sequentially into unmarked time slices;
where the constant B represents the transmission capacity of a single time slice. Algorithm 1 first counts the time slice demand of the performance services and allocates resources as far as possible; if resource contention occurs, it preferentially guarantees the quality of service of existing services and suspends the scheduling of capacity-type data.
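The preemptive allocation of Algorithm 1 can be sketched as follows (an illustrative sketch only; the function and parameter names, and the representation of services as plain demand values and packet labels, are assumptions not taken from the original description):

```python
import math

# Illustrative sketch of Algorithm 1 (time slice preemption).
# perf_demands: data volume requested by each performance service (assumed form).
# cap_packets: capacity-traffic packets to fill the remaining slices (assumed form).
# N: slice group length; B: transmission capacity of a single time slice.

def allocate_slices(perf_demands, cap_packets, N, B):
    """Return a length-N allocation vector: 'P' marks a performance slice,
    a capacity packet occupies its slice, None means the slice stays unused."""
    alloc = [None] * N
    assigned = 0
    for demand in perf_demands:              # performance services first
        k = math.ceil(demand / B)            # slices this service requires
        if assigned + k <= N:                # remaining slices are sufficient
            for i in range(assigned, assigned + k):
                alloc[i] = 'P'
            assigned += k
        else:                                # contention: take all remaining
            for i in range(assigned, N):
                alloc[i] = 'P'
            assigned = N
            break
    free_slots = (i for i in range(N) if alloc[i] is None)
    for pkt in cap_packets:                  # capacity packets fill the gaps
        slot = next(free_slots, None)
        if slot is None:                     # group full: defer this packet
            break
        alloc[slot] = pkt
    return alloc
```

With a per-slice capacity of B = 100 and a group of N = 8 slices, performance demands of 100 and 150 consume three slices, after which capacity packets fill the unmarked slices in order.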
(3) Data packetization and backlog processing
Data frames that cannot be accommodated by a single time slice require fragmented transmission. The granularity of a fragment should not exceed the payload length of a time slice.
When the data frames of a performance service are filled in, the fragments should be kept as contiguous as possible, so as to avoid the additional transmission delay caused by dispersing fragments across multiple slice groups. If the current slice group cannot accommodate the complete data frame, the system holds the excess fragments and retransmits them at the start of the next slice group; a frame boundary must not be made to cross slice groups through blind filling. This strategy can reduce inter-frame delay jitter, thereby guaranteeing the QoE of performance traffic.
For capacity traffic, the system should monitor the amount of data backlog to prevent persistent data accumulation. When backlog data exceeds a set threshold, the system automatically discards portions of the packets, thereby avoiding further degradation of latency.
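The backlog guard for capacity traffic can be sketched as below (illustrative only; the threshold semantics and the drop-from-head policy are assumptions, since the text does not prescribe which packets are discarded):

```python
from collections import deque

# Illustrative sketch of the capacity-traffic backlog guard.
# threshold: maximum number of backlogged packets (assumed unit); when it is
# exceeded, the oldest packets are dropped to bound queueing delay.

class BacklogQueue:
    def __init__(self, threshold: int):
        self.q = deque()
        self.threshold = threshold
        self.dropped = 0

    def enqueue(self, pkt):
        self.q.append(pkt)
        while len(self.q) > self.threshold:   # backlog exceeds the threshold
            self.q.popleft()                  # drop oldest packet first
            self.dropped += 1
```

Dropping from the head keeps the freshest data in the queue, which matches the goal of preventing further latency degradation under persistent accumulation.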
The specific working process of the exchange module is as follows:
(1) Direct exchange of performance time slices:
For performance time slices, the system adopts a direct exchange mode based on the connection matrix, so that queuing is avoided and transmission delay is minimized.
Suppose the system comprises K input ports and K output ports; the connection matrix M can be defined as:
M = [m_ij], m_ij ∈ {0, 1};
where m_ij represents the connection state from input port i to output port j.
When m_ij = 1, a dedicated channel is established between the two ports: a time slice arriving at input port i will be directed to output port j, and no additional exchange process is required.
The connection matrix M is calculated by the control plane according to the network topology and the service requirements. Specifically, the control plane computes the optimal connection matrix of each switching node via a centralized flow table and issues it to the data plane through a southbound interface (such as the OpenFlow protocol).
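Direct switching via the connection matrix can be sketched as follows (illustrative only; zero-based port numbering and the list-of-lists matrix representation are assumptions):

```python
# Illustrative sketch of direct exchange through a connection matrix M,
# where M[i][j] = 1 denotes a dedicated channel from input port i to
# output port j. Ports are numbered from 0 in this example.

def direct_switch(M, in_port):
    """Return the output port wired to in_port, or None if no channel exists."""
    for j, connected in enumerate(M[in_port]):
        if connected == 1:
            return j
    return None

# Example: a 2x2 matrix cross-connecting port 0 -> 1 and port 1 -> 0.
M = [[0, 1],
     [1, 0]]
```

Because the lookup involves no queueing or buffering, the forwarding decision is a constant-time table access, which is the property the direct-exchange mode relies on.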
(2) Cache exchange of capacity type time slices:
For data packets carried by capacity-type time slices, the system adopts a cache-based store-and-forward mechanism for switching, whose core is the packet buffering and scheduling module.
When a packet arrives at an input port, the system extracts the payload data from the capacity time slices and stores the complete packet in a buffer queue. The scheduling module is responsible for selecting an appropriate packet from the queues for delivery to its destination output port.
The design of the scheduling strategy needs to consider both delay performance and fairness. On the one hand, the packet at the head of a queue should be transmitted preferentially to reduce overall delay; on the other hand, individual services should be prevented from occupying resources for a long time. The system employs algorithm 2 to solve this problem. Its input is the input port set I, the cache queue Q_i of each port, the weight vector W of the queues, and the scheduling slot length T_s; its output is the selected packet and its destination port.
The flow chart of algorithm 2 is shown in FIG. 3, and the specific working process is shown as pseudo-code:
1. initializing a slot counterOrder-makingIndicating the last scheduling time;
2.while // within a scheduling slot;
3.forto traversing all input ports;
4.if non-empty// queueA waiting packet is arranged in the buffer;
5. Dequeue head groupThe destination port is recorded as;
If portIdle:
7. Order theTransmittingAnd return to;
8.else:
9. Continuing circulation;
10.;
11. Returning to NULL;
Algorithm 2 examines the head-of-queue packet of each port in turn during a scheduling slot. Once a transmittable packet is found, the loop stops immediately and the packet is returned. If no available packet is found by the end of the slot, an empty result is returned and the next round of scheduling is awaited. The weight vector W is introduced to control the scheduling priority of each queue: the larger the weight w_i, the more easily the packets of port i are scheduled preferentially. The system can dynamically adjust the weights according to factors such as service type and service level, so as to realize differentiated QoS guarantees. The status of the output ports is also taken into account: the system performs switching only when the destination port of the packet is idle, and otherwise continues to check other queues to avoid unnecessary packet backlog.
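One pass of this scheduler can be sketched as below (illustrative only; representing each queue as a FIFO list of (payload, destination) pairs, and using the weights purely to order the port traversal, are interpretive assumptions):

```python
# Illustrative sketch of one scheduling pass of Algorithm 2.
# queues[i]: FIFO list of (payload, dest) pairs buffered at input port i.
# weights[i]: larger weight means port i is examined earlier (assumed policy).
# port_idle[j]: True if output port j is currently free.

def schedule_once(queues, weights, port_idle):
    """Pick one transmittable head-of-queue packet, or None if nothing fits."""
    order = sorted(range(len(queues)), key=lambda i: -weights[i])
    for i in order:                         # traverse ports by weight
        if queues[i]:                       # queue has a waiting packet
            payload, dest = queues[i][0]    # inspect the head of the queue
            if port_idle[dest]:             # destination output port is idle
                queues[i].pop(0)            # dequeue and transmit
                return payload, dest
    return None                             # nothing schedulable this pass
```

A busy destination port causes the scheduler to skip that queue and check the others, matching the backlog-avoidance behaviour described above.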
The specific working process of the capacity expansion module is as follows:
(1) Conversion of performance time slices and IP protocol:
When the system needs to interwork with an external IP network, the performance-type time slices trigger a protocol conversion mechanism. The mechanism encapsulates the time slice data into IP packets at the ingress node, transmits them over the backbone, and decapsulates them back into time slice format at the egress node. The conversion process needs to record and restore the priority and timing information of the original time slices, ensuring that the end-to-end QoS semantics are unchanged.
Specifically, for a high-priority time slice to be transmitted, the ingress node first extracts its payload data d, adds a serial number s and a timestamp ts, and generates a data unit:
U = (d, s, ts);
The system then encapsulates U as upper-layer data, generating a standard IP packet:
P_IP = Encap(U);
The packet P_IP enters the backbone network for transmission and reaches the egress node after multi-hop routing. The egress node decapsulates P_IP to obtain U and recovers the original time slice payload d. Using the serial number s and a local time reference, the egress node can calculate the target time at which the time slice should be inserted:
t_target = t_sync + ⌊s / N⌋ · T + (s mod N) · τ;
where N is the size of the slice group and T is the slice group period; the reference time t_sync may be synchronized in advance between the ingress and egress nodes. By the above method, the system can accurately position the time slice at its place in the new cycle. Through this switching mechanism, seamless connection can be realized among networks of different systems while maintaining the transmission timing of high-priority services.
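The encapsulation and timing-recovery steps can be sketched as follows (illustrative only; the field names of the data unit and a restore rule of the form t_target = t_sync + ⌊s/N⌋·T + (s mod N)·τ are assumptions consistent with the mechanism described here):

```python
# Illustrative sketch of the performance-slice <-> IP conversion.
# Field names ("d", "s", "ts") and the restore rule are assumptions.

def encapsulate(payload, seq, timestamp):
    """Build the data unit U = (d, s, ts) carried as upper-layer IP data."""
    return {"d": payload, "s": seq, "ts": timestamp}

def target_time(seq, t_sync, N, tau):
    """Target insertion time of slice number `seq` at the egress node."""
    T = N * tau                    # slice group period
    return t_sync + (seq // N) * T + (seq % N) * tau
```

With N = 8 and τ = 1, slice number 9 is restored one group period plus one slice after the synchronized reference time.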
(2) Capacity expansion of capacity type time slice
For the common data packets carried by the capacity type time slices, the forwarding nodes can directly extract the payloads and assemble the payloads into IP packets, and send the IP packets into a backbone network for transmission. After arriving at the sink node, the packets are delivered in the form of packets as well, without special handling. Capacity expansion of capacity type service does not involve timing synchronization, and implementation is relatively simple.
(3) Time synchronization and recovery
In order to support high-precision time slice restoration, the source node and the destination node need to be time-synchronized and establish a unified time reference. Synchronization may be achieved by a variety of means, such as GPS, atomic clocks, or PTP (Precision Time Protocol). In addition, a certain protection margin should be reserved to account for factors such as synchronization error and network jitter, so as to prevent the service quality from degrading due to time slice misalignment.
In addition, in the event of an abnormality such as a network failure, the continuity of the time slices may be broken. Therefore, the system also has a time slice recovery mechanism, and realizes the repair and retransmission of the time slices by means of redundant transmission, FEC and the like, thereby ensuring that the service is not interrupted.
Detailed Description
Taking two video files to be transmitted simultaneously as an example, one video file is a real-time video conference video, has strict requirements on transmission delay and jitter, belongs to performance type services, and the other video file is video on demand, is insensitive to delay and belongs to capacity type services.
Time slice partitioning of multimedia data: first, the video data is divided at a fixed time interval τ into time slices of equal length. The sequence number n of the time slice to which a video frame at time t belongs is:
n = ⌊t / τ⌋;
If N consecutive time slices form a slice group, the length T of the slice group is:
T = Nτ;
Within each time slice, the slice is further divided into a performance time slice S_P and a capacity time slice S_C. The data packets of the real-time video conference are marked as performance type and preferentially occupy S_P, while the data packets of video on demand are marked as capacity type and can only use S_C.
Direct exchange of performance data packets: for the real-time video conference, the data packets are padded into the performance time slice S_P. Suppose that at a certain moment a video frame of size L needs to be transmitted, and the payload of a single time slice is B ≥ L; the frame can then be fully loaded into one performance time slice.
The control plane issues the forwarding rules of the performance-type service to each switching node according to the network topology and the routing calculation result. A forwarding rule contains a connection matrix, for example:
M = [0 1; 1 0];
representing that input port 1 is directly connected to output port 2, and input port 2 is directly connected to output port 1.
When a performance time slice arrives at a switching node, the node directly switches the time slice to the target port according to the connection matrix without queuing, thereby minimizing the forwarding delay. For example, for the connection matrix described above, suppose that at a certain moment a performance time slice arrives at input port 1; the node immediately switches it to output port 2, with a switching delay close to zero.
After multi-hop direct switching, the performance time slices reach the destination node. The destination node extracts the payload data, restores the original video frame and realizes the low-delay transmission of the real-time video conference.
Buffer exchange of capacity-type data packets: for the video-on-demand service, the data packets are filled into the capacity time slice S_C. Unlike performance time slices, capacity time slices need to be buffered and scheduled at the switching node.
Suppose that at a certain moment two video-on-demand data packets p_1 and p_2 arrive at two input ports of the switching node. The node maintains a buffer queue for each port and stores p_1 and p_2 into the corresponding queues respectively.
The scheduling module of the node then executes a packet scheduling algorithm to select appropriate packets for switching based on the queue weights and packet latencies.
Input: the input port set I, the cache queue Q_i of each port, the weight vector W of the queues, and the scheduling slot length T_s;
Output: the selected packet and its destination port.
Algorithm steps:
1. Initialize the slot counter t ← 0, and let t_last denote the last scheduling time;
2. while t < T_s: // within a scheduling slot
3.   for i ← 1 to |I|: // traverse all input ports
4.     if Q_i is non-empty: // queue Q_i has a waiting packet
5.       dequeue the head packet p and record its destination port as j;
6.       if port j is idle:
7.         let t_last ← t, transmit p, and return (p, j);
8.       else:
9.         continue the loop;
10. t ← t + 1;
11. Return NULL;
For example, let the weights of the two queues be w_1 = w_2, indicating that both queues have the same scheduling priority, and let the scheduling slot length be T_s = 2τ, i.e. each scheduling period is two time slices long.
Suppose that within one scheduling slot the node selects packet p_1 for switching; p_1 is then sent to its target output port, while p_2 continues to wait for scheduling.
Through several scheduling slots, all capacity-type packets are drained and finally reach the destination node. Compared with performance-type packets, capacity-type packets are allowed a certain amount of delay and jitter, but as long as the caching and scheduling algorithm is reasonably designed, a good transmission experience can still be obtained.
Interconnection of different networks: if the source node and the destination node are located in different networks, protocol conversion needs to be realized through a gateway or a boundary node.
For performance time slices, the gateway converts them into IP packets and carries the timestamp information in the packet header. After a packet has traversed the backbone network and reached the gateway on the other side, that gateway reassembles the IP packet into a performance time slice according to the timestamp and adds it into a new slice group for switching. In this way, end-to-end delay guarantees and QoS mapping can be achieved.
For the capacity type time slice, the gateway directly extracts the payload data, encapsulates the payload data into common IP packets and transmits the common IP packets in the backbone network. After the packet arrives at the gateway on the other side, the packet is reassembled into a capacity time slice, and the capacity time slice is inserted into a new slice group to participate in switching scheduling.
Through this hierarchical switching and protocol conversion mechanism, networks of different systems can be connected seamlessly, and both the high-priority real-time video and the ordinary video-on-demand service obtain corresponding service quality guarantees.
Synchronization mechanism: in order to ensure lossless delivery of performance time slices between heterogeneous networks, the source node and destination node need to establish a synchronization mechanism. The synchronization information may be communicated via a separate signaling channel, or may be appended to the time slice payload.
The synchronization information should at least contain: the time slice length τ, the slice group size N, the slice group period T, the time slice sequence number s, etc. After receiving a performance-type IP packet with a timestamp, the sink gateway extracts the synchronization information and maintains the same time slice sequence number generator as the source node:
n = ⌊(t_local − t_0) / τ⌋;
where t_local is the local time of the gateway and t_0 is the start time after synchronization; factors such as transmission delay and clock drift need to be considered. The gateway fills the extracted video data into the performance time slice with sequence number n and adds it to the current slice group, thereby realizing lossless restoration of the time slices.
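The gateway's sequence number generator of the form n = ⌊(t_local − t_0)/τ⌋ can be sketched as below (illustrative only; compensation for transmission delay and clock drift is omitted for brevity):

```python
# Illustrative sketch of the sink gateway's time slice sequence number
# generator: n = floor((t_local - t0) / tau). Delay and clock-drift
# compensation, which the text says must be considered, is omitted here.

def gateway_seq(t_local: float, t0: float, tau: float) -> int:
    """Sequence number the sink gateway assigns at local time t_local."""
    return int((t_local - t0) // tau)
```

Because both endpoints derive the sequence number from the shared reference t_0 and the common slice length τ, the generator stays aligned with the source node as long as the clocks remain synchronized.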
The capacity time slices do not need strict synchronization, and the slice groups are only filled according to the arrival sequence of the packets.
Through the steps, the hierarchical multimedia switching system can simultaneously and efficiently transmit two services of real-time video conference and video on demand with high quality. The core mechanism of the system is that the direct exchange facing the performance and the cache scheduling facing the capacity are combined, and the time slice synchronization and the protocol conversion are added, so that the end-to-end service quality assurance and the seamless interconnection are finally realized.
The above is a detailed embodiment of the hierarchical multimedia exchange method. The method fully considers the QoS differences of different services: it not only meets the low-delay requirements of real-time multimedia services but also takes into account the transmission efficiency of general data. Key techniques such as the time slice division and preemption mechanism, the direct-exchange connection matrix, and the weighted polling algorithm for capacity scheduling constitute a complete solution.
Aiming at the differentiated QoS requirement of the multimedia data, the hierarchical multimedia switching system provided by the invention realizes the refined priority division and resource isolation at the network layer, and compared with the traditional scheme, the hierarchical multimedia switching system has the following beneficial effects:
Through the time slice preemption mechanism and direct exchange, high-priority services obtain deterministic bandwidth guarantees and extremely low transmission delay, avoiding competition with ordinary services for resources. The flexible time slice division mechanism allows QoS differentiation at different granularities and can meet the diversified requirements of multimedia services.
For performance time slices, the system performs direct exchange based on the connection matrix to minimize queuing delay; for capacity-type packets, it combines buffered-exchange scheduling to improve throughput while ensuring fairness. Combining store-and-forward with direct exchange supports both real-time and ordinary traffic and improves overall efficiency.
Flexible scalability is supported: by introducing a protocol-independent time slice abstraction, seamless interconnection of heterogeneous networks is realized. The high-precision time synchronization mechanism ensures the QoS of transmissions across networks. The flexible capacity expansion mode not only supports interfacing with the backbone network but can also provide high-quality service locally, facilitating multi-level coverage.
The hierarchical exchange scheme of the invention strikes a good balance among service quality, system efficiency and scalability, and provides a new scheme for the construction of next-generation multimedia networks.