
Parameterized quality of service architecture in the network

Info

Publication number
CN101632268A
CN101632268A (application CN200880007820A)
Authority
CN
China
Prior art keywords
node
network
service flow
l2me
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN200880007820A
Other languages
Chinese (zh)
Other versions
CN101632268B (en)
Inventor
S·茵德吉特
B·希斯洛普
A·萨夫达尔
R·黑尔
Z·吴
I·辛格
S·欧瓦迪亚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Entropic Communications LLC
Original Assignee
Entropic Communications LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Entropic Communications LLC
Priority claimed from PCT/US2008/053222 (WO2008098083A2)
Publication of CN101632268A
Application granted
Publication of CN101632268B
Current legal status: Expired - Fee Related
Anticipated expiration


Abstract

A communication system and method include the steps of: receiving a first request to initiate a guaranteed quality of service flow in a network; broadcasting a second request from a network coordinator to a plurality of nodes connected to the network; and receiving, from at least one ingress node, a first response to the second request. The method also includes receiving, from at least one egress node, a second response to the second request indicating whether that egress node has available resources to receive the guaranteed quality of service flow; and, if the at least one ingress node has available resources to transmit the guaranteed quality of service flow and the at least one egress node has available resources to receive it, allocating resources for the guaranteed quality of service flow.

Description

Parameterized quality of service architecture in a network
Priority requirement
This application claims priority to U.S. provisional application 60/900,206 filed on February 6, 2007, U.S. provisional application 60/901,564 filed on February 14, 2007, U.S. provisional application 60/927,613 filed on May 4, 2007, U.S. provisional application 60/901,563 filed on February 14, 2007, U.S. provisional application 60/927,766 filed on May 4, 2007, U.S. provisional application 60/927,636 filed on May 4, 2007, and U.S. provisional application 60/931,314 filed on May 21, 2007, each of which is hereby incorporated by reference.
FIELD OF THE DISCLOSURE
The disclosed methods and apparatus relate to communication protocols in networks, and more particularly to quality of service protocols in networks.
Background
In addition to computers, home networks now typically include multiple types of subscriber equipment configured to deliver subscriber services through the home network. Subscriber services include the delivery of multimedia, such as streaming audio and streaming video, to subscriber equipment over a home network, where the multimedia is presented to a user at the subscriber equipment. As the number of available subscriber services increases, the number of devices connected to the home network also increases. The increase in the number of services and devices increases the complexity of coordination between network nodes, as each node may be produced by a different manufacturer at a different time. Some home networking technologies have emerged in an attempt to facilitate simple home network solutions and to take advantage of the network infrastructure that may already exist in many homes. For example, the Home Phoneline Networking Alliance (HPNA) enables users to network home computers by using the existing telephone and coaxial cable wiring in the home. HPNA-enabled devices utilize a different spectrum than that used by fax and telephone. Rather than using existing telephone and coaxial cabling, the HomePlug Powerline Alliance utilizes existing power wiring in the home to create a home network. One problem with HomePlug is that network bandwidth is susceptible to significant reductions due to large variations in the reactive loads on the home power wiring and outlets.
In addition, these technologies can create problems in implementing network devices that properly interact with other network devices. These problems may limit the deployment of newer devices that provide later-developed services in the presence of older (legacy) devices. The emerging Multimedia over Coax Alliance (MoCA) standard architecture addresses this problem in the following respects: (1) the network dynamically assigns the "network coordinator" (NC) role to a device in order to optimize performance, (2) only the device in the NC role can schedule traffic for all other nodes in the network, and (3) a fully meshed network architecture is formed between any device and its peers.
In the case where many potential applications share the same digital network, the various applications must compete for the same limited bandwidth, making the distribution problem more complex. Bandwidth intensive applications such as high throughput downloads may result in degradation of other more important applications sharing the network. This result is unacceptable when the other application requires high quality of service.
Various solutions to this problem have been proposed, typically involving advanced network controllers or advanced applications that prioritize data packets or data traffic within the network. Moreover, such intelligent network devices require high computational power and are therefore more expensive than they need to be. Finally, complex network devices are impractical for home use because most consumers do not have the skill or experience to configure a computer network.
Summary of the disclosure
In one embodiment, a method of communication includes the steps of: (1) receiving a first request to initiate a guaranteed quality of service data flow in a network; (2) broadcasting a second request from the NC to a plurality of nodes connected to the network; and (3) receiving a first response to the second request from at least one ingress node. The second request is based on the first request, and the first response indicates whether the at least one ingress node has available resources to transmit the guaranteed quality of service flow. The method also includes receiving a second response to the second request from at least one egress node, indicating whether that egress node has available resources to receive the flow; and allocating resources for the guaranteed quality of service flow if the ingress node(s) have available resources to transmit the flow and the egress node(s) have available resources to receive it.
In another embodiment, a system includes a physical interface connected to a coordinated network and a quality of service module coupled to the physical interface. The physical interface is configured to transmit and receive messages over the coordinated network. The quality of service module is configured to admit one or more guaranteed quality of service flows in the coordinated network via a plurality of layer 2 messages.
Brief Description of Drawings
Fig. 1 illustrates an embodiment of a network architecture.
Fig. 2 is a diagram illustrating two L2ME wave periods according to the embodiment of fig. 1.
Fig. 3 illustrates a block diagram of an L2ME frame according to the embodiment of fig. 1.
FIG. 4 is a block diagram of a layer 2 management entity transaction protocol according to one embodiment.
Fig. 5 illustrates an embodiment of a parameterized quality of service network architecture.
FIG. 6 illustrates the conversion of TSpec XML to QSpec by the L2ME.
FIG. 7 is a diagram illustrating a create/update transaction according to the embodiment of FIG. 5.
FIG. 8 is a diagram illustrating a delete transaction according to the embodiment of FIG. 5.
FIG. 9 is a diagram illustrating a list transaction according to the embodiment of FIG. 5.
FIG. 10 is a diagram illustrating a query transaction according to the embodiment of FIG. 5.
FIG. 11 is a diagram illustrating a maintenance transaction according to the embodiment of FIG. 5.
Overview
One system disclosed herein, shown in fig. 5 and described in more detail below, includes a physical interface 512, such as a Multimedia over Coax Alliance (MoCA) PHY layer, connected to a coordinated network 502, such as a MoCA 1.1 network. Physical interface 512 is configured to transmit and receive messages over coordinated network 502. The system also includes a quality of service (QoS) manager 520 connected to a layer 2 management entity (L2ME) 516. QoS manager 520 is configured to admit one or more guaranteed quality of service data flows, such as unidirectional traffic flows from a single ingress node (source device) to one or more egress nodes (sink devices), into coordinated network 502 through a plurality of layer 2 messages managed by L2ME 516.
One network architecture disclosed herein supports parameterized quality of service (pQoS) in a managed network. In a pQoS-enabled network, the data flows in the network may include guaranteed (parameterized) and/or best-effort data flows. A guaranteed (parameterized) flow is assured at least the performance level defined by the predetermined parameters of the flow established during admission (establishment), as discussed in detail below. If a parameterized stream has no data to transmit during its time slot, the time slot reserved for that parameterized stream may be made available to other streams. In the disclosed architecture shown in fig. 6 and described in more detail below, a node 604, such as a c-link data link layer, transmits a QoS initiation request to a Network Coordinator (NC) node 606 to initiate a guaranteed quality of service flow having at least one quality parameter. The NC 606 broadcasts a layer 2 request to all nodes 604, 608 that includes at least one parameter from the QoS initiation request. A plurality of nodes 608, including an ingress node 508 and an egress node 510, transmit responses to the broadcast request indicating whether the ingress node 508 has resources available to transmit the flow and whether the egress node 510 has resources to receive the flow. If the received responses indicate that the ingress and egress nodes 508, 510 each have resources to establish the flow, the NC node 606 broadcasts a message to the plurality of nodes 608 indicating that the nodes 608 should commit resources to the flow.
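The admission decision summarized above can be sketched as follows. This is a minimal illustration, not the patented implementation; the class and function names, the message model, and the single QoS parameter shown are all assumptions made for the example.

```python
# Illustrative sketch of the overview's admission decision: the NC commits
# resources only if the ingress node can transmit and every egress node can
# receive the requested flow. Names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class FlowRequest:
    flow_id: str
    ingress_id: int
    egress_ids: list          # one or more egress (sink) node IDs
    peak_data_rate_kbps: int  # example QoS parameter carried in the layer 2 request

@dataclass
class NodeResponse:
    node_id: int
    can_transmit: bool  # meaningful for the ingress node
    can_receive: bool   # meaningful for egress nodes

def nc_admit_flow(request: FlowRequest, responses: list) -> bool:
    """Return True if the NC should broadcast a 'commit resources' message."""
    by_id = {r.node_id: r for r in responses}
    ingress = by_id.get(request.ingress_id)
    ingress_ok = ingress is not None and ingress.can_transmit
    egress_ok = all(
        by_id.get(e) is not None and by_id[e].can_receive for e in request.egress_ids
    )
    return ingress_ok and egress_ok
```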
Detailed Description
This description of the embodiments is intended to be read in connection with the accompanying drawings, which are to be considered part of the entire written description.
Embodiments are generally directed to apparatuses, systems, methods, and architectures to support a low-level messaging framework in a network. Some embodiments facilitate layer 2 messaging to enable low cost and high speed management of resources within a network, thereby securing the ability to distribute multimedia data (such as video/audio, games, images, general data, and interactive services) within existing home networks.
Embodiments facilitate simplifying home networking equipment so that it is easier to use and more cost effective. In other words, the home network should be simple to configure, so that the home user does not have to deal with complex configuration menus or possess advanced knowledge of computer networks. Embodiments also address configuration and cost issues by implementing a low-level digital transmission framework that does not require a large amount of computational power. This low-level framework may be considered an extension of the Medium Access Control (MAC) sublayer or the Physical (PHY) network layer, and is referred to as a "layer 2 messaging framework".
Layer 2 messaging may be implemented in various networks where spectrum is shared and negotiated due to the introduction or removal of nodes and the evolution of network signaling capabilities. In some embodiments, the network is a coordinated network having a Network Coordinator (NC) that coordinates communications between several devices connected to the network. Coordination is achieved by the NC allocating time slots to network devices during which these devices may transmit or receive MAC messages, probes, and data. Network devices connected to the coordinated network may include managed devices and unmanaged devices. Examples of such networks include a coaxial network according to the multimedia over coax alliance (MoCA) standard, a wired network on "twisted pair" wires, or a wireless home network. Embodiments are described herein as being implemented with 8 or 16 nodes within a network. However, other embodiments may incorporate extensions to implement any number of nodes within various networks. Further, embodiments may include systems, methods, and devices having a layer 2 messaging architecture and protocols to support end-user applications and vendor-specific services.
Embodiments will now be described with reference to a layer 2 management entity (L2ME) architecture and messaging protocol for a digital network. Some embodiments support application layer triggered transactions such as, but not limited to, universal plug and play (UPnP) quality of service and IEEE Stream Reservation Protocol (SRP). Layer 2 messaging protocols may implement capabilities such as parameterized quality of service (pQoS) transactions within a network. Note that the interface between L2ME and the application layer may be different.
Fig. 1 illustrates a coordinated mesh network architecture 100 having a plurality of network nodes 104, 106, 108, 110 connected to a network 102. Network node 106 is an NC node and is shown configured with PHY layer 112, MAC sublayer 114, and L2ME 116. Note that any network node may have multiple physical interfaces and may implement upper layer functions (e.g., TCP/IP, UDP, etc.). Network node 104 is an Entry Node (EN). Each of nodes 104, 108, and 110 may also be configured with L2ME 116.
L2ME 116 provides layer 2 interfaces and management services by which layer 2 management functions may be invoked. Based on transactions initiated by end-user applications, L2ME 116 is responsible for executing and managing all L2ME transactions, such as parameterized quality of service, between network nodes 104, 106, 108, and 110. L2ME 116 includes two sublayers: an upper transaction protocol sublayer 120 and a lower wave protocol sublayer 118. The L2ME wave protocol sublayer 118 is a high-reliability messaging mechanism in L2ME 116 configured with its own messaging protocol. The L2ME wave protocol enables network nodes to participate in robust, network-wide, low-latency general transactions and enables NC node 106 to manage flows for low-cost audio/video bridge devices on a home network having multiple layer 2 quality of service segments, such as devices according to the IEEE 802.1Qat/D0.8 draft standard (July 2007).
L2ME wave protocol
The L2ME wave protocol provides reliable transport services for the L2ME transaction protocol by generating multiple wave cycles. An L2ME wave includes one or more L2ME wave cycles. A wave cycle includes the transmission of a message from the NC to one or more nodes and the reception of corresponding responses to the message from those nodes. A wave cycle begins when NC node 106 broadcasts a particular payload, such as a request, to all nodes 104, 108, 110 connected to network 102. In one embodiment, NC node 106 first classifies all nodes in the WAVE_NODEMASK field (described in more detail below) into 3 categories before initiating a wave cycle. The first class of nodes ("class 1 nodes") includes network nodes that have not yet been specified in the CYCLE_NODEMASK field of a request L2ME frame issued by NC node 106. The second class of nodes ("class 2 nodes") includes network nodes that have been identified in the CYCLE_NODEMASK field of a request L2ME frame issued by NC node 106 but from which NC node 106 has not received a response. The third class of nodes ("class 3 nodes") includes network nodes from which NC node 106 has received a response L2ME frame.
After NC node 106 has classified each of network nodes 104, 108, 110 as a class 1, 2, or 3 node, NC node 106 constructs the CYCLE_NODEMASK according to the following guidelines. First, if there are 3 or more class 1 nodes, NC node 106 sets a corresponding number of bits in the CYCLE_NODEMASK to "1". If there are 3 or more class 1 nodes, the number of bits set by NC node 106 in the CYCLE_NODEMASK may be less than the total number of class 1 nodes, but not less than 3. For example, if there are 5 class 1 nodes, NC node 106 may set 3, 4, or 5 bits in the CYCLE_NODEMASK to "1". Second, if there are 3 or more class 2 nodes, NC node 106 sets 3 or more bits in the CYCLE_NODEMASK corresponding to class 2 nodes to "1". Third, if there are no class 1 nodes, or if all of the bits in the CYCLE_NODEMASK that correspond to class 1 nodes have already been set to "1", NC node 106 sets the bits in the CYCLE_NODEMASK that correspond to class 2 nodes to "1". Finally, NC node 106 may set as many bits to "1" in the CYCLE_NODEMASK as NC node 106 can receive responses for without disrupting network service. Once the CYCLE_NODEMASK has been generated, NC node 106 initiates a wave cycle by broadcasting an L2ME message that includes the CYCLE_NODEMASK.
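The node classification and mask-building guidelines above can be approximated in code. The following is a simplified sketch under stated assumptions (a flat node-ID space, a hypothetical per-cycle limit); it is not the normative CYCLE_NODEMASK algorithm.

```python
# Simplified approximation of the CYCLE_NODEMASK guidelines: class 1 nodes are
# those not yet listed, class 2 are listed but unanswered, class 3 have
# responded; at least 3 bits are set per cycle when enough candidates remain.
def build_cycle_nodemask(wave_nodes, already_cycled, responded, max_per_cycle=8):
    class1 = [n for n in wave_nodes if n not in already_cycled]
    class2 = [n for n in wave_nodes if n in already_cycled and n not in responded]

    selected = []
    if len(class1) >= 3:
        selected.extend(class1[:max(3, min(len(class1), max_per_cycle))])
    if len(selected) < max_per_cycle:
        # No class 1 nodes left (or all already selected): fall back to class 2.
        selected.extend(class2[:max_per_cycle - len(selected)])

    mask = 0
    for node_id in selected:
        mask |= 1 << node_id     # bit position == node ID
    return mask
```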
The wave cycle is complete when NC node 106 receives a corresponding payload, such as a response, from some or all of nodes 104, 108, 110, or when the NC node's timer expires. For example, NC node 106 transmits a message and then starts its timer. If the timer of NC node 106 reaches T21 (e.g., 20 milliseconds) before responses are received from some or all of the network nodes identified in the CYCLE_NODEMASK, the wave cycle is complete even though NC node 106 has not received every response message. Note that T21 is the maximum allowable time interval between NC node 106 transmitting a request L2ME frame and a requested node transmitting the corresponding response L2ME frame. The L2ME wave cycle completes successfully when each of the nodes identified in the WAVE_NODEMASK field of the payload has responded. Likewise, if network nodes 104, 108, 110 are all classified as class 3 nodes before the timer of NC node 106 reaches T21, the wave cycle is successful. Conversely, if NC node 106 does not receive a response L2ME frame from a class 2 node whose corresponding bit in the CYCLE_NODEMASK transmitted by NC node 106 is set to "1", the wave cycle is unsuccessful, or fails. If the wave cycle fails, NC node 106 repeats the wave cycle by sending a multicast message only to those nodes from which NC node 106 did not receive a response L2ME frame. Note that in one embodiment, for the purpose of repeating the wave cycle by sending a multicast message to the non-responding nodes, multicast messages are treated the same as broadcast messages. NC node 106 will complete the scheduled wave cycle before creating a new wave cycle for any node from which a response was not received.
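A compact sketch of the wave-cycle outcome rules follows. The constant value and function name are assumptions for illustration only; the text above gives 20 ms as an example value of T21.

```python
# Hypothetical helper: a cycle succeeds when every node addressed in
# CYCLE_NODEMASK has answered, stays pending until T21 expires, and fails
# otherwise; failed cycles are repeated toward the missing nodes only.
T21_SECONDS = 0.020  # maximum allowed request->response interval (example value)

def wave_cycle_outcome(cycle_nodemask: int, responded_mask: int, elapsed: float):
    if responded_mask & cycle_nodemask == cycle_nodemask:
        return "success", 0
    if elapsed < T21_SECONDS:
        return "pending", 0
    missing = cycle_nodemask & ~responded_mask
    # Failure: the NC repeats the cycle, addressing only the missing nodes.
    return "failed", missing
```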
Fig. 2 is an example of an L2ME wave diagram 200 showing two wave cycles 214, 216. The first wave cycle 214 is initiated when NC node 206, which has a node ID of 2, broadcasts a message with a payload to all nodes 202, 204, 208, 210, 212 connected to network 102. In this example, the payload includes a node bitmask 011011, where the rightmost bit corresponds to the node with a node ID of 0. This bitmask indicates that NC node 206 expects to receive a payload containing a WAVE_ACK from nodes 202, 204, 208, and 210. As shown in FIG. 2, prior to the expiration of the NC node 206 timer, NC node 206 receives response L2ME frames only from nodes 202, 204, and 208, while the response L2ME frame from node 210 is either lost or not received. Expiration of the timer in NC node 206 completes the first wave cycle 214, but does not end the transaction.
Since NC node 206 has not received a response L2ME frame from node 210, NC node 206 sends another request L2ME frame to node 210, initiating the second wave cycle 216. The request sent to node 210 is also sent to node 212 and includes a node bitmask 110000 that requests nodes 210 and 212 to send a WAVE_ACK to NC node 206. The response L2ME frames from nodes 210 and 212 are subsequently received by NC node 206, completing wave cycle 216.
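The bitmask convention used in Fig. 2 (bit position equals node ID) can be checked with a small helper; this snippet is illustrative only.

```python
# Small helper showing how the node bitmasks in Fig. 2 are read: bit 0 is
# node ID 0, so 0b011011 addresses node IDs 0, 1, 3 and 4.
def nodes_in_mask(mask: int):
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

assert nodes_in_mask(0b011011) == [0, 1, 3, 4]   # first wave cycle in Fig. 2
assert nodes_in_mask(0b110000) == [4, 5]         # second wave cycle in Fig. 2
```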
L2ME transaction protocol
The L2ME transaction protocol is the higher sublayer protocol of the L2ME that uses multiple L2ME waves to achieve network-wide transactions. In general, an L2ME transaction includes j+1 waves (where j = 0, 1, 2, ...) and is initiated by an EN or the NC node. An EN may be any network node, including the NC node, that initiates an L2ME transaction based on an end-user application. In the last L2ME wave, the requested result is returned by the NC node to the EN. The L2ME transaction is complete when the requested network node provides its final response. In one embodiment, only one L2ME transaction is executed or pending at any given time within the network. For a failed L2ME wave, the resulting NC node actions depend on the specific L2ME transaction type and wave number.
In general, all L2ME transaction messages fall into 3 categories during a transaction: (1) submit; (2) request; and (3) response. Nodes that do not use L2ME messages, such as legacy nodes that are not configured with L2ME, may simply drop these messages. Nodes not configured with L2ME may nonetheless receive L2ME messages because L2ME messages are embedded within the original MAC messaging framework. Fig. 3 illustrates one example of a MAC frame 300. The MAC frame 300 includes a MAC header 302, a MAC payload 304, and a MAC payload Cyclic Redundancy Check (CRC) 310. The L2ME frame is embedded within the MAC payload 304 and includes an L2ME header 306 and an L2ME payload 308.
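The nesting described for Fig. 3 can be expressed as a rough data layout. Only the nesting (L2ME header and payload inside the MAC payload, followed by the MAC payload CRC) comes from the description; the field types below are placeholders.

```python
# Rough layout sketch of Fig. 3 using placeholder types; the reference numerals
# in the comments correspond to the figure description above.
from dataclasses import dataclass

@dataclass
class L2MEFrame:
    header: bytes    # L2ME header 306
    payload: bytes   # L2ME payload 308

@dataclass
class MACFrame:
    mac_header: bytes      # MAC header 302
    l2me: L2MEFrame        # embedded in the MAC payload 304
    payload_crc: int       # MAC payload CRC 310

    def mac_payload(self) -> bytes:
        return self.l2me.header + self.l2me.payload
```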
Submit L2ME message
The submit L2ME message carries an application-initiated request from the EN to the NC node, where an L2ME wave transaction may be initiated. The EN is generally responsible for managing the various phases of the transaction, while the NC node is responsible for broadcasting requests, gathering the responses of each node, and providing the transaction results to the EN that transmitted the submit message. Table 1 below illustrates one example of a submit L2ME frame format, which includes a submit L2ME frame header and payload.
Table 1-submit L2ME message format
The submit L2ME frame header includes an 8-bit ENTRY_TRANSACTION_ID field. The ENTRY_TRANSACTION_ID field is the transaction ID of the entry node, which starts at "1" and is incremented each time a submit message is sent to the NC node. The ENTRY_TRANSACTION_ID value "0" is reserved for the NC node when there is no EN. Any L2ME transaction that originates from a submit message carries this transaction ID. Note that the combination of the entry node ID and the transaction ID uniquely identifies each L2ME transaction in the network, thereby enabling an EN to know that its transaction has been triggered. Further, uniquely identifying each transaction enables the EN to identify and cancel any attempt by the NC node to begin a transaction for which the EN has already timed out while waiting for it to begin. The composition and length of the L2ME_PAYLOAD field depend on the specific VENDOR_ID, TRANSACTION_TYPE, and TRANSACTION_SUBTYPE fields. VENDOR_ID is a 16-bit field in the submit and request L2ME messages indicating that various fields of these messages are used in a vendor-specific manner. For example, the VENDOR_ID range assigned to Entropic Communications is 0x0010 to 0x001F, while the values 0x0000 to 0x000F are assigned to MoCA. The length of the L2ME_PAYLOAD field may be shorter than or equal to L_SUB_MAX. Also note that the submit and request messages associated with a given L2ME transaction carry the same set of VENDOR_ID, TRANSACTION_TYPE, and TRANSACTION_SUBTYPE values.
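The transaction-ID bookkeeping described above can be sketched as follows. The class and dictionary keys are hypothetical; the 8-bit width, the reserved value "0", and the vendor ID ranges are taken from the text.

```python
# Illustrative sketch: the EN increments ENTRY_TRANSACTION_ID for every submit,
# and (entry node ID, transaction ID) uniquely identifies a transaction.
class EntryNode:
    def __init__(self, node_id: int):
        self.node_id = node_id
        self.entry_transaction_id = 0   # value 0 is reserved for the NC (no EN)

    def next_submit_header(self, vendor_id=0x0000, txn_type=0, txn_subtype=0):
        # 8-bit counter that starts at 1 and skips the reserved value 0 on wrap.
        self.entry_transaction_id = (self.entry_transaction_id % 0xFF) + 1
        return {
            "ENTRY_NODE_ID": self.node_id,
            "ENTRY_TRANSACTION_ID": self.entry_transaction_id,
            "VENDOR_ID": vendor_id,      # 0x0000-0x000F MoCA, 0x0010-0x001F Entropic
            "TRANSACTION_TYPE": txn_type,
            "TRANSACTION_SUBTYPE": txn_subtype,
        }

    def transaction_key(self):
        # Unique network-wide identifier for the pending transaction.
        return (self.node_id, self.entry_transaction_id)
```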
Request L2ME message
During the transaction wave, the NC node broadcasts a request L2ME frame message to all nodes. In one embodiment, where the NC node has received a submit message, the NC node broadcasts the request L2ME frame message as a result of that submit message. In some cases, when the NC node acts as an EN, as described below, no submit message is transmitted and the NC node initiates the transaction by issuing a request L2ME frame message on its own behalf. For example, when an NC node initiates a management transaction, no submit L2ME frame is needed and the transaction begins with a request L2ME frame. Each node that receives the request L2ME frame message is expected to respond to the NC node with the results of the transaction requested by the NC node in the payload. Table 2 shows the request L2ME frame message header and payload format, which is similar to the submit L2ME frame format; the MAC header is not shown.
Table 2-request L2ME frame message format
In this message, the ENTRY_NODE_ID is copied from the initiating submit message. If the request message originates from an EN-less L2ME transaction, such as an NC management transaction, the ENTRY_TRANSACTION_ID has no meaning and the field value is set to "0". In the first L2ME wave, the WAVE_NODEMASK value equals that of the submit message. In the last L2ME wave of the transaction, the value of this field contains the set of nodes that will be part of the last wave. Otherwise, the WAVE_NODEMASK value corresponds to the set of nodes that set the PARTICIPATE_NEXT_WAVE bit in their response to the previous request. The CYCLE_NODEMASK is a bitmask of nodes, where each bit position corresponds to a node ID (i.e., bit 0 corresponds to node ID 0). The bit corresponding to a node is set if the NC node instructs that node to provide a response upon receiving the request message. In addition, the request message includes a WAVE_STATUS field that indicates whether the previous wave cycle failed or completed successfully. Note that the allowed values of the WAVE_STATUS field are 0, 1, 2, and 4; if the RESP_FAIL and/or NC_CANCEL_FAIL bits are set, this is the last L2ME wave of the transaction, and any subsequent waves may contain the L2ME_PAYLOAD field of the failed transaction.
The payload of a request frame for an L2ME wave (other than wave 0) is typically formed by concatenating the responses from the nodes in the previous wave. The concatenation is formed as follows: when a response L2ME frame arrives at the NC node from a given node, its payload is appended to the end of the response queue at the NC node, and the length of the payload and the ID of the transmitting node are written into a data structure called the directory. When the NC node is ready to send the next request L2ME frame, it places the length of the directory in the DIRECTORY_LENGTH field, copies the directory to the beginning of the payload, and then copies the response queue into the rest of the payload.
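The concatenation rule can be sketched as below. The structure names are hypothetical, the directory entries are modeled as simple (node ID, length) byte pairs, and lengths are assumed to fit in one byte purely for illustration.

```python
# Sketch of the concatenation rule above: each arriving response payload is
# appended to a response queue, its length and source node ID are recorded in
# the directory, and the next request payload is directory + queue.
class ResponseAggregator:
    def __init__(self):
        self.directory = []       # list of (node_id, payload_length) entries
        self.response_queue = b""

    def on_response(self, node_id: int, payload: bytes):
        self.response_queue += payload
        self.directory.append((node_id, len(payload)))

    def next_request_payload(self) -> bytes:
        directory_bytes = b"".join(
            node_id.to_bytes(1, "big") + length.to_bytes(1, "big")
            for node_id, length in self.directory   # toy 2-byte directory entries
        )
        # DIRECTORY_LENGTH is carried in the request header (not modeled here).
        return directory_bytes + self.response_queue
```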
The DIRECTORY_LENGTH field indicates the length of the directory in the payload portion of the request L2ME frame message. The L2ME_PAYLOAD field used in the request L2ME frame message has 4 different types, as follows:
1. If this is the first L2ME wave of a given transaction, the first type of L2ME_PAYLOAD is the same as the payload of the submit message. The length of this L2ME_PAYLOAD field may be less than or equal to L_SUB_MAX, which is the maximum number of bytes in the submit L2ME frame payload.
2. From the second wave to the last wave of the transaction, the second type of request L2ME frame payload is sent as a report from the NC node to the participating nodes, as shown in Table 3 below. The L2ME_PAYLOAD field includes a directory of 16 entries, with a 2-byte entry from each node, and a RESPONSE_DATA field, which is a concatenation of the variable-length response L2ME frames from each participating L2ME node that provided a response in the previous wave. This directory enables a receiving node to decode the L2ME responses from all nodes.
Table 3-request "concatenated" L2ME frame payload format
3. The third type of L2ME_PAYLOAD is used for a failed L2ME transaction, in which the RESP_FAIL bit or NC_FAIL bit is set to "1". The NC node may transmit a 0-length payload in the request message of the last L2ME wave.
4. The fourth type of L2ME_PAYLOAD is used to support certain specific L2ME transactions, such as parameterized quality of service. In this payload, the DIRECTORY_LENGTH in the request L2ME frame header is unused and the NC node processes the responses of all nodes to produce a custom request frame payload. The format of the L2ME_PAYLOAD field is defined by the particular L2ME transaction. Note that a request frame without a payload includes a 64-bit Type III reserved field.
Response L2ME message format
The response L2ME frame format is shown in Table 4 below. At the end of each L2ME wave, a response L2ME frame is unicast from each L2ME-transaction-capable node to the NC node. In some embodiments, the NC node may be configured to receive multiple (e.g., 3 or more) responses from the requested nodes simultaneously.
Table 4-response L2ME frame format
The response L2ME message includes a RESP_STATUS field that indicates the response status of the node that is requested to respond in the next or last wave cycle. In addition, the RESP_STATUS field enables an EN to cancel a transaction that it initiated by sending a submit message to the NC node but for which it has timed out waiting for the transaction to begin.
If an L2ME-enabled network node receives any L2ME transaction message with an unrecognized VENDOR_ID, TRANSACTION_TYPE, or TRANSACTION_SUBTYPE field value, the node may set the RESP_STATUS field to "0" in its response frame, and the NC node may exclude this node from future waves of the transaction. The EN and any other nodes that set the PARTICIPATE_LAST_WAVE bit in any response may be included in the WAVE_NODEMASK of the last wave.
L2ME transaction overview
An L2ME transaction may be initiated in a variety of ways, although only one L2ME transaction may typically be performed at any given time within the network. In one embodiment, the L2ME transaction is initiated by an EN, which may be any node connected to the network. For example, an EN may be a MoCA network node connected to a computer. The computer may be attached to the internet and run an application that communicates by means of a higher layer protocol interface. In this configuration, the computer may use the EN as a proxy (described in more detail below) to monitor the entire MoCA network through L2ME messaging in response to application-generated transactions within the computer.
Referring to FIG. 4, one example of an EN-initiated transaction is now described. FIG. 4 illustrates a block diagram of one example of an L2ME transaction 400 initiated by EN 402. Upon receiving a request from a higher-layer application, EN 402 generates and transmits a submit L2ME message to NC node 404. NC node 404 receives the submit message and initiates the first L2ME wave, L2ME wave 0, by broadcasting a request message with a header similar to that of the submit message received from EN 402. The request message is broadcast to each of the L2ME-capable nodes 406, 408, 410 specified by the WAVE_NODEMASK field contained in the payload. If the request is sent to a node without L2ME capability, the node simply ignores the message.
The request L2ME frame message is also sent to EN 402, for reasons now described. Upon receiving the request message, EN 402 validates the transaction by comparing the appropriate fields in the request header with the values it used in the submit header. If the values match, the transaction is processed. However, there may be instances where the L2ME transaction in the network is not the most recent transaction requested by EN 402. This situation occurs when the submit message transmitted by EN 402 is corrupted, not received by NC node 404, or not granted by NC node 404. If the initiated transaction is not the most recently requested L2ME transaction, EN 402 may cancel the transaction by setting the IN_CANCEL bit to "1" in its response. Upon receiving a response from EN 402 with the IN_CANCEL bit set to "1", NC node 404 will not issue further L2ME waves in this transaction, but may immediately initiate another L2ME transaction.
Assuming the L2ME transaction was not cancelled by EN 402, the requested L2ME-transaction-capable nodes send a response message to NC node 404 with a payload indicating whether they choose to participate in the next wave(s) of this transaction. If, for example, the transaction is a parameterized QoS transaction to create a new parameterized QoS flow and a node cannot support parameterized QoS flows, that node may choose not to participate in future waves. A node may choose to participate in the network transaction by setting the PARTICIPATE_NEXT_WAVE bit to "1" and may choose not to participate by setting the PARTICIPATE_NEXT_WAVE bit to "0". In a subsequent L2ME wave, NC node 404 typically generates the request L2ME frame payload by concatenating all responses from the previous wave as described above. NC node 404 then sends the request message to the nodes that requested participation in the current wave. Note that for some transaction embodiments, the NC node may generate a different, non-concatenated request message payload based on the received response payloads. The transaction continues until the NC node reaches the maximum number of waves specified in the submit L2ME message. Upon reaching the maximum number of waves in the transaction, NC node 404 issues the last wave, which includes a request L2ME frame message to EN 402.
However, if NC node 404 receives responses from all L2ME-capable nodes with their PARTICIPATE_NEXT_WAVE bit set to "0" and there is an EN 402, NC node 404 may skip the intermediate waves in the transaction and synthesize the appropriate request payload. If concatenation would otherwise be used to create the request payload, NC node 404 fills all entries of the directory with DIRECTORY_NODE_ID 0xFF, and the resulting request has its TRANS_WAVE_NUMBER set to the last wave.
In many L2ME transactions, NC node 404 requests EN 402 to provide a response to its request message only after all other nodes have responded. Completing the L2ME wave with this response ensures that the L2ME transaction has completed fully before EN 402 notifies its application that the transaction is complete. In other L2ME transactions, the transaction does not complete until NC node 404 sends a request to multiple nodes (including EN 402) and receives a response from each node.
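The EN-initiated flow of Fig. 4 can be condensed into the sketch below. It assumes hypothetical nc and en objects providing broadcast_request and concatenate methods, a dictionary-like response set, and a TRANSACTION_LAST_WAVE_NUMBER field in the submit message; error handling and the wave-cycle mechanics are omitted.

```python
# Condensed, illustrative sketch of the EN-initiated transaction flow in Fig. 4.
def run_en_transaction(nc, en, nodes, submit_msg):
    # Wave 0: NC rebroadcasts the submit as a request to the WAVE_NODEMASK nodes.
    responses = nc.broadcast_request(submit_msg, wave_number=0)
    if responses[en.node_id].in_cancel:        # EN did not recognize the transaction
        return None
    wave = 1
    while wave < submit_msg["TRANSACTION_LAST_WAVE_NUMBER"]:
        participants = [n for n in nodes if responses[n.node_id].participate_next_wave]
        request_payload = nc.concatenate(responses)   # directory + queued responses
        responses = nc.broadcast_request(request_payload, wave_number=wave,
                                         targets=participants)
        wave += 1
    # Last wave: the EN (and any node that asked to see the result) gets the outcome.
    return nc.broadcast_request(nc.concatenate(responses),
                                wave_number=wave, targets=[en])
```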
In some instances, the entire L2ME transaction may fail. Such circumstances arise, for example, when: (1) an L2ME wave cycle fails; (2) the number of executed L2ME waves in a given transaction is less than the expected total number of L2ME waves indicated in the TRANSACTION_LAST_WAVE_NUMBER field of the initiating submit L2ME message; and (3) the L2ME transaction was initiated by an EN. In one embodiment, if the L2ME transaction fails, NC node 404 issues a new L2ME wave referred to as a transaction failure wave. This wave signals that the transaction was terminated due to the failure of the previous L2ME wave. The transaction failure wave is initiated by NC node 404 sending a request L2ME frame header with its WAVE_STATUS field set to "4" and with the bit of the WAVE_NODEMASK corresponding to EN 402 set to "1", as defined in Table 2 above. Further, the request L2ME frame has a 0-length payload as described above. Upon receiving this request, EN 402 sends a response L2ME frame as shown in Table 4 above.
In another embodiment, NC node 404 may autonomously initiate an L2ME transaction to inform network nodes which other nodes have L2ME transaction capabilities. These NC-node-initiated transactions are typically conducted in a single wave and are designed to support network maintenance by providing interoperability with legacy or other compatible nodes. L2ME wave transactions initiated by an NC node typically have the following characteristics:
1. to define the wave duration, the NC node should include at least 3 nodes in the CYCLE_NODEMASK field;
2. if the NC node does not receive the expected response from a requested node within NC_TIMEOUT, the NC node assumes that the response is no longer pending;
3. the NC node may not request a node to retransmit its response until all other nodes have been asked to send their responses for the first time; and
4. any node that fails to provide a response to the second request within T21 causes an L2ME wave failure.
The WAVE_NODEMASK field indicates the set of nodes that NC node 404 identified as L2ME-transaction-enabled nodes. If a node is so identified by NC node 404, it completes the transaction with a 0-length response message in accordance with Table 5 below.
Table 5-L2ME enabled response frame format
Parameterized quality of service architecture
One embodiment of a network parameterized quality of service (pQoS) segment is now described. Note that a home network may include multiple pQoS segments, such as a MoCA coaxial network segment and an IEEE 802.11 segment. A pQoS segment may be any group of networked nodes sharing the same PHY and MAC layers that ensure that a flow entering the network at an ingress node will reach one or more egress nodes with a pQoS guarantee. A pQoS guarantee is a guarantee that at least a predetermined data rate will be provided for data traffic from an ingress node to an egress node. In one embodiment, each pQoS segment has its own ID, which is typically the MAC address of the NC node. An upper layer pQoS logical entity may be configured to specify how a flow is established across several pQoS segments. Note that all network nodes typically operate within the same pQoS segment.
Generally, networks can be divided into 3 categories: (1) legacy networks, such as networks without L2ME transactions or pQoS functionality; (2) networks with parameterized QoS enabled; and (3) networks with parameterized QoS disabled. Any network node operating in an L2ME-enabled network will behave as a legacy device if the node is operating in a network with other legacy devices. In one embodiment, each network node has both L2ME and pQoS capabilities.
In some embodiments, if any of the network nodes does not support pQoS, pQoS is disabled. For example, if a non-pQoS-capable node joins a pQoS-enabled network, the network discontinues supporting pQoS and stops creating new pQoS flows until all network nodes have pQoS capability. If a network node attempts to create a new flow, an error message is transmitted to the network node requesting the establishment of the new flow. Furthermore, existing pQoS flows are no longer guaranteed and their packets are treated as prioritized traffic or best-effort traffic.
However, if a non-pQoS-capable node leaves the network so that only pQoS-capable nodes remain, the network can upgrade and enable pQoS transport. Upon upgrading to pQoS, the prioritized flows remain prioritized flows until updated by the ingress node as described below.
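The enable/disable rule above reduces to a simple capability check; the following sketch uses hypothetical names and return values purely to illustrate the behavior described in the two preceding paragraphs.

```python
# Simple sketch of the rule above: the network runs pQoS only while every node
# is pQoS-capable; a legacy node joining downgrades the network, and its
# departure allows an upgrade.
def network_pqos_enabled(node_capabilities: dict) -> bool:
    """node_capabilities maps node ID -> True if the node supports pQoS."""
    return len(node_capabilities) > 0 and all(node_capabilities.values())

def on_flow_create_request(node_capabilities, create_flow):
    if not network_pqos_enabled(node_capabilities):
        # New pQoS flows are refused; existing flows fall back to prioritized
        # or best-effort handling.
        return {"status": "error", "reason": "pQoS disabled: legacy node present"}
    return create_flow()
```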
Referring to fig. 5, one embodiment of a pQoS architecture based on the L2ME architecture will now be described. The parameterized QoS network architecture 500 includes a network 502 having a plurality of nodes 504, 506, 508, 510. The network 502 may be a coordinated network, including a coaxial network, a mesh network, or a wireless network according to the MoCA standard. In one embodiment, each of the nodes 504, 506, 508, 510 has a PHY layer 512, a MAC sublayer 514, and an L2ME 516. In a UPnP quality of service environment, L2ME 516 interfaces with QoS device service 518. In a non-UPnP environment, the L2ME interfaces with an appropriate quality of service application entity (not shown) for quality of service management. L2ME 516 is also configured to adapt messages from higher layer applications to layer 2 compatible messages, as explained in more detail below.
The nodes 504, 506, 508, 510 are also configured with higher layer capabilities including QoS device service 518, QoS manager service 520, and QoS policy holder service 522. The QoS device service 518 receives action invocations from the QoS manager service 520 and reports the results of the actions to the QoS manager service 520. QoS device service 518 performs actions by itself or by utilizing lower layers via L2ME 516.
As shown in FIG. 5, node 504 is an entry node (EN) and node 506 is an NC node. Nodes 508 and 510 are the ingress and egress nodes, respectively. Note that in any network 502 there may be multiple egress nodes 510. Assume, for example, that an end-user application requires a particular bandwidth for a video stream from ingress node 508 to egress node 510. The traffic is generally shown as a unidirectional flow from the ingress node 508 to the egress node 510. The end-user application is generally aware of the ingress node 508, the egress node 510, and the streamed content. The end-user application may also be aware of the traffic specification (TSpec XML) of the content.
The TSpec XML may include various parameters that describe the bandwidth of the stream, packet size, latency, and loss tolerance. Some of the bandwidth parameters include an average data rate, a peak data rate, and a maximum burst size. The packet size parameter may specify minimum and maximum packet sizes, as well as a nominal packet size. The latency parameters include maximum delay variation, and maximum and minimum service intervals.
In a pQoS environment, and as shown in FIG. 6, L2ME 606 is adapted to convert TSpec XML to a layer 2 proprietary QSpec. L2ME 606 may derive the QSpec from the TSpec XML by simply using the TSpec XML as the QSpec, by selecting some parameters of the TSpec XML for the QSpec and ignoring other parameters, or by selecting some parameters of the TSpec XML and converting them to the QSpec format. Some of the QSpec parameters may include service type, peak data rate, average data rate, and maximum, minimum, and nominal packet size.
The end-user application constructs the traffic descriptor and requests the QoS manager 520 to set up the required QoS resources for the requested flow. The traffic descriptor may include a traffic ID that defines the source and sink information, the TSpec XML, and other related information for the video stream to be given parameterized QoS. QoS manager 520, acting on behalf of the end-user application, requests the QoS policy holder service 522 to provide the appropriate policy for the requested video stream as described by the traffic descriptor. The QoS policy holder service 522, which is a repository of QoS policies for the network 502, provides the QoS manager 520 with the appropriate policy for the requested video stream. This policy may be used to set the relative importance of the traffic flow. The user importance number is used to ensure that the traffic that is most important to the user receives a corresponding priority for network resources. Based on this policy, the QoS manager 520 configures the QoS device service 518 so that the ingress node 508 and the egress node 510 can process the new video traffic. Note that the QoS policy holder 522 and QoS manager 520 services may reside on any node 504, 506, 508, 510 or on another parameterized QoS segment.
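The setup sequence just described can be sketched end to end as follows. All object and method names here are hypothetical stand-ins for the UPnP QoS services named above; the sketch only reflects the order of interactions in the preceding paragraph.

```python
# Hypothetical end-to-end sketch of the QoS setup sequence: the application
# builds a traffic descriptor, the QoS manager fetches a policy from the policy
# holder, then configures the QoS device services on the ingress/egress nodes.
def setup_parameterized_flow(app, qos_manager, policy_holder, device_services):
    descriptor = app.build_traffic_descriptor()        # traffic ID, TSpec XML, source/sink
    policy = policy_holder.get_policy(descriptor)      # relative user importance
    ingress, egress = qos_manager.resolve_path(descriptor, device_services)
    for service in (ingress, egress):
        service.configure(descriptor, policy)          # nodes prepare for the new flow
    return descriptor["traffic_id"]
```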
Cost of pQoS flows
Before admitting or updating a pQoS flow in the network, NC node 506 must decide whether a particular flow request can be granted, e.g., whether sufficient network resources are available. The NC node 506 decides whether a flow should be admitted by first determining the cost of the pQoS flow. The flow cost (CF) is a measure of the specific bandwidth required to support a given pQoS flow and is expressed in multiples of slot_time (the slot time is a unit of measure equal to 20 ns). Note that in one embodiment the basic bandwidth unit is a time slot rather than a transmission rate (e.g., megabits/second). However, in an alternative embodiment, the CF is provided as a transmission rate.
For each create or update pQoS flow transaction (defined in more detail below), the CF may be computed periodically by the ingress node 508. The NC node 506 may use this CF calculation to decide whether to allow the requested pQoS flow in the network. The CF (in multiples of slot_time per second) can be calculated as follows:
CF = N_pps × max( T_min, ⌈(8 × L_p) / OFDM_b⌉ × (T_FFT + T_CP) + T_IFG + L_PRE )    formula (1)
where:
Table 6 - Parameter list for formula (1)
N_pps - Total number of packets transmitted per second
T_min - Minimum packet-size transmission time
L_p - Packet length (bytes), including RS padding
OFDM_b - Bits per OFDM symbol, based on the PHY profile used for the transmission
T_CP - Cyclic prefix length (multiple of slot_time)
T_FFT - IFFT/FFT period (multiple of slot_time)
T_IFG - IFG period (multiple of slot_time)
L_PRE - Preamble length per packet (multiple of slot_time)
Here ⌈(8 × L_p) / OFDM_b⌉, the number of Orthogonal Frequency Division Multiplexing (OFDM) symbols per packet rounded up to an integer, is multiplied by the OFDM symbol length (T_FFT + T_CP, in multiples of slot_time). Note that the length of the OFDM symbol depends on the network channel characteristics. The preamble length L_PRE and the interframe gap (IFG) period are added per packet, and the per-packet time is multiplied by the total number of packets transmitted per second N_pps, the latter given by the peak data rate divided by the nominal packet size. The cost of all existing flows N (without packet aggregation) per ingress node 508 is given by:
Σ(i = 1..N) CF_i    formula (2)
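Formulas (1) and (2) translate directly into code. The sketch below uses the Table 6 parameter names; since formula (1) is rendered here from its textual description, treat this as an approximation rather than the normative cost rule, and the function names are illustrative.

```python
# Sketch of the flow-cost computation of formulas (1) and (2).
import math

def flow_cost(n_pps, l_p_bytes, ofdm_b, t_fft, t_cp, t_ifg, l_pre, t_min):
    """All time parameters are in multiples of slot_time (20 ns)."""
    symbols_per_packet = math.ceil(8 * l_p_bytes / ofdm_b)
    packet_airtime = symbols_per_packet * (t_fft + t_cp) + t_ifg + l_pre
    return n_pps * max(packet_airtime, t_min)     # formula (1), slot_times per second

def ingress_cost(existing_flow_costs):
    return sum(existing_flow_costs)               # formula (2)
```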
In order for the ingress node 508 to accept the new flow, its maximum available bandwidth must be greater than or equal to the cost of the current flows plus the new flow. This condition is given by:
BW_ingress ≥ Σ(i = 1..N) CF_i + CF_new    formula (3)
Once the ingress node has accepted the new pQoS flow, the NC node 506 determines whether the cost of all aggregated pQoS flows across all nodes, including the cost of the new pQoS flow, is less than or equal to the maximum total available network bandwidth. Assuming there are M nodes in the network, the total available network bandwidth must satisfy the following condition:
Σ(j = 1..M) Σ(i = 1..N_j) CF_i,j + CF_new ≤ BW_NC    formula (4)
where BW_NC is the total network bandwidth. In some embodiments, the total available bandwidth for pQoS services in the network is 80% of the total network bandwidth minus the cost of all overhead, which may include all link control packets, reservation requests, admission control, and probes. If equation (4) is true, NC node 506 admits the new pQoS flow to the network. If equation (4) is not true, NC node 506 denies the flow request and returns the available flow bandwidth (AFBQ), given by:
AFBQ = BW_NC − Σ(j = 1..M) Σ(i = 1..N_j) CF_i,j    formula (5)
In order for the NC node to accept the new flow, the node capacity of each of the ingress node 508 and the egress node 510 must also be greater than or equal to the cost of the existing and new flows through that node. This condition is given by:
NODE_CAPACITY ≥ Σ(existing flows through the node) CF_i + CF_new    formula (6)
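The admission checks of formulas (3) through (6), and the AFBQ report of formula (5), combine into a single decision. The sketch below follows the formulas as reconstructed above; the function signature and data structures are assumptions, and per_node inputs are meant to cover the ingress and egress node(s) of the new flow.

```python
# Admission sketch: the ingress node, every involved node, and the network as a
# whole must each have headroom for the new flow; otherwise the NC rejects it
# and reports the available flow bandwidth (AFBQ).
def admit_flow(new_cf, ingress_costs, ingress_bw,
               per_node_costs, per_node_capacity, all_costs, bw_nc):
    afbq = bw_nc - sum(all_costs)                            # formula (5)
    if sum(ingress_costs) + new_cf > ingress_bw:             # formula (3)
        return False, afbq
    if sum(all_costs) + new_cf > bw_nc:                      # formula (4)
        return False, afbq
    for node, costs in per_node_costs.items():
        if sum(costs) + new_cf > per_node_capacity[node]:    # formula (6)
            return False, afbq
    return True, None
```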
The remaining node capacity (REMAINING_NODE_CAPACITY) is the difference between the left and right sides of equation (6) and is one of the bandwidth-related criteria used by the NC node 506 before granting a particular stream creation or update. Since the most basic bandwidth requirement of a pQoS flow is the number of slots required per cycle (e.g., 1 ms), and a simple mapping between bandwidth values in megabits/second and the number of slots on the data link layer is not straightforward due to OFDM modulation and bit loading, a conversion is needed to determine the number of packets required for the flow. To find the equivalent maximum number of packets in a data link cycle and the size of each packet (in bits), the worst-case bandwidth of the flow on the data link layer per cycle is bounded as follows:
QSpec_peak_data_rate ≤ maximum number of packets × QSpec_maximum_packet_size    equation (4)
where:
the QSpec_peak_data_rate, and its conversion to time slots, is the data link layer bandwidth reserved by the NC for the stream;
the maximum number of packets is the number of packets per 1 ms cycle, derived from the TSpec_peak_data_rate over 1 ms and the QSpec_maximum_packet_size;
QSpec_maximum_packet_size = TSpec_maximum_packet_size + Ethernet packet overhead;
the TSpec_peak_data_rate over 1 ms is calculated from a TSpec_peak_data_rate whose time unit is not 1 ms.
The time unit parameter allows the token bucket TSpec XML for a live traffic source to be specified so as to match its traffic generation process. The time unit parameter also provides a convenient and flexible way to extract a token bucket TSpec XML from pre-recorded or legacy content regardless of whether transport information is available. For example, for MPEG-encoded video content without transport information, the peak data rate may be specified as the maximum number of bits within a video frame divided by the video frame duration. In this case, the time unit is the video frame interval determined by the video frame rate. If the media is PCM audio, for example, the time unit may be equal to the inverse of its sample rate. For content carrying transport information such as RTP, the RTP timestamp resolution, 90 kHz by default, is typically used to specify the TSpec XML. It is not uncommon for the time unit in a TSpec XML to differ from the time unit determined by the operating clock rate of the underlying link used to transport the traffic, so it may be necessary to convert a token bucket TSpec XML specified in a different time unit.
By the definition of the peak data rate in the token bucket model, the maximum number of bits generated over any interval [t0, t1] by a traffic source with characteristics (r, b, p) will not exceed p(t1 − t0) for any t1 − t0 ≥ TU_TSPEC. Therefore, the maximum or peak data rate measured over any interval [t0, t1] does not exceed p(t1 − t0) / (t1 − t0) = p.
Similarly, by the definition of the maximum burst size in the token bucket model, the maximum number of bits generated over any interval [t0, t1] by a traffic source with characteristics (r, b, p) will not exceed r(t1 − t0) + b for any t1 − t0 ≥ TU_TSPEC. The maximum or peak data rate measured over any interval [t0, t1] therefore does not exceed (r(t1 − t0) + b) / (t1 − t0) = r + b / (t1 − t0).
Thus, combining the two constraints above, for the time unit TU_oper (> TU_TSPEC) determined by the operating clock rate c_oper, the peak data rate measured at the operating clock rate c_oper (denoted p_oper) is given by:
p_oper = min(p, r + b / (t1 − t0)) = min(p, r + b / TU_oper) = min(p, r + b × c_oper)    formula (6)
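The peak-rate conversion above is easy to evaluate numerically. The following sketch and the example values (4 Mb/s average, 100 kb burst, 20 Mb/s peak, 1 ms operating time unit) are illustrative only.

```python
# Worked sketch of the conversion above: for a token bucket (r, b, p), the peak
# rate observable over the operating time unit TU_oper = 1 / c_oper is capped by
# min(p, r + b / TU_oper) = min(p, r + b * c_oper).
def operating_peak_rate(r_bps, b_bits, p_bps, c_oper_hz):
    tu_oper = 1.0 / c_oper_hz
    return min(p_bps, r_bps + b_bits / tu_oper)

# Example: r = 4 Mb/s, b = 100 kb, p = 20 Mb/s, 1 ms time unit (c_oper = 1000 Hz)
print(operating_peak_rate(4e6, 1e5, 20e6, 1000))   # 20000000.0 (capped by p)
```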
Parameterized QoS flow guarantees
A pQoS flow guarantee means that a pQoS-enabled network can support the flow without the CF exceeding the available network bandwidth. This means that a new pQoS flow will not be admitted into the network unless the peak data rate divided by the nominal packet size (N_pps) of the flow can be supported at any given time. Note that either the ingress node 508 or the NC node 506 may permit transient bursts of the incoming peak packet rate of the flow to exceed the peak data rate/nominal packet size that the network can support.
In one embodiment, NC node 506 may guarantee that a portion of the total bandwidth is set aside for prioritized traffic, while the remainder is used for parameterized traffic. For example, NC node 506 may set aside 20% of the total network bandwidth for prioritized traffic, while the remaining 80% of the bandwidth is set aside for parameterized traffic. Parameterized traffic includes isochronous traffic and asynchronous data traffic. Isochronous traffic, such as video traffic, requires knowledge of the average data rate of the traffic. Thus, QoS manager 520 can request admission or obtain information regarding bandwidth availability for isochronous traffic. If parameterized bandwidth is not available due to heavy network loading, the traffic will not be admitted, and the source (ingress node 508) may then attempt to send the traffic as asynchronous data traffic. The QSpec of isochronous traffic includes a service type parameter and a maximum packet size parameter.
Asynchronous data traffic, such as file transfers, is traffic for which there is no required or predictable bandwidth. Asynchronous data traffic may also include best-effort traffic, e.g., traffic without a VLAN tag indicating its priority. In one embodiment, best-effort traffic does not go through the admission process described below. Network control and flow management traffic is generally treated as prioritized traffic. However, in certain applications where short and predictable latencies are required, network control and stream management traffic may be structured to use parameterized stream bandwidth (e.g., pull-mode DVR playback, or DTCP localization constraints where the round-trip time of the management exchange is limited to 7 ms). Alternatively, network control and flow management traffic may be treated as high-priority asynchronous traffic. When treated as high-priority asynchronous traffic, the bandwidth set aside for prioritized traffic should be greater than the bandwidth needed for network management and flow management traffic so that these management messages can be sent in a timely manner.
When requesting bandwidth for a pQoS flow, all nodes may set the priority field in the data/control reservation request element frame to 0x3, as shown in Table 7 below. The NC node 506 coordinates the scheduling of the various flows within the network 502. The network supports 3 priority levels: (1) high priority, including network and traffic management; (2) medium priority, including asynchronous traffic; and (3) low priority, for asynchronous traffic without a priority tag, such as best-effort traffic. In scheduling flows, NC node 506 schedules pQoS flows on a first-in, first-out basis. In one embodiment, these pQoS flows are scheduled before any non-pQoS flows.
Table 7-data/control reservation request element frame format with revised priority field
FRAME_SUBTYPE (4 bits) - If FRAME_TYPE = control: 0x0 = Type I/III probe report; 0x1 = reserved; 0x2 = reserved; 0x3 = key distribution; 0x4 = dynamic key distribution; 0x5 = Type I/III probe report request; 0x6 = link acknowledgement; 0x7 = reserved; 0x8 = periodic link packet; 0x9 = power control; 0xA = power control response; 0xB = power control acknowledgement; 0xC = power control update; 0xD = topology update; 0xE = unicast MAC address notification; 0xF = reservation. If FRAME_TYPE = Ethernet transmission: 0x0 = Ethernet packet.
FRAME_TYPE (4 bits) - 0x2 = control; 0x3 = Ethernet transmission.
DESTINATION (8 bits) - Node ID of the destination node.
PHY_PROFILE (8 bits) - Bits 7:6 indicate the profile sequence (00 = profile sequence 0, 01 = profile sequence 1); bits 5:0 indicate the modulation profile used for the transmission: 0x2 = diversity mode profile, 0x7 = unicast profile, 0x8 = broadcast profile; all other values are reserved.
REQUEST_ID (8 bits) - A sequence number associated with the request.
PARAMETERS (12 bits) - Reserved.
PRIORITY (4 bits) - If FRAME_TYPE = control: 0x0. If FRAME_TYPE = Ethernet transmission: 0x0 = low priority; 0x1 = medium priority; 0x2 = high priority; 0x3 = parameterized quality of service flow.
DURATION (16 bits) - Required transmission time, in multiples of slot_time.
Some pQoS flows may be variable bit rate (VBR) flows. Since the peak data rate of a VBR stream is greater than its average rate, and the stream uses its average rate over a long period of time, a significant portion of the parameterized stream bandwidth may go unused by the stream. To maximize bandwidth utilization, unused bandwidth of a VBR stream is made available to asynchronous traffic. Thus, the actual asynchronous bandwidth generally has two components: (1) the predetermined portion set aside for asynchronous traffic, and (2) a portion that is reclaimed from the parameterized stream bandwidth.
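The two-component split can be expressed as a toy calculation. The function and field names are assumptions; the 20%/80% split is the example figure given earlier in this section.

```python
# Toy sketch of the bandwidth split described above: asynchronous traffic gets
# its provisioned share plus whatever the VBR parameterized flows are not using.
def asynchronous_bandwidth(total_bw, pqos_flows, async_share=0.20):
    provisioned_async = async_share * total_bw
    reserved_param = sum(f["peak_rate"] for f in pqos_flows)
    in_use_param = sum(f["current_rate"] for f in pqos_flows)
    reclaimed = max(0.0, reserved_param - in_use_param)
    return provisioned_async + reclaimed
```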
Parameterized QoS transactions
In the embodiment shown in FIG. 5, a parameterized QoS transaction may be initiated by either the NC node 506 or the entry node (EN) 504. An EN-initiated transaction typically includes two pQoS waves and is typically initiated with a submit message sent unicast to the NC node 506. It is important to note that the input to the EN 504 may come from another pQoS segment outside the network 502. Upon receipt of the submit message, the NC node 506 typically begins the first wave by broadcasting a request message to all network nodes 504, 508, 510, asking them to return information about a particular pQoS flow. In the second wave, the NC node 506 typically broadcasts the information gathered from the responses received from the network nodes in the first wave.
In contrast, a pQoS transaction initiated by the NC node 506 typically includes only a single pQoS wave. The wave is initiated by the NC node 506 broadcasting a request message to all nodes 504, 508, 510, requesting that a particular action take place. The wave is complete when the NC node 506 has received a response from each of the requested network nodes 504, 508, 510.
Each of the supported pQoS flows may be transmitted as a unicast or broadcast flow. Note that multicast flows within the network are typically handled as broadcast flows with an egress node ID of 0x3F. A broadcast flow is a pQoS flow transmitted to all network nodes in the network. If the ingress node 508 or egress node 510 disconnects from the network 502, the NC node 506 may delete a unicast flow. In contrast, for network topology reasons, broadcast flows are typically not deleted except when the ingress node 508 disconnects from the network.
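The deletion rule for unicast versus broadcast flows can be summarized by the following sketch; the flow record layout is an assumption for illustration and is not a defined data structure.

```python
BROADCAST_EGRESS_ID = 0x3F  # multicast flows are handled as broadcast flows

def flows_to_delete(flows, disconnected_node_id):
    """flows: iterable of dicts with 'flow_id', 'ingress' and 'egress' keys.

    A unicast flow is deleted when its ingress or egress node leaves the
    network; a broadcast flow (egress node ID 0x3F) is deleted only when
    its ingress node leaves.
    """
    doomed = []
    for flow in flows:
        if flow["egress"] == BROADCAST_EGRESS_ID:
            if flow["ingress"] == disconnected_node_id:
                doomed.append(flow["flow_id"])
        elif disconnected_node_id in (flow["ingress"], flow["egress"]):
            doomed.append(flow["flow_id"])
    return doomed
```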
Creating and updating parameterized QoS flow transactions
With reference to FIG. 7, one example of a create/update transaction in accordance with the embodiment shown in FIG. 5 will now be described. The purpose of a create or update transaction is to create a new pQoS flow, or update the attributes of an existing pQoS flow, between the ingress node 508 and the egress node 510 shown in FIG. 5. Initially, the QoS manager 520 receives from the QoS device service 518 the IP and MAC addresses of both the ingress node 508 and the egress node 510 and of the QoS devices on the flow path. The QoS manager 520 then determines the path for the pQoS flow by comparing the reachable MAC values reported by the QoS device service 518 with the MAC addresses of the ingress node 508 and egress node 510 until a path is found.
TABLE 8-submit L2ME header and payload format for create/update
A pQoS flow may be identified by a flow ID. In one embodiment, the flow ID is the multicast destination MAC Ethernet address onto which the packets of the pQoS flow are routed. The tag-value (TV) field holds a maximum of 24 different pQoS entries. Each pQoS TV entry includes an 8-bit tag field followed by a 24-bit tag value field. Table 9 shows an example of a pQoS tag list of TV entries (an illustrative encoding sketch follows Table 9). Note that the tag "0" indicates the end of the current TV list, and any subsequent TV entries may be ignored. A peak_data_rate value outside the range 0-0xFFFFFE may be interpreted as a special case used to query the available bandwidth without creating a flow. The lease_time field indicates a duration after which the ingress node 508 (shown in FIG. 5) may stop treating the associated traffic as a pQoS flow and release the resources associated with the flow.
TABLE 9-defined tags for TV entries
Tag name | Tag # | Tag value description
TV list end | 0 | Ignored
Peak_data_rate | 2 | 0-0xFFFFFE: peak data rate (kb/s); 0xFFFFFF: used for queries only, does not create or update a pQoS flow
Nominal_packet_size | 9 | Nominal packet size (bytes) - required
Lease_time | 20 | Lease time (seconds) - optional (default = 0: permanent)
Reserved | All others | Reserved for future use; MoCA 1.1 nodes ignore
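Each TV entry is an 8-bit tag followed by a 24-bit value. The sketch below packs the Table 9 tags into that layout; the helper names and the particular TSpec values are assumptions used only to illustrate the encoding.

```python
import struct

TAG_TV_LIST_END = 0
TAG_PEAK_DATA_RATE = 2        # kb/s; 0xFFFFFF = query only
TAG_NOMINAL_PACKET_SIZE = 9   # bytes; required
TAG_LEASE_TIME = 20           # seconds; 0 = permanent

def pack_tv_entry(tag, value):
    """Pack one TV entry as an 8-bit tag followed by a 24-bit tag value."""
    if not 0 <= value <= 0xFFFFFF:
        raise ValueError("tag value must fit in 24 bits")
    return struct.pack(">I", (tag << 24) | value)

def pack_tspec(peak_data_rate_kbps, nominal_packet_size, lease_time=0):
    """Build an illustrative TV list for a create/update submit payload."""
    return b"".join([
        pack_tv_entry(TAG_PEAK_DATA_RATE, peak_data_rate_kbps),
        pack_tv_entry(TAG_NOMINAL_PACKET_SIZE, nominal_packet_size),
        pack_tv_entry(TAG_LEASE_TIME, lease_time),
        pack_tv_entry(TAG_TV_LIST_END, 0),
    ])

print(pack_tspec(19000, 1316, lease_time=300).hex())
```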
In one embodiment, a pQoS transaction is initiated when the NC node 506 receives a submit message from the EN 704. Note that the EN 704 may send the submit message in response to a higher-layer application, such as the QoS device service 518 (shown in FIG. 5). Upon receipt of the submit message from the EN 704, the NC node 706 transmits a request message to all nodes 704, 708 connected to the network, thereby initiating the first wave (wave 0) 710. The first wave 710 is used to inform all network nodes 704, 708 of the proposed create or update transaction on the pQoS flow and to collect metrics from these nodes on their current flow allocations.
Upon receipt of the request L2ME message, the ingress node 508 and egress node 510 (both shown in FIG. 5) use the TSpec XML values to calculate the time slots required for the flow and the resources required from each node, such as system bus bandwidth and memory. Each requested node may respond to the NC node 706 with a response L2ME frame indicating the aggregate cost of its existing pQoS flows, completing the first L2ME wave. Note that if a node receives a request L2ME frame and is not involved in the flow, it simply ignores the message. One example of a response message format for the create/update transaction is specified in Table 10 below. Note that if the NC node 706 does not receive a response L2ME frame from either the ingress node 508 or the egress node 510 within a given time interval, the NC node 706 will rebroadcast the request L2ME message up to several times, e.g., three times, before treating the request as failed.
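The rebroadcast behavior described above might look like the following sketch, where broadcast_request and collect_responses stand in for the actual L2ME messaging; both callables, the retry count constant, and the timeout handling are assumptions for illustration.

```python
MAX_REQUEST_RETRIES = 3  # e.g., rebroadcast up to three times

def run_wave0(broadcast_request, collect_responses, required_nodes, timeout_s):
    """Broadcast the wave 0 request and wait for responses from the ingress
    and egress nodes, rebroadcasting on timeout before declaring failure.

    required_nodes: set of node IDs that must answer (ingress and egress).
    collect_responses(timeout_s) -> dict mapping node ID to response payload.
    """
    for _attempt in range(1 + MAX_REQUEST_RETRIES):
        broadcast_request()
        responses = collect_responses(timeout_s)
        if required_nodes <= set(responses):
            return responses
    raise TimeoutError("create/update transaction failed: missing responses")
```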
Table 10-response for create/update L2ME message format (wave 0)
Each requested node 704, 708 generates the response L2ME frame payload by calculating the existing_TPS value for all of its existing flows, excluding the new or updated flow for which the node is the ingress node. This value is calculated by applying equation (1) to each flow. The nodes 704, 708 also calculate the existing_PPS value for all existing flows, again excluding the new or updated flow. The existing_PPS value is the sum of peak_data_rate/nominal_packet_size over these flows. Further, the nodes 704, 708 calculate the cost_TPS value, the cost in slot_times per second of the new or updated flow, according to equation (1); a value corresponding to peak_data_rate = 0 is excluded. If there is an ingress or egress node limit on flow throughput (bits/second), the node 704, 708 calculates its remaining node capacity in bits/second (remaining_node_capacity) and uses the veto_code field to identify the cause (the capacity limitation of the node). An example of the format of the response L2ME frame for wave 1 is shown in Table 15 below.
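The per-node wave 0 arithmetic can be sketched as follows. Equation (1) appears earlier in the document and is represented here by a placeholder whose body is an assumption; the flow dictionaries and unit conversions are likewise illustrative.

```python
def slot_time_cost_per_second(peak_data_rate_kbps, nominal_packet_size_bytes):
    """Placeholder for equation (1): the cost of a flow in SLOT_TIMEs per
    second. The body below (one slot per packet) is an assumption used only
    so the sketch runs; the real cost follows equation (1)."""
    packets_per_second = (peak_data_rate_kbps * 1000) / (nominal_packet_size_bytes * 8)
    return packets_per_second

def wave0_metrics(existing_flows, new_flow):
    """existing_flows excludes the new or updated flow; each flow is a dict
    with peak_data_rate (kb/s) and nominal_packet_size (bytes)."""
    existing_tps = sum(slot_time_cost_per_second(f["peak_data_rate"],
                                                 f["nominal_packet_size"])
                       for f in existing_flows)
    existing_pps = sum(f["peak_data_rate"] * 1000 / (f["nominal_packet_size"] * 8)
                       for f in existing_flows)
    cost_tps = 0.0
    if new_flow["peak_data_rate"] != 0:  # peak_data_rate == 0 is excluded
        cost_tps = slot_time_cost_per_second(new_flow["peak_data_rate"],
                                             new_flow["nominal_packet_size"])
    return existing_tps, existing_pps, cost_tps
```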
There are several scenarios in which a node may be unable to fulfill a request issued by the NC node 706. In these cases, the node issues a veto_code. Examples of acceptable veto_code values are shown in Table 11 below. Veto_code_invalid_TV is issued if one or more of the following statements about the received TV set is true (a small validation sketch follows the list):
1. peak_data_rate is not present.
2. nominal_packet_size is not present.
3. The nominal_packet_size value is < 64 bytes or > 1518 bytes.
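A minimal sketch of the three checks above; the dictionary of parsed TV entries is an assumed representation, and the numeric value 8 is the Veto_code_invalid_TV entry from Table 11 below.

```python
VETO_CODE_INVALID_TV = 8

def validate_tv_entries(entries):
    """entries: dict of parsed TV values keyed by tag name.
    Returns VETO_CODE_INVALID_TV if any of the three conditions holds,
    otherwise None."""
    peak = entries.get("peak_data_rate")
    size = entries.get("nominal_packet_size")
    if peak is None:                 # 1. peak_data_rate is not present
        return VETO_CODE_INVALID_TV
    if size is None:                 # 2. nominal_packet_size is not present
        return VETO_CODE_INVALID_TV
    if size < 64 or size > 1518:     # 3. outside the 64..1518 byte range
        return VETO_CODE_INVALID_TV
    return None
```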
Table 11 - List of acceptable veto_code values
Veto code name | Value | Description
Veto_code_ingress_OK | 1 | The node is the ingress node (create and update flow)
Veto_code_non_ingress_OK | 2 | The node is not an ingress node and does not object to the flow creation or update
Veto_code_flow_exists | 3 | The flow already exists on the node; the node vetoes creation of the same flow (create flow only)
Veto_code_insufficient_ingress_BW | 4 | The ingress node has a bandwidth (BW) limitation that prevents creating the flow as specified (create and update flow)
Veto_code_insufficient_egress_BW | 5 | Reserved
Veto_code_too_many_flows | 6 | The node already has too many existing flows (create flow only)
Veto_code_invalid_flow_ID | 7 | The requested flow ID cannot be used by the ingress node as a quality of service flow ID (create flow only)
Veto_code_invalid_TV | 8 | One or more of the received TV entries is invalid (see the conditions listed above)
Veto_code_invalid_node_ID | 9 | The node ID became invalid during network operation
Veto_code_lease_expired | 10 | Update only
Before the NC node 706 can initiate the second wave (wave 1) 712, it needs to determine whether the result of the create or update transaction is (1) denied because a node provided a non-bandwidth-related reason for rejecting the requested flow, (2) denied because of a bandwidth limitation, or (3) allowed, in which case the flow resources are committed as requested.
Table 12 - Non-bandwidth-related veto codes and corresponding fallback reasons
Veto code name | Fallback reason name
Veto_code_flow_exists | Fallback_reason_flow_exists
Veto_code_too_many_flows | Fallback_reason_too_many_flows
Veto_code_invalid_flow_ID | Fallback_reason_invalid_flow_ID
Veto_code_invalid_TV | Fallback_reason_invalid_TV
Veto_code_invalid_node_ID | Fallback_reason_invalid_node_ID
Veto_code_lease_expired | Fallback_reason_lease_expired
If any node returns one of the veto_codes listed in Table 12 above, the wave 1 request contains the corresponding fallback_reason. If no node returns Veto_code_ingress_OK, the wave 1 request contains Fallback_reason_flow_not_found, as shown in Table 14 below.
Before granting a particular flow creation or update, the NC node 706 evaluates and ensures that the following three bandwidth-related criteria are met:
1. Aggregate TPS - the sum of the existing_TPS and cost_TPS values from all nodes must be less than QoS_TPS_MAX.
2. Aggregate PPS - the sum of the existing_PPS and N_pps values from all nodes must be less than QoS_PPS_MAX.
3. Ingress or egress node capacity - the remaining_node_capacity value returned by the ingress or egress node must be greater than or equal to the requested flow's peak_data_rate.
If the NC node 706 determines that the requested flow resources can be committed for the create or update transaction, it sends a request L2ME frame with a zero-length payload to the participating nodes in wave 1 to commit the requested resources.
If any of these bandwidth-related criteria fails, the NC node 706 may calculate a maximum peak data rate (max_peak_data_rate, also called threshold_BPS) value in the payload of the request frame. The max_peak_data_rate is the maximum allowable flow peak_data_rate (bits/second) for which the request could succeed. The NC node 706 may also indicate the most restrictive criterion by selecting one of the following fallback reasons (a decision sketch follows the list):
1. Fallback_reason_insufficient_ingress_BW,
2. Fallback_reason_insufficient_egress_BW,
3. Fallback_reason_insufficient_aggregate_BW,
4. Fallback_reason_insufficient_aggregate_PPS.
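The NC's bandwidth decision can be sketched as follows. The limit constants, the response dictionaries, and the way the most restrictive reason is chosen are assumptions for illustration; the actual limits (QoS_TPS_MAX, QoS_PPS_MAX) are network parameters not defined here.

```python
QOS_TPS_MAX = 180000.0   # placeholder limit, in SLOT_TIMEs per second
QOS_PPS_MAX = 20000.0    # placeholder limit, in packets per second

def admission_decision(responses, requested_peak_bps):
    """responses: list of dicts holding existing_tps, existing_pps, n_pps and
    cost_tps from each node, plus remaining_node_capacity (bits/s) from the
    ingress and egress nodes. Returns ('commit', None) on success, otherwise
    ('reject', fallback_reason)."""
    total_tps = sum(r["existing_tps"] + r["cost_tps"] for r in responses)
    total_pps = sum(r["existing_pps"] + r["n_pps"] for r in responses)
    node_capacity = min((r["remaining_node_capacity"] for r in responses
                         if "remaining_node_capacity" in r),
                        default=float("inf"))

    if total_tps >= QOS_TPS_MAX:
        return "reject", "Fallback_reason_insufficient_aggregate_BW"
    if total_pps >= QOS_PPS_MAX:
        return "reject", "Fallback_reason_insufficient_aggregate_PPS"
    if node_capacity < requested_peak_bps:
        return "reject", "Fallback_reason_insufficient_ingress_BW"  # or egress
    return "commit", None  # NC then sends the zero-length wave 1 request
```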
The second wave 712 informs the nodes of the decision on the flow create or update transaction. If the create or update transaction failed in the first wave 710, the NC node 706 may send the request L2ME frame for the second wave 712 according to Table 13 below, where the threshold_BPS value is defined only for the four fallback reasons listed above. Note that if an update transaction fails, the existing pQoS flow continues to adhere to its current TSpec XML parameters.
Table 13-request for failed create/update L2ME frame payload (wave 1)
Table 14 - List of acceptable fallback_reason values
Fallback reason name | Value | Description
Fallback_reason_flow_exists | 3 | The create transaction fails because the node already acts as the ingress for the specified flow
Fallback_reason_insufficient_ingress_BW | 4 | The flow cannot be created due to insufficient bandwidth on the ingress node data path; the NC provides the maximum feasible data bandwidth
Fallback_reason_insufficient_egress_BW | 5 | The flow cannot be created due to insufficient bandwidth on the egress node data path; the NC provides the maximum feasible data bandwidth
Fallback_reason_too_many_flows | 6 | The ingress or egress node cannot accommodate an additional flow
Fallback_reason_invalid_flow_ID | 7 | The requested flow ID is reserved by a node
Fallback_reason_invalid_TV | 8 | A node cannot accept the received TV entries
Fallback_reason_invalid_node_ID | 9 | The node ID became invalid during network operation
Fallback_reason_lease_expired | 10 | The update transaction fails because the flow has been deleted from the network
Fallback_reason_flow_not_found | 16 | Update transaction failure
Fallback_reason_insufficient_aggregate_TPS | 17 | Insufficient flow bandwidth on the MoCA(TM) network
Fallback_reason_insufficient_aggregate_PPS | 18 | Insufficient packets/second on the MoCA(TM) network
When a zero-length request for a successful create transaction is received in the second wave 712, the ingress node 504 of the flow (shown in FIG. 5) may commit the requested resources. Each node 704, 708 may respond with a response message, an example format of which is shown in Table 15 below. In the last wave 814, wave 2, the NC 806 notifies the EN 804 of the result of the create/update transaction.
Table 15-response for create/update L2ME message format (wave 1)
Deleting parameterized quality of service flow transactions
The purpose of the delete QoS flow transaction is to tear down a particular pQoS flow between a given set of ingress 508 and egress 510 nodes (shown in FIG. 5). Referring to FIG. 8, an example of a delete pQoS transaction 800 in accordance with the embodiment shown in FIG. 5 will now be described. The delete pQoS flow transaction 800 includes three L2ME waves 810, 812, 814. The transaction begins when the EN 804 sends a submit message to the NC node 806 specifying the flow ID to be deleted. An example of the delete message format is shown in Table 16 below.
TABLE 16-commit L2ME message format for delete transactions
The first wave (wave 0) 810 of the delete transaction 800 informs all network nodes 804, 808 which pQoS flow and associated resources are to be deleted. The NC node 806 initiates the first wave 810 to all nodes 804, 808 using a request message format based on the submit message. The nodes 804, 808 may respond with a response message indicating whether they hold resources associated with the flow to be deleted.
Table 17-response for delete L2ME header and payload (wave 0)
During the second wave 812, wave 1, the flow resources are deleted. The NC node 806 initiates the second wave 812 using the request message format, with the concatenated responses from the first wave 810. One example of the response message format used in the second wave 812, wave 1, is shown in Table 18 below. Each node 804, 808 responds with a response frame in the second wave 812, indicating flow deletion by setting bit 31 of the DELETED field in the payload portion of the frame.
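Setting bit 31 of the DELETED field, as described above, is a single bit operation; the helper below is written out only to make the bit position explicit.

```python
DELETED_BIT = 1 << 31  # bit 31 of the 32-bit DELETED field

def mark_flow_deleted(deleted_field: int) -> int:
    """Return the DELETED field with bit 31 set, indicating flow deletion."""
    return (deleted_field | DELETED_BIT) & 0xFFFFFFFF

assert mark_flow_deleted(0x0) == 0x80000000
```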
Table 18-response for delete L2ME payload (wave 1)
In the third wave 814, wave 2, the NC node 806 notifies the EN 804 that the requested flow has been deleted. The third wave 814 is initiated by the NC node 806 using the request message format with the concatenated responses from the second wave 812. The delete transaction 800 is complete when the EN 804 and any other requested nodes 808 provide their final responses, as shown in Table 19 below.
Table 19-response L2ME header and payload format (wave 2)
Listing parameterized quality of service flow transactions
The list pQoS flow transaction enables any network node to retrieve a list of the flows in the network. Referring to FIG. 9, one example of a list pQoS transaction 900 in accordance with the embodiment shown in FIG. 5 will now be described. The transaction typically includes two L2ME waves 910, 912 and is initiated when the EN 904 sends a submit message to the NC node 906 in the format given in Table 20 below. Each node 904, 906, 908 is configured to maintain a logical table of its ingress flows, numbered consecutively from 0. The order of the elements in the table changes only when a flow is created or deleted. Thus, a remote node can construct a complete flow list by selecting which entry in the logical table is returned first.
Table 20-submit L2ME frame format for lists
In the first wave 910, wave 0, the NC node 906 informs the nodes 904, 908 which range of QoS flows is being queried. The NC node 906 initiates the first wave 910 using a request message format based on the submit message received from the EN 904. The request message is sent to the nodes 908 that may provide a response. A queried node 908 may respond with a response message in the format given in Table 21 below. The return_flow_ID field in the payload portion of the list response frame contains a list of pQoS flows, starting from the node's flow_start_index and continuing up to the maximum number of flows specified by flow_max_return. Note that the flow update counter is incremented whenever the number of flows changes. For the purposes of this transaction, it is assumed that the node maintains a logical table of ingress flows in which each element is assigned an index from 0 up to the maximum number of flows.
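The paging behavior of the list response can be sketched as follows; the logical-table representation and helper name are assumptions, while the field names follow the description above.

```python
def list_response(ingress_flow_table, flow_start_index, flow_max_return,
                  flow_update_counter):
    """ingress_flow_table: the node's logical table of ingress flow IDs,
    indexed consecutively from 0. Returns the requested window plus the
    counter that lets the requester detect changes between pages."""
    window = ingress_flow_table[flow_start_index:flow_start_index + flow_max_return]
    return {"flow_update_counter": flow_update_counter,
            "return_flow_id": list(window)}

# A remote node builds the complete list by advancing flow_start_index
# page by page until fewer than flow_max_return entries are returned.
```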
Table 21-response L2ME frame payload format (wave 0)
In the second wave 912, wave 1, the NC node 906 informs the EN 904 and any other interested nodes 908 of the aggregated list of pQoS flows found in the first wave 910. The NC node 906 initiates the second wave 912 using the request message format with the concatenated responses from the first wave 910. The list transaction 900 is complete when all interested nodes 904, 908 send their final responses to the NC node 906, as shown in Table 22 below.
Table 22-request for list L2ME message format (wave 1)
Querying parameterized quality of service flow transactions
The purpose of the query pQoS flow transaction is to retrieve the attributes of a particular flow ID. Referring to FIG. 10, one example of a query pQoS transaction 1000 in accordance with the embodiment shown in FIG. 5 will now be described. The query pQoS transaction 1000 includes two waves 1010, 1012 and begins when the EN 1004 sends a submit message to the NC node 1006 specifying the flow ID of a particular pQoS flow.
Table 23-submit L2ME frame format for queries
The first wave 1010, wave 0, of the query transaction 1000 informs the nodes 1004, 1008 which pQoS flow is being queried; it is initiated when the NC node 1006 transmits a request message, based on the submit message, to the nodes 1004, 1008 to identify which node holds the particular flow. Each node 1004, 1008 may respond with a response message indicating whether it is the ingress node for the flow. An example of the response L2ME message format is shown in Table 23 below. If a node is not the ingress node for the flow, it responds with a response frame having a zero-length payload.
Table 23-if a flow is found, response for query L2ME payload (wave 0)
In the second wave 1012, wave 1, the query results are transmitted to the EN 1004 and any other nodes 1008 interested in the results. The NC node 1006 initiates the second wave using the request L2ME message format with the concatenated responses from the first wave 1010. The query transaction 1000 is complete when the interested nodes 1004, 1008 send their final response frames to the NC node 1006, as shown in Table 24 below.
Table 24-response for query L2ME message format (wave 1)
Maintaining parameterized quality of service flow transactions
The maintenance pQoS transaction may be used to periodically evaluate whether sufficient network resources exist for the committed pQoS flows. Referring to FIG. 11, one example of a maintenance pQoS transaction 1100 in accordance with the embodiment shown in FIG. 5 will now be described. The maintenance pQoS transaction 1100 may be accomplished by the NC node 1106 issuing this transaction at intervals between T22 (= T6/5) and T6 seconds, where T6 may be 25 or 50 seconds. Further, the NC node 1106 may issue this transaction T22 (= T6/5) seconds after a new L2ME pQoS-enabled node joins the network 502. The maintenance transaction 1100 includes two L2ME waves 1110, 1112 and does not require a submit message because the transaction is triggered by the NC node 1106.
The NC node 1106 initiates the first wave 1110, wave 0, and the maintenance transaction 1100 by transmitting a request message, an example of which is shown in Table 25 below. The request message asks all nodes 1104, 1108 to provide information about their current flow allocation metrics.
Table 25-request for maintenance L2ME frame format (wave 0)
Each requested node 1104, 1108 sends its response message for the first wave 1110 with the payload format shown in Table 26, specifying the existing_TPS and existing_PPS values for all existing flows for which the node is the ingress node.
Table 26-response for maintenance L2ME payload format (wave 0)
The second wave 1112, wave 1, enables the NC node 1106 to ascertain, based on the results of the first wave 1110, whether the current pQoS flows in the network can still be guaranteed in view of changing network conditions. The NC node 1106 initiates the second wave 1112 using the request message format header shown in Table 27, with the following changes:
1. Wave_state = 1
2. Directory_length = 0x10
3. Transaction_wave_number = 1
If the aggregate of all pQoS flows is over-committed, the NC node 1106 sets the OVER_COMMITTED field to '1' in the request message of the second wave 1112. Each node 1104, 1108 may then send a message to its application layer informing it that the pQoS flow resources of the network are no longer guaranteed.
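Based on the wave 0 metrics, the NC's over-commit check in wave 1 can be sketched as follows; the limit arguments reuse the placeholder names introduced earlier and are not values defined by this description.

```python
def over_committed_flag(wave0_responses, qos_tps_max, qos_pps_max):
    """wave0_responses: list of dicts with the existing_tps and existing_pps
    values reported by each ingress node. Returns 1 when the aggregate pQoS
    commitments exceed the current limits (the OVER_COMMITTED field), else 0."""
    total_tps = sum(r["existing_tps"] for r in wave0_responses)
    total_pps = sum(r["existing_pps"] for r in wave0_responses)
    return 1 if (total_tps > qos_tps_max or total_pps > qos_pps_max) else 0
```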
Table 27-request for maintenance L2ME payload message format (wave 1)
Field | Length | Usage
Over_committed | 32 bits | Set to '1' if the pQoS flows are over-committed; otherwise '0'
Reserved | 32 bits | 0x0; Type III
Reserved | 32 bits | 0x0; Type III
The maintenance transaction 1100 is complete when each node 1104, 1108 sends its response frame to the NC node 1106, as shown in Table 28 below.
Table 28-response for maintenance L2ME message format (wave 1)
In addition to the embodiments described above, the disclosed methods and apparatus may be embodied in the form of computer-implemented processes and apparatuses for practicing those processes. The disclosed methods and apparatus may also be embodied in the form of computer program code stored on, for example, a floppy disk, a read-only memory (ROM), a CD-ROM, a hard drive, a "ZIP(TM)" high-density disk, a DVD-ROM, a flash drive, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the disclosed methods and systems. The disclosed methods and apparatus may also be embodied in the form of computer program code, whether stored in a storage medium, loaded into, and/or executed by a computer, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the disclosed methods and apparatus. When implemented on a general-purpose processor, the computer program code segments configure the processor to create specific logic circuits.
Although the disclosed methods and systems have been described in terms of example embodiments, they are not limited thereto. Rather, the appended claims should be construed broadly to include other variants and embodiments of the disclosed methods and systems, which may be made by those skilled in the art without departing from the scope and range of equivalents of the methods and systems.

Claims (29)

1. A method of communication, comprising:
broadcasting, from a network coordinator to a plurality of nodes connected to a network, the plurality of nodes including at least one ingress node and at least one egress node, a request for guaranteed quality of service flow in the network;
receiving a first response to the request from the at least one ingress node, the first response indicating whether the at least one ingress node has available resources to transmit the guaranteed quality of service flow;
receiving a second response to the request from the at least one egress node indicating whether the at least one egress node has available resources to receive the guaranteed quality of service flow; and
allocating resources for the guaranteed quality of service flow if the at least one ingress node has available resources to transmit the guaranteed quality of service flow and the at least one egress node has available resources to receive the guaranteed quality of service flow.
2. The communication method of claim 1, further comprising receiving a submit message at the network coordinator, wherein the request is broadcast from the network coordinator as a result of the received submit message.
3. The communication method of claim 1, wherein an entry node is separate from the ingress node and the egress node.
4. The communication method of claim 2, wherein the submit message and the second request are layer 2 messages.
5. The communication method of claim 1, further comprising the steps of:
receiving a cost of admitting the flow from an ingress node, wherein the cost of the flow is based in part on a number of existing flows in the network.
6. The communications method of claim 1, wherein the ingress node is capable of supporting the guaranteed quality of service flow if the received first response includes slot or packet size information.
7. The communications method of claim 1, wherein the ingress node is unable to support the guaranteed quality of service flow if the received first response includes bandwidth information.
8. The communications method of claim 1, wherein the egress node is capable of supporting the guaranteed quality of service flow if the received second response includes slot or packet size information.
9. The communications method of claim 1, wherein the egress node cannot support the guaranteed quality of service flow if the received second response includes bandwidth information.
10. The communication method of claim 2, further comprising the steps of:
transmitting a unicast message to an ingress node if the guaranteed quality of service flow is not created, the unicast message comprising a maximum common bandwidth available on the network.
11. The communication method of claim 1, further comprising the steps of:
broadcasting a message from the network coordinator to all network nodes if the network coordinator determines that there are sufficient network resources to support the guaranteed quality of service flow.
12. The communication method of claim 10, wherein upon receipt of the broadcast message, the network node commits network resources to the guaranteed quality of service flow.
13. A machine readable storage medium encoded with program code, wherein when the program code is executed by a processor, the processor performs a method comprising:
broadcasting, from a network coordinator to a plurality of nodes connected to a network, the plurality of nodes including at least one ingress node and at least one egress node, a request for guaranteed quality of service flow in the network;
receiving a first response to the request from the at least one ingress node, the first response indicating whether the at least one ingress node has available resources to transmit the guaranteed quality of service flow;
receiving a second response to the request from the at least one egress node indicating whether the at least one egress node has available resources to receive the guaranteed quality of service flow; and
allocating resources for the guaranteed quality of service flow if the at least one ingress node has available resources to transmit the guaranteed quality of service flow and the at least one egress node has available resources to receive the guaranteed quality of service flow.
14. The machine-readable storage medium of claim 13, further comprising receiving a commit message at the network coordinator, wherein the request is broadcast from the network coordinator as a result of the received commit message.
15. The machine-readable storage medium of claim 13, wherein the first request is transmitted by an ingress node.
16. The machine-readable storage medium of claim 13, wherein the ingress node is capable of supporting a parameterized quality of service flow if the received first response comprises time slot or packet size information.
17. The machine-readable storage medium of claim 13, wherein the egress node is unable to support a parameterized quality of service flow if the received second response comprises bandwidth information.
18. The machine-readable medium of claim 13, further comprising the steps of:
determining, at the network coordinator, whether the guaranteed quality of service flow is supported in the network based on network bandwidth information received from a network node.
19. The machine-readable storage medium of claim 16, wherein the received network bandwidth information comprises a cost of the flow in the network.
20. The machine-readable medium of claim 16, further comprising the step of:
broadcasting a message from the network coordinator to all network nodes if resources are allocated for the guaranteed quality of service flow.
21. The machine-readable storage medium of claim 18, wherein upon receipt of the broadcast message, the network node commits network resources to the parameterized quality of service flow.
22. The machine-readable storage medium of claim 12, wherein the first and second requests are layer 2 messages.
23. A system, comprising:
a physical interface connected to a coordinated network, the physical interface configured to transmit and receive messages over the coordinated network; and
a quality of service module coupled to the physical interface, the quality of service module configured to admit one or more guaranteed quality of service flows in the coordinated network through a plurality of layer 2 messages.
24. The system of claim 21, wherein the quality of service module is coupled to the physical interface through L2 ME.
25. The system of claim 21, wherein the quality of service module is configured to receive messages from a plurality of nodes connected to the coordinated network.
26. The system of claim 21, wherein the coordinated network is a coaxial network.
27. The system of claim 21, wherein the coordinated network is a wireless network.
28. A system, comprising:
a network coordinator connected to a network, the network coordinator configured to broadcast a request for guaranteed quality of service flows in the network to a plurality of nodes;
at least one ingress node connected to the network, the at least one ingress node configured to transmit a first response to the network coordinator, the first response indicating whether the at least one ingress node has available resources to transmit the guaranteed quality of service flow;
at least one egress node connected to the network, the at least one egress node configured to transmit a second response to the network coordinator, the second response indicating whether the at least one egress node has available resources to receive the guaranteed quality of service flow;
wherein the network coordinator is further configured to allocate resources for the guaranteed quality of service flow if the at least one ingress node has available resources to transmit the quality of service flow and the at least one egress node has available resources to receive the quality of service flow.
29. The system of claim 28, wherein the network coordinator is further configured to broadcast the request as a result of receiving a submit message from an entry node.
CN200880007820.6A | 2007-02-06 | 2008-02-06 | Parameterized quality of service architecture in a network | Expired - Fee Related | CN101632268B (en)

Applications Claiming Priority (15)

Application Number | Priority Date | Filing Date | Title
US90020607P | 2007-02-06 | 2007-02-06
US60/900,206 | 2007-02-06
US90156407P | 2007-02-14 | 2007-02-14
US90156307P | 2007-02-14 | 2007-02-14
US60/901,564 | 2007-02-14
US60/901,563 | 2007-02-14
US92776607P | 2007-05-04 | 2007-05-04
US92761307P | 2007-05-04 | 2007-05-04
US92763607P | 2007-05-04 | 2007-05-04
US60/927,613 | 2007-05-04
US60/927,766 | 2007-05-04
US60/927,636 | 2007-05-04
US93131407P | 2007-05-21 | 2007-05-21
US60/931,314 | 2007-05-21
PCT/US2008/053222 (WO2008098083A2) | 2007-02-06 | 2008-02-06 | Parameterized quality of service architecture in a network

Publications (2)

Publication Number | Publication Date
CN101632268Atrue CN101632268A (en)2010-01-20
CN101632268B CN101632268B (en)2014-12-03

Family

ID=41576402

Family Applications (3)

Application Number | Title | Priority Date | Filing Date
CN200880007920APendingCN101632259A (en)2007-02-062008-02-06The 2nd layer management entity information receiving and transmitting framework in the network
CN200880007820.6AExpired - Fee RelatedCN101632268B (en)2007-02-062008-02-06Parameterized quality of service architecture in a network
CN2008800078403AExpired - Fee RelatedCN101632261B (en)2007-02-062008-02-06 Full mesh rate transactions in the network

Family Applications Before (1)

Application Number | Title | Priority Date | Filing Date
CN200880007920APendingCN101632259A (en)2007-02-062008-02-06The 2nd layer management entity information receiving and transmitting framework in the network

Family Applications After (1)

Application Number | Title | Priority Date | Filing Date
CN2008800078403AExpired - Fee RelatedCN101632261B (en)2007-02-062008-02-06 Full mesh rate transactions in the network

Country Status (1)

Country | Link
CN (3)CN101632259A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103548326A (en)*2011-03-302014-01-29熵敏通讯公司Method and apparatus for quality-of-service (QoS) management
TWI467961B (en)*2010-10-042015-01-01Broadcom CorpSystems and methods for providing service("srv") node selection
US8942250B2 (en)2009-10-072015-01-27Broadcom CorporationSystems and methods for providing service (“SRV”) node selection
CN108702339A (en)*2016-04-012018-10-23英特尔公司 Techniques for quality-of-service-based throttling in fabric architecture

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US7013339B2 (en)*1998-07-062006-03-14Sony CorporationMethod to control a network device in a network comprising several devices
JP2001053925A (en)*1999-08-102001-02-23Matsushita Graphic Communication Systems IncCommunication controller and serial bus managing device
US20030005130A1 (en)*2001-06-292003-01-02Cheng Doreen YiningAudio-video management in UPnP
CN1173544C (en)*2001-07-092004-10-27北京艺盛网联科技有限公司Coaxial long-distance Ethernet connection method and its equipment
SE524696C2 (en)*2002-10-302004-09-21Operax AbNetwork resource allocation method for Internet protocol network, involves allocating network resources individually for requested network resource reservation and updating usage history statistics
CN100505639C (en)*2005-01-122009-06-24华为技术有限公司 Processing method of multi-service flow resource application

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US8942250B2 (en)2009-10-072015-01-27Broadcom CorporationSystems and methods for providing service (“SRV”) node selection
TWI467961B (en)*2010-10-042015-01-01Broadcom CorpSystems and methods for providing service("srv") node selection
CN103548326A (en)*2011-03-302014-01-29熵敏通讯公司Method and apparatus for quality-of-service (QoS) management
CN108702339A (en)*2016-04-012018-10-23英特尔公司 Techniques for quality-of-service-based throttling in fabric architecture
US11343177B2 (en)2016-04-012022-05-24Intel CorporationTechnologies for quality of service based throttling in fabric architectures
US12058036B2 (en)2016-04-012024-08-06Intel CorporationTechnologies for quality of service based throttling in fabric architectures

Also Published As

Publication number | Publication date
CN101632261B (en)2013-09-18
CN101632268B (en)2014-12-03
CN101632261A (en)2010-01-20
CN101632259A (en)2010-01-20

Similar Documents

Publication | Publication Date | Title
US12438825B2 (en)Parameterized quality of service in a network
US10432422B2 (en)Parameterized quality of service architecture in a network
CN106209758B (en)Method and device for processing SOME/IP flow by interworking with AVB technology
CA2762683C (en)Quality of service for distribution of content to network devices
US8468200B2 (en)Retransmission admission mechanism in a managed shared network with quality of service
CN101783758B (en)Family coaxial network MAC layer data transmission method based on service differentiation
CN101632268A (en)Parameterized quality of service architecture in the network
US20050122904A1 (en)Preventative congestion control for application support
CN102938743B (en)Data transmission method and device

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
CF01 | Termination of patent right due to non-payment of annual fee
CF01 | Termination of patent right due to non-payment of annual fee

Granted publication date:20141203

Termination date:20170206


