US9270602B1 - Transmit rate pacing of large network traffic bursts to reduce jitter, buffer overrun, wasted bandwidth, and retransmissions - Google Patents

Transmit rate pacing of large network traffic bursts to reduce jitter, buffer overrun, wasted bandwidth, and retransmissions

Info

Publication number
US9270602B1
US9270602B1 (Application No. US13/732,337)
Authority
US
United States
Prior art keywords
data packets
subset
another
buckets
send ring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/732,337
Inventor
Alan B. Mimms
Timothy S. Michels
Jonathan M. Hawthorne
William R. Baumann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
F5 Inc
Original Assignee
F5 Networks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by F5 Networks Inc
Priority to US13/732,337
Assigned to F5 Networks, Inc. Assignors: Hawthorne, Jonathan M.; Michels, Timothy S.; Baumann, William R.; Mimms, Alan B.
Application granted
Publication of US9270602B1
Status: Active


Abstract

A system, method and medium are disclosed which include selecting, at a software component of a network traffic management device, a first bucket having a first predetermined transmit time. The disclosure includes populating one or more selected data packet descriptors associated with one or more corresponding data packets in the first bucket. The disclosure includes releasing the first bucket to a hardware component of the network traffic management device, wherein the hardware component processes the one or more data packet descriptors of the first bucket for the first predetermined transmit time.

Description

FIELD
This technology relates to a system and method for pacing network traffic between network devices.
BACKGROUND
It is very common for a modern server to transmit large blocks of data in one burst to a single destination where the network path to the destination has much lower bandwidth than the very large bandwidth of the server's path within the data center and the first few hops along that path. Often these bursts are in response to a request for data, and the data is read from a storage medium (e.g., a disk) in a large block to help amortize the cost of the read operation. The large chunk of data is then dumped by the server's OS into the network at full speed, adding latency for other traffic following some portion of the same path through the network. This latency is unnecessary since other traffic could easily have been interleaved with the packets of the burst if the packets of the burst were spaced in time to match the true bandwidth of the full path to the destination. Further, such bursts can result in loss of some of the burst data due to overruns of the buffering in the network path to the destination. Such losses necessitate retransmission of some of the large chunk of data—effectively reducing the gains that were hoped to be achieved by the batching of the read operation into a large chunk and reducing the overall throughput of the server. If the packets in these bursts were, instead, spread out in time at a pace matching the full network path data rate, both of these problems are easily solved.
What is needed is a system and method which overcomes these disadvantages.
SUMMARY
In an aspect, a method for pacing data packets from one or more sessions comprises selecting, at a software component of a network traffic management device, a first bucket having a first predetermined transmit time. The method comprises populating one or more selected data packet descriptors associated with one or more corresponding data packets in the first bucket. The method comprises releasing the first bucket to a hardware component of the network traffic management device, wherein the hardware component processes the one or more data packet descriptors of the first bucket for the first predetermined transmit time.
In an aspect, a processor readable medium having stored thereon instructions for pacing data packets from one or more sessions comprises machine executable code which, when executed by at least one processor and/or network interface, causes a network traffic management device to perform a method comprising selecting a first bucket having a first predetermined transmit time; populating one or more selected data packet descriptors associated with one or more corresponding data packets in the first bucket; and releasing the first bucket to a hardware component of the network traffic management device, wherein the hardware component processes the one or more data packet descriptors of the first bucket for the first predetermined transmit time.
In an aspect, a network traffic management device comprises a memory containing non-transitory machine readable medium comprising machine executable code having stored thereon instructions for pacing data packets from one or more sessions. A network interface configured to communicate with one or more servers over a network. A processor coupled to the network interface and the memory, the processor configured to execute the code which causes the processor to perform, with the network interface, a method comprising: selecting a first bucket having a first predetermined transmit time; populating one or more selected data packet descriptors associated with one or more corresponding data packets in the first bucket; releasing the first bucket to a hardware component of the network traffic management device, wherein the hardware component processes the one or more data packet descriptors of the first bucket for the first predetermined transmit time.
In one or more of the above aspects, the method further comprises selecting, at the software component, a second bucket having a second predetermined transmit time that is the same as the first transmit time of the first bucket; populating one or more selected data packet descriptors associated with one or more corresponding data packets in the second bucket; and releasing the second bucket to the hardware component of the network traffic management device, wherein the hardware component processes the one or more data packet descriptors of the second bucket for the second predetermined transmit time.
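The select/populate/release flow described in these aspects can be sketched in C. All names, the bucket capacity, and the descriptor fields below are illustrative assumptions, not the disclosure's actual data layout:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical descriptor and bucket shapes for illustration only. */
typedef struct {
    uint64_t packet_addr;   /* points at the pending transmit packet */
    uint32_t length;        /* packet length in bytes */
} pkt_descriptor;

#define BUCKET_CAPACITY 64  /* assumed per-bucket descriptor limit */

typedef struct {
    uint64_t transmit_time;               /* predetermined transmit time */
    pkt_descriptor descs[BUCKET_CAPACITY];
    size_t count;                         /* descriptors populated so far */
    int released;                         /* handed off to hardware yet? */
} bucket;

/* Populate a selected descriptor into the bucket; returns 0 on success,
   -1 if the bucket is full or already released to the hardware component. */
int bucket_populate(bucket *b, pkt_descriptor d) {
    if (b->released || b->count >= BUCKET_CAPACITY)
        return -1;
    b->descs[b->count++] = d;
    return 0;
}

/* Release the bucket to the hardware component, which processes its
   descriptors at the bucket's predetermined transmit time. */
void bucket_release(bucket *b) {
    b->released = 1;
}
```

In this sketch the software component would call bucket_populate once per selected descriptor and then bucket_release to hand the whole bucket to hardware; after release the bucket is immutable from software's side.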
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram of an example system environment that includes a network traffic management device in accordance with an aspect of the present disclosure.
FIG. 2A is a block diagram of the network traffic management device in accordance with an aspect of the present disclosure.
FIG. 2B illustrates a block diagram of the network interface in accordance with an aspect of the present disclosure.
FIG. 2C illustrates further details of the network traffic management device in accordance with an aspect of the present disclosure.
FIG. 3 illustrates an example transmit ring or bucket in accordance with an aspect of the present disclosure.
FIG. 4 illustrates a time line of an example transmit ring or bucket in accordance with an aspect of the present disclosure.
FIG. 5 illustrates a block diagram of hardware-based time enforcement performed by a high speed bridge (HSB) priority mechanism in accordance with an aspect of the present disclosure.
FIG. 6 illustrates an example transmit time line in accordance with an aspect of the present disclosure.
FIG. 7 illustrates a flow chart of the software implementation of populating buckets with data packet descriptors in accordance with an aspect of the present disclosure.
FIG. 8 illustrates a flow chart of the software implementation of populating buckets with data packet descriptors in accordance with an aspect of the present disclosure.
While these examples are susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail preferred examples with the understanding that the present disclosure is to be considered as an exemplification and is not intended to limit the broad aspect to the embodiments illustrated.
DETAILED DESCRIPTION
FIG. 1 is a diagram of an example system environment that includes a network traffic management device in accordance with an aspect of the present disclosure. The example system environment 100 includes one or more Web and/or non-Web application servers 102 (referred to generally as “servers”), one or more client devices 106 and one or more network traffic management devices 110, although the environment 100 can include other numbers and types of devices in other arrangements. The network traffic management device 110 is coupled to the servers 102 via local area network (LAN) 104 and client devices 106 via a wide area network 108. Generally, client device requests are sent over the network 108 to the servers 102 and are received or intercepted by the network traffic management device 110.
Client devices 106 comprise network computing devices capable of connecting to other network computing devices, such as network traffic management device 110 and/or servers 102. Such connections are performed over wired and/or wireless networks, such as network 108, to send and receive data, such as for Web-based requests, receiving server responses to requests and/or performing other tasks. Non-limiting and non-exhaustive examples of such client devices 106 include personal computers (e.g., desktops, laptops), tablets, smart televisions, video game devices, mobile and/or smart phones and the like. In an example, client devices 106 can run one or more Web browsers that provide an interface for operators, such as human users, to interact with for making requests for resources to different web server-based applications and/or Web pages via the network 108, although other server resources may be requested by client devices.
The servers 102 comprise one or more server network devices or machines capable of operating one or more Web-based and/or non-Web-based applications that may be accessed by other network devices (e.g. client devices, network traffic management devices) in the environment 100. The servers 102 can provide web objects and other data representing requested resources, such as particular Web page(s), image(s) of physical objects, JavaScript and any other objects, that are responsive to the client devices' requests. It should be noted that the servers 102 may perform other tasks and provide other types of resources. It should also be noted that while only two servers 102 are shown in the environment 100 depicted in FIG. 1, other numbers and types of servers may be utilized in the environment 100. It is contemplated that one or more of the servers 102 may comprise a cluster of servers managed by one or more network traffic management devices 110. In one or more aspects, the servers 102 may be configured to execute any version of Microsoft® IIS server, RADIUS server, DIAMETER server and/or Apache® server, although other types of servers may be used.
Network 108 comprises a publicly accessible network, such as the Internet, which is connected to the servers 102, client devices 106, and network traffic management devices 110. However, it is contemplated that the network 108 may comprise other types of private and public networks that include other devices. Communications, such as requests from clients 106 and responses from servers 102, take place over the network 108 according to standard network protocols, such as the HTTP, UDP and/or TCP/IP protocols, as well as other protocols. As per TCP/IP protocols, requests from the requesting client devices 106 may be sent as one or more streams of data packets over network 108 to the network traffic management device 110 and/or the servers 102. Such protocols can be utilized by the client devices 106, network traffic management device 110 and the servers 102 to establish connections, send and receive data for existing connections, and the like.
Further, it should be appreciated that network 108 may include local area networks (LANs), wide area networks (WANs), direct connections and any combination thereof, as well as other types and numbers of network types, including interconnected sets of LANs or other networks based on differing architectures and protocols. Network devices such as client devices 106, servers 102, network traffic management devices 110, routers, switches, hubs, gateways, bridges, cell towers and other intermediate network devices may act within and between LANs and other networks to enable messages and other data to be sent between network devices. Also, communication links within and between LANs and other networks typically include twisted wire pair (e.g., Ethernet), coaxial cable, analog telephone lines, full or fractional dedicated digital lines including T1, T2, T3, and T4, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links and other communications links known to those skilled in the relevant arts. Thus, the network 108 is configured to handle any communication method by which data may travel between network devices.
LAN 104 comprises a private local area network that allows communications between the one or more network traffic management devices 110 and one or more servers 102 in the secured network. It is contemplated, however, that the LAN 104 may comprise other types of private and public networks with other devices. Networks, including local area networks, besides being understood by those skilled in the relevant arts, have already been generally described above in connection with network 108 and thus will not be described further.
As shown in the example environment 100 depicted in FIG. 1, the one or more network traffic management devices 110 are interposed between client devices 106, with which they communicate via network 108, and servers 102, with which they communicate via LAN 104. In particular to the present disclosure, the network traffic management device 110 operates in conjunction with a clustered multi-processing (CMP) system which includes one or more network traffic management devices 110, each having one or more cores or processors 200 (FIG. 2A). Generally, the network traffic management device 110 manages network communications, which may include one or more client requests and server responses, via the network 108 between the client devices 106 and one or more of the servers 102. In any case, the network traffic management device 110 may manage the network communications by performing several network traffic related functions involving the communications. Some functions include, but are not limited to, load balancing, access control, and validating HTTP requests using JavaScript code that is sent back to requesting client devices 106.
FIG. 2A is a block diagram of the network traffic management device in accordance with an aspect of the present disclosure. As shown in FIG. 2A, the example network traffic management device 110 includes one or more device processors or cores 200, one or more device I/O interfaces 202, one or more network interfaces 204, and one or more device memories 206, which are coupled together by one or more buses 208. It should be noted that the network traffic management device 110 can be configured to include other types and/or numbers of components and is thus not limited to the configuration shown in FIG. 2A.
Device processor 200 of the network traffic management device 110 comprises one or more microprocessors configured to execute computer/machine readable and executable instructions stored in the device memory 206. Such instructions, when executed by one or more processors 200, implement general and specific functions of the network traffic management device 110, including the inventive process described in more detail below. It is understood that the processor 200 may comprise other types and/or combinations of processors, such as digital signal processors, micro-controllers, application specific integrated circuits (“ASICs”), programmable logic devices (“PLDs”), field programmable logic devices (“FPLDs”), field programmable gate arrays (“FPGAs”), and the like. The processor 200 is programmed or configured according to the teachings as described and illustrated herein.
Device I/O interfaces 202 comprise one or more user input and output device interface mechanisms. The interface may include a computer keyboard, mouse, display device, and the corresponding physical ports and underlying supporting hardware and software to enable the network traffic management device 110 to communicate with other network devices in the environment 100. Such communications may include accepting user data input and providing user output, although other types and numbers of user input and output devices may be used. Additionally or alternatively, as will be described in connection with network interface 204 below, the network traffic management device 110 may communicate with the outside environment for certain types of operations (e.g. smart load balancing) via one or more network management ports.
Network interface 204 comprises one or more mechanisms that enable the network traffic management device 110 to engage in network communications over the LAN 104 and the network 108 using one or more of a number of protocols, such as TCP/IP, HTTP, UDP, RADIUS and DNS. However, it is contemplated that the network interface 204 may be constructed for use with other communication protocols and types of networks. Network interface 204 is sometimes referred to as a transceiver, transceiving device, or network interface card (NIC), which transmits and receives network data packets over one or more networks, such as the LAN 104 and the network 108. In an example, where the network traffic management device 110 includes more than one device processor 200 (or a processor 200 has more than one core), each processor 200 (and/or core) may use the same single network interface 204 or a plurality of network interfaces 204. Further, the network interface 204 may include one or more physical ports, such as Ethernet ports, to couple the network traffic management device 110 with other network devices, such as servers 102. Moreover, the interface 204 may include certain physical ports dedicated to receiving and/or transmitting certain types of network data, such as device management related data for configuring the network traffic management device 110 or client request/server response related data.
Bus 208 may comprise one or more internal device component communication buses, links, bridges and supporting components, such as bus controllers and/or arbiters. The bus 208 enables the various components of the network traffic management device 110, such as the processor 200, device I/O interfaces 202, network interface 204, and device memory 206, to communicate with one another. However, it is contemplated that the bus 208 may enable one or more components of the network traffic management device 110 to communicate with one or more components in other network devices as well. Example buses include HyperTransport, PCI, PCI Express, InfiniBand, USB, Firewire, Serial ATA (SATA), SCSI, IDE and AGP buses. However, it is contemplated that other types and numbers of buses may be used, whereby the particular types and arrangement of buses will depend on the particular configuration of the network traffic management device 110.
Device memory 206 comprises computer readable media, namely computer readable or processor readable storage media, which are examples of machine-readable storage media. Computer readable storage/machine-readable storage media may include volatile, nonvolatile, removable, and non-removable media implemented in any method or technology for storage of information. Examples of computer readable storage media include RAM, BIOS, ROM, EEPROM, flash/firmware memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the information and which can be accessed by a computing or specially programmed network device, such as the network traffic management device 110.
Such storage media includes computer readable/processor-executable instructions, data structures, program modules, or other data, which may be obtained and/or executed by one or more processors, such as device processor 200. Such instructions, when executed, allow or cause the processor 200 to perform actions, including performing the inventive processes described below. The memory 206 may contain other instructions relating to the implementation and operation of an operating system for controlling the general operation and other tasks performed by the network traffic management device 110.
FIG. 2B illustrates a block diagram of the network interface in accordance with an aspect of the present disclosure. In particular, FIG. 2B shows the DMA processes used by network interface 204 for using multiple independent DMA channels with corresponding multiple applications, where each application has its own driver, and for sending packets.
As illustrated in FIG. 2B, the host system 111 can send a network data packet stored in host memory 212 to the network 108 via network interface controller 204 and Ethernet port 236. A send DMA operation is performed when the host system 111 uses a DMA channel to move a block of data from host memory 212 to a network interface controller peripheral (not shown) via network 108. To perform a send DMA operation, the host processor 200 places the target network data packet into DMA packet buffer 216 and creates a DMA send descriptor (not shown separately) in send DMA descriptor rings 218. The DMA send descriptor is jointly managed by the host system 111 and the network interface controller 204. The DMA send descriptor includes an address field and a length field. The address field points to the start of the target network data packet in DMA packet buffer 216. The length field declares how many bytes of target data are present in the DMA packet buffer 216. The DMA send descriptor also has a set of bit flags (not shown) used to signal additional target data control and status information.
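The send descriptor layout just described (address field, length field, plus bit flags) might be sketched as a C struct. The field widths and flag names here are assumptions for illustration; the disclosure does not fix them:

```c
#include <stdint.h>

/* Sketch of a DMA send descriptor: an address pointing to the start of
   the target packet in the DMA packet buffer, a length giving the byte
   count, and control/status bit flags. Widths are assumed. */
typedef struct {
    uint64_t addr;    /* start of target packet in DMA packet buffer */
    uint32_t length;  /* bytes of target data present in the buffer */
    uint32_t flags;   /* control and status bit flags */
} dma_send_desc;

/* Example flag bits (hypothetical names). */
#define DESC_FLAG_SOP  (1u << 0)  /* start of packet */
#define DESC_FLAG_EOP  (1u << 1)  /* end of packet */

/* Build a descriptor for a single complete packet. */
dma_send_desc make_send_desc(uint64_t pkt_addr, uint32_t len) {
    dma_send_desc d = { pkt_addr, len, DESC_FLAG_SOP | DESC_FLAG_EOP };
    return d;
}
```

The host would write such a descriptor into the send ring and then notify the controller via a mailbox register, as described below.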
By way of example only, return DMA descriptor rings and send DMA descriptor rings 218 can be physically located in the same hardware memory blocks, functioning as return and send DMA rings, respectively, at different times. Alternatively, separate and distinct memory blocks within DMA memory resources 214 of host memory 212 may be reserved for the return DMA descriptor rings and send DMA descriptor rings 218.
Host system 111 places the send descriptor on the send DMA descriptor rings 218 in host system memory 212. The host processor 200 determines the QoS of the network packet to be transferred to the network 108, moves the network packet to the appropriate DMA packet buffer 216 and places the descriptor on the appropriate descriptor ring 1-4 in send DMA descriptor rings 218. The descriptor ring in send DMA descriptor rings 218 is chosen by the host system 111, which selects the DMA channel, its associated peripheral, and the QoS level within the DMA channel. Send descriptors created by host system 111 in send DMA descriptor rings 218 can be of variable types, where each descriptor type can have a different format and size. The send DMA descriptor rings 218 are capable of holding descriptors of variable type.
The host processor 200 writes one or more mailbox registers 230 of the network interface controller 204 to notify the network interface controller 204 that the packet is ready. In performing this notification, the host processor 200 performs a write operation to a memory mapped network interface controller register (mailbox register 230). The host processor 200 can report the addition of multiple descriptors onto the send DMA ring in a single update, or alternatively, in multiple updates.
The appropriate packet DMA engine within DMA engine 224 is notified that the packet is ready. The packet DMA engine 224 can be selected from available DMA channels, or if a specific application has a dedicated DMA channel, the associated packet DMA engine 224 for that channel is used. The DMA engine 224 retrieves the DMA descriptor from the send DMA descriptor rings 218. When multiple descriptors are outstanding in the send DMA descriptor rings 218, the DMA engine 224 may retrieve more than one descriptor. Retrieving multiple descriptors at a time maximizes bus bandwidth and hardware efficiency. The DMA engine 224 is capable of receiving and processing send descriptors of variable type, format, and size.
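The batched descriptor retrieval described above can be sketched with a simple producer/consumer ring. The ring depth, index scheme, and function names are assumptions; the point is that one fetch drains several outstanding descriptors at once, amortizing bus transactions:

```c
#include <stddef.h>

#define RING_SIZE 256   /* assumed ring depth */

/* Minimal send-ring view: the host advances `tail` as it posts
   descriptors; the DMA engine advances `head` as it consumes them. */
typedef struct {
    int descs[RING_SIZE];  /* stand-in for descriptor storage */
    size_t head;           /* next descriptor the DMA engine will fetch */
    size_t tail;           /* one past the last descriptor the host posted */
} send_ring;

/* Fetch up to `max_batch` outstanding descriptors in one pass; returns
   how many were copied into `out`. */
size_t ring_fetch_batch(send_ring *r, int *out, size_t max_batch) {
    size_t n = 0;
    while (r->head != r->tail && n < max_batch) {
        out[n++] = r->descs[r->head];
        r->head = (r->head + 1) % RING_SIZE;
    }
    return n;
}
```

In hardware the batch would arrive via a single wide bus read rather than a loop, but the head/tail bookkeeping is the same idea.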
As outlined above, the packet DMA engine 224 monitors the progress of the host DMA operations via a set of mailbox registers 230. Each packet DMA engine 224 supports its own set of mailbox registers 230. The mailbox registers 230 reside in a mapped address space of the network interface controller 204. When appropriate, the host processor 200 accesses the mailbox registers 230 by performing memory mapped read and write transactions to the appropriate target address. The mailbox registers 230 also contain ring status information for the Ring to QoS Mapper 228. In this send DMA example, the packet DMA engine 224 reads the send descriptor, performs the DMA operation defined by it, and reports to the host system 111 that the DMA operation is complete. During the DMA operation, data is received from one or more CPU bus read transactions (e.g., HyperTransport or PCI Express read transactions).
DMA scheduler 226 chooses packets out of packet buffers 216 based upon the priority of the queued network data packets and schedules the transfer to the appropriate packet DMA engine 224. For clarity and brevity, only a single packet buffer, a single DMA scheduler, and a single DMA engine are shown in FIG. 2B, but it should be understood that additional packet buffers, DMA schedulers, and DMA engines supporting the independent DMA channels 1-n and associated applications App(1)-App(n) can be included in network interface controller 204.
The packet buffers 216 are selected based on the novel scheme (discussed below) using DMA scheduler 226. The DMA scheduler 226 selects which descriptor ring 1-4 out of the return DMA descriptor rings (also referred to as return DMA rings, or send rings) within DMA memory resources 212 to service, and the matching packet buffer 216 is accessed for a single packet. The scheduling process is then repeated for the next packet.
Each network packet retrieved from a packet buffer 216 is routed to the appropriate DMA channel controlled by the respective packet DMA engine, such as the packet DMA engine 224. The DMA channel segments the network packet for delivery to host memory 212 via several smaller HyperTransport packets. These HyperTransport packets are interleaved with HyperTransport packets from the other DMA channels in the network interface controller 204.
Ring to QoS Mapper 228 examines the assigned send DMA ring in send DMA descriptor rings 218 and receives packet data and packet control information from the packet DMA engine 224. Using the control information, the Ring to QoS Mapper 228 stamps the appropriate QoS onto the network data packet, thereby allowing host system 111 to send the network data packet back to the network 108. For example, using the control information, the Ring to QoS Mapper 228 can create and prepend a HiGig header to the packet data.
An egress DMA routing interface 232 arbitrates access to the network for DMA send packets. When a Ring to QoS Mapper 228 has a network packet ready to send, the egress DMA routing interface 232 arbitrates its access to the Ethernet port 236 and routes the packet to the correct interface if there is more than one present in the network interface controller 204. The egress DMA routing interface 232 behaves like a crossbar switch and monitors its attached interfaces for available packets. When a packet becomes available, the egress DMA routing interface 232 reads the packet from the selected Ring to QoS Mapper 228 and writes it to the destination interface. The egress DMA routing interface 232 moves complete packets to Ethernet MACs 234. When multiple sources are contending for the egress DMA routing interface 232, the egress DMA routing interface 232 uses a weighted arbitration scheme, as discussed in more detail below.
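One common way to realize a weighted arbitration scheme among contending sources is weighted round-robin, where each source receives grants in proportion to its weight within a round. The disclosure does not specify its exact scheme, so this is a representative sketch with assumed names; weights are assumed to be positive:

```c
#include <stddef.h>

/* Weighted round-robin arbiter: each contending source is granted
   `weights[i]` times per round before the arbiter moves on. */
typedef struct {
    size_t nsrc;
    const unsigned *weights;   /* grants per source per round (all > 0) */
    size_t cur;                /* current source index */
    unsigned credit;           /* grants left for the current source */
} wrr_arbiter;

void wrr_init(wrr_arbiter *a, const unsigned *w, size_t n) {
    a->nsrc = n;
    a->weights = w;
    a->cur = 0;
    a->credit = w[0];
}

/* Return the index of the source granted next. */
size_t wrr_next(wrr_arbiter *a) {
    while (a->credit == 0) {               /* current source exhausted */
        a->cur = (a->cur + 1) % a->nsrc;   /* advance to next source */
        a->credit = a->weights[a->cur];    /* refill its credit */
    }
    a->credit--;
    return a->cur;
}
```

With weights {2, 1}, source 0 gets two grants for every one grant to source 1, approximating bandwidth shares of 2:1.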
The network interface controller 204 provides DMA services to a host complex, such as the host system 111, on behalf of its attached I/O devices, such as the Ethernet port 236. DMA operations involve the movement of data between the host memory 212 and the network interface controller 204. The network interface controller 204 creates and manages HyperTransport or other types of CPU bus read/write transactions targeting host memory 212. Data transfer sizes supported by DMA channels maintained by various components of application delivery controller 110 are much larger than the maximum HyperTransport or CPU bus transaction size. The network interface controller 204 segments single DMA operations into multiple smaller CPU bus or HyperTransport transactions. Additionally, the network interface controller 204 creates additional CPU bus or HyperTransport transactions to support the transfer of data structures between the network interface controller 204 and host memory 212.
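The segmentation step described here, splitting one DMA operation into multiple bus transactions no larger than the bus maximum, amounts to simple chunking. The 64-byte cap in the usage below is an assumed example value, not a figure from the disclosure:

```c
#include <stddef.h>

/* Split one DMA operation of `total_len` bytes into bus transactions of
   at most `max_txn` bytes each, as the controller must do when the DMA
   size exceeds the maximum HyperTransport/CPU-bus transaction size.
   Writes each transaction length into `lens` and returns the count. */
size_t segment_dma(size_t total_len, size_t max_txn,
                   size_t *lens, size_t max_txns) {
    size_t n = 0;
    while (total_len > 0 && n < max_txns) {
        size_t chunk = total_len < max_txn ? total_len : max_txn;
        lens[n++] = chunk;       /* one bus transaction of `chunk` bytes */
        total_len -= chunk;
    }
    return n;
}
```

For example, a 1500-byte packet with a 64-byte transaction cap yields 23 full transactions plus one 28-byte tail.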
FIG. 2C illustrates further details of the network traffic management device in accordance with an aspect of the present disclosure. In particular, the network traffic management device 110 is shown handling a plurality of independent applications App(1)-App(n) being executed by one or more processors (e.g., host processor 200) in host system 111. Each application in the plurality of applications App(1)-App(n) has its own respective application driver, shown as Driver 1, Driver 2, . . . , Driver ‘n’, associated with the respective application, where the index n denotes an unlimited number of executing applications and drivers. Applications App(1)-App(n) send and receive data packets from and to the network 108 (and/or LAN 104), respectively, using respective DMA channels (e.g., DMA channels 1-n). DMA channels 1-n are uniquely assigned to individual applications out of App(1)-App(n). In this example, drivers 1-n manage access to respective DMA channels 1-n and do not require knowledge of each other or a common management database or entity (e.g., a hypervisor). By way of example only, each of applications App(1)-App(n) can be independent instances of different applications, or alternatively, may be independent instances of the same application, or further, may be different operating systems supported by different processors in host system 111 (e.g., host processor 200).
DMA channels 1-n each have unique independent resources allotted to them, for example, a unique PCI bus identity including a configuration space and base address registers, an independent view of host system memory 212, a unique set of DMA descriptor ring buffers, a unique set of packet buffers 216, unique DMA request/completion signaling (through interrupts or polled memory structures), and other resources. Each of DMA channels 1-n is unique and independent, thereby permitting management by separate unique drivers 1-n.
The network interface controller 204 classifies received packets to determine the destination application selected from applications App(1)-App(n) and thereby selects the matching DMA channel to deliver the packet to the corresponding application. By way of example only, packet classification includes reading packet header fields, thereby permitting application identification. Further by way of example only, packet classification includes hash calculation for distribution of packets across multiple instances of the same application, and/or reading a cookie stored, for example, in the network interface controller 204, associated with the application and the received network packet.
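The hash-based classification mentioned above can be sketched as hashing header fields and reducing modulo the channel count, so all packets of one flow land on the same DMA channel. The FNV-1a hash and the channel count here are illustrative choices, not what the controller necessarily implements:

```c
#include <stdint.h>
#include <stddef.h>

/* FNV-1a over raw header bytes; an example hash, chosen only for its
   simplicity and determinism. */
static uint32_t fnv1a(const uint8_t *data, size_t len) {
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 16777619u;
    }
    return h;
}

/* Map a packet's header fields onto one of `nchannels` DMA channels.
   Identical headers (i.e., the same flow) always map to the same
   channel, distributing flows across application instances. */
unsigned classify_to_channel(const uint8_t *hdr, size_t hdr_len,
                             unsigned nchannels) {
    return fnv1a(hdr, hdr_len) % nchannels;
}
```

A real classifier would hash specific fields (e.g., a 5-tuple) rather than an opaque byte blob, and might instead consult a stored cookie as the text notes.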
In general, the present disclosure utilizes hardware and software in the network traffic management device when pacing network traffic being sent to another network device (e.g., a client or server). The purpose is for the network traffic management device to pace delivery of session data to the client to match the rate at which the client consumes the data, which adds value for mobile clients and networks. The network traffic management device utilizes a transmit time calendar having a plurality of quanta of transmit times. For example, one quantum may be 1 microsecond, although other time durations are contemplated.
FIG. 3 illustrates an example transmit ring or bucket in accordance with an aspect of the present disclosure. As shown in FIG. 3, the bucket 300 is a software-based data construct which holds a plurality of data packet DMA descriptors that point to the pending transmit packets; a single bucket can hold packets from a plurality of different sessions. The bucket 300 has a fence 302 which marks the last descriptor in the bucket. Software determines which data packets to send and when to send them. The software component communicates with the hardware component by enhancing the data structures with a time component indicating when the selected data packets are to be sent. In particular, the software component divides session data into multiple maximum segment sized (MSS) or smaller sized packets. The software component distributes the packets into buckets separated by fixed transmit times, and the buckets are then handed off to the hardware component, such as a DMA ring, which handles delivery of the data packets populated in the buckets by processing the buckets themselves.
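The bucket and fence described above can be sketched as a small data structure. The class layout and field names are illustrative assumptions, since the disclosure only specifies that a bucket holds descriptors for a fixed time quantum and is terminated by a fence.

```python
class Bucket:
    """A software transmit bucket: DMA descriptors plus a fence marker.

    Descriptors are plain dicts here; real descriptors would point at
    packet buffers in host memory.
    """
    def __init__(self, quantum_us):
        self.quantum_us = quantum_us   # fixed transmit time for this bucket
        self.descriptors = []
        self.fenced = False            # fence marks the last descriptor

    def add(self, descriptor):
        assert not self.fenced, "bucket already fenced and released"
        self.descriptors.append(descriptor)

    def fence(self):
        # Timing boundary: lets the hardware do precise bucket-to-bucket timing.
        self.fenced = True
        return self.descriptors

# Descriptors from two different sessions can share one bucket.
b = Bucket(quantum_us=1)
b.add({"session": 7, "len": 1460})
b.add({"session": 9, "len": 1460})
descs = b.fence()
```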
FIG. 4 illustrates a time line of an example transmit ring or bucket in accordance with an aspect of the present disclosure. As can be seen in FIG. 4, the transmit time calendar utilizes a plurality of buckets 300 into which session data 400, such as data packet DMA descriptors, is populated. Each bucket has a predetermined size, and adjacent buckets are separated by a fixed transmit time or quantum (shown as T1-T4). It should be noted that the time calendar is configured to have a finite number of buckets, wherein the calendar effectively wraps around to the first bucket after the last bucket has been handled. In addition, multiple packet descriptors can be placed in the same bucket to increase bandwidth. The software component may decide whether to pace a data packet, as opposed to treating it as bulk data (FIG. 5), based on its size, the type of traffic the data packet is associated with, QoS parameters (based on flow), and the like.
In an aspect, one or more buckets 300 can be skipped during distribution by the software component to increase intra-packet spacing. One way of determining how many buckets to skip before placing a data packet descriptor in a bucket 300 is based on the type or class of application to which the data is delivered (e.g., video streaming). For example, video streaming applications benefit from a constant rate of data delivery. Another way of determining the skip count (based on speed) is to use TCP congestion control algorithms, which calculate how many packets are to be sent over time. The software does not have to hunt for holes where packet descriptors can be inserted. Instead, the software component need only drop packet descriptors into the one or more buckets 300.
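A minimal sketch of the speed-based skip calculation, assuming a target rate supplied by a congestion-control or application-class policy. The helper name and the rounding choice are assumptions; the disclosure only says the skip count can be derived from how many packets are to be sent over time.

```python
def buckets_to_skip(packet_bits, target_bps, quantum_us):
    """Buckets to leave empty between successive descriptors of a flow so
    its average rate matches target_bps, given fixed bucket quanta."""
    seconds_per_packet = packet_bits / target_bps
    buckets_per_packet = seconds_per_packet / (quantum_us * 1e-6)
    # The packet itself occupies one bucket slot, so skip the rest.
    return max(0, round(buckets_per_packet) - 1)

# A 1460-byte packet paced to 4 Mbit/s with 1 us buckets needs wide spacing;
# at 4 Gbit/s the same packet needs almost none.
skip_slow = buckets_to_skip(1460 * 8, 4_000_000, 1)
skip_fast = buckets_to_skip(1460 * 8, 4_000_000_000, 1)
```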
Once a bucket 300 is populated by the software component, the software component releases the bucket to the hardware component, such as the network interface 204. In particular, the bucket contents are written into the DMA ring 218 of the network interface 204 en masse. In an aspect, the hardware component can determine whether the size of a particular data packet in a bucket is within a threshold size such that the data packet can be processed within the allotted time quantum.
A gross timer is applied by the hardware component in writing the packets into the DMA ring 218, wherein the poll loop time of the processor 200 of the network traffic management device 110 is used as the gross timer. The sum of the released buckets' time quanta must exceed the poll loop time.
The fence 302 marking the end of a particular bucket 300 serves as a timing boundary that allows the hardware component to do precise bucket-to-bucket timing.
FIG. 5 illustrates a block diagram of hardware-based time enforcement performed by a high speed bridge (HSB) priority mechanism in accordance with an aspect of the present disclosure. As shown in FIG. 5, data packets that are written to buckets released to the hardware are received in a pacing send ring 500. In contrast, non-paced data (data not written to buckets) is received in a bulk send ring 502.
In an aspect, the pacing send ring 500 is given higher priority in terms of being sent to the ring arbitrator 506. Thus, the bulk traffic in the bulk send ring 502 advances to the arbitrator 506 only when the paced traffic (from the pacing send ring 500) is blocked or absent. A pacing timer 504 is coupled to the pacing send ring 500 and the arbitrator 506, wherein the pacing timer is reset at the start of a new bucket 300. The pacing timer 504 also blocks traffic at bucket boundaries until the bucket quantum time expires.
FIG. 6 illustrates an example transmit time line in accordance with an aspect of the present disclosure. As shown in the time line 600, the hardware component consumes the data packets in the first bucket 602 for the entire time quantum, which ends at boundary A. In the second bucket 604, the hardware component finishes writing the paced data packets before the allotted time expires; it thereafter processes bulk data for the remaining amount of time in the quantum until reaching boundary B. The same occurs in bucket 606. In bucket 608, there is a period of time in which no data is written because neither paced nor bulk data is available for writing to the DMA engine.
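The strict-priority behavior of FIGS. 5 and 6 can be modeled as a single-quantum drain: paced descriptors go first, bulk backfills any leftover time, and any remainder simply expires idle at the bucket boundary. The per-descriptor time cost is a simplifying assumption standing in for actual wire time.

```python
def drain_quantum(paced, bulk, quantum_us, cost_us=1):
    """One bucket quantum of the arbitration: the pacing send ring is
    strictly prioritized; the bulk send ring advances only once the
    pacing ring is empty, and nothing crosses the bucket boundary."""
    sent, remaining = [], quantum_us
    while remaining >= cost_us and paced:
        sent.append(paced.pop(0)); remaining -= cost_us   # paced traffic first
    while remaining >= cost_us and bulk:
        sent.append(bulk.pop(0)); remaining -= cost_us    # backfill with bulk
    return sent  # leftover time, if any, expires idle

# Two paced descriptors fit in a 4 us quantum; bulk fills the last 2 us.
out = drain_quantum(paced=["p1", "p2"], bulk=["b1", "b2", "b3"], quantum_us=4)
```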
FIG. 7 illustrates a flow chart of the software implementation of populating buckets with data packet descriptors in accordance with an aspect of the present disclosure. As shown in FIG. 7, the process 700 begins when a first bucket of a plurality of buckets is selected by the application module 210 of the network traffic management device (Block 702). The application module 210 thereafter populates the first bucket with one or more data packet descriptors associated with data packets to be released to the network interface 204 (Block 704). The application module 210 thereafter determines whether there are additional paced or bulk data packets that can be added to the currently selected bucket (Block 706). If so, the process repeats back to Block 704. If not, the process proceeds to Block 708.
Once the application module 210 has populated the bucket with the last data packet, the application module 210 applies a fence which marks the last data packet for that bucket (Block 708). The application module 210 thereafter determines whether the selected bucket is the last bucket in the time calendar (Block 710). If so, the application module 210 effectively wraps around the time calendar and selects the first bucket (Block 702). If not, the application module 210 selects the next bucket in the time calendar (Block 712), and the process proceeds back to Block 704.
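The FIG. 7 flow can be sketched as a fill loop over a wrap-around calendar. The list-of-lists calendar and the per-bucket descriptor source are illustrative assumptions for the sketch.

```python
def populate_calendar(calendar, descriptor_batches):
    """Fill buckets in order, fence each, and wrap from the last bucket
    back to the first (Blocks 702-712)."""
    i = 0
    for descriptors in descriptor_batches:
        bucket = calendar[i]
        bucket.extend(descriptors)       # Block 704: populate descriptors
        bucket.append("FENCE")           # Block 708: mark the last descriptor
        i = (i + 1) % len(calendar)      # Blocks 710/712: next bucket or wrap
    return calendar

# Four batches over a three-bucket calendar: the fourth wraps into bucket 0.
cal = populate_calendar([[], [], []], [["d1"], ["d2"], ["d3"], ["d4"]])
```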
FIG. 8 illustrates a flow chart of the hardware implementation of processing the populated buckets in accordance with an aspect of the present disclosure. As shown in FIG. 8, the process 800 begins when a first bucket of a plurality of buckets is selected by the network interface 204 of the network traffic management device (Block 802). The network interface 204 thereafter processes the paced data packet(s) for the selected bucket (Block 804). The network interface 204 thereafter determines whether additional time is available for the selected bucket (Block 806). If so, the network interface 204 determines whether bulk data packets are present for the bucket (Block 808). If so, the network interface 204 processes the bulk data for the bucket (Block 810). The process then repeats back to Block 806.
If no additional allotted time is left, the network interface 204 selects the next bucket (Block 812) and the process proceeds back to Block 804. Referring back to Block 808, if no bulk data is available for processing for a particular bucket, the network interface 204 does not write any data packets and allows the remaining time for the bucket quantum to expire (Block 814). The process then proceeds to Block 812.
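The FIG. 8 flow can be sketched as a drain loop over the released buckets, under a simplifying assumption that each descriptor consumes a fixed slice of the quantum; real hardware would instead measure actual transmit time against the pacing timer.

```python
def process_buckets(buckets, bulk_queue, quantum_us, cost_us=1):
    """For each bucket: write its paced descriptors (Block 804), spend any
    leftover quantum on bulk descriptors (Blocks 806-810), and let the
    remainder expire idle if nothing is available (Block 814)."""
    written = []
    for paced in buckets:
        remaining = quantum_us
        for d in paced:                                   # paced first
            written.append(d); remaining -= cost_us
        while remaining >= cost_us and bulk_queue:        # backfill with bulk
            written.append(bulk_queue.pop(0)); remaining -= cost_us
    return written

# Bucket 1 holds one paced descriptor; bucket 2 is empty, so its whole
# quantum goes to bulk traffic.
log = process_buckets([["p1"], []], ["b1", "b2", "b3"], quantum_us=2)
```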
Having thus described the basic concepts, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications will occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby and are within the spirit and scope of the examples. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed system and/or processes to any order except as may be specified in the claims. Accordingly, the system and method are limited only by the following claims and equivalents thereto.

Claims (18)

What is claimed is:
1. A method for transmitting data packets at an optimized rate, the method comprising:
populating, by the network traffic management computing device, a plurality of buckets with one or more selected data packet descriptors associated with one or more corresponding ones of a subset of a plurality of data packets to be transmitted as paced and another subset of the data packets to be transmitted as bulk;
releasing, by the network traffic management computing device, one of the buckets to a hardware component comprising a pacing send ring and a bulk send ring, the releasing comprising writing one or more of the subset of the data packets to the pacing send ring and one or more of the another subset of the data packets to the bulk send ring; and
transmitting, by the network traffic management computing device, the one or more of the subset of the data packets from the pacing send ring and the one or more of the another subset of the data packets from the bulk send ring for a predetermined transmit time, wherein the subset of the data packets are strictly prioritized over the another subset of the data packets and the one or more of the another subset of the data packets are only transmitted within the predetermined transmit time when the pacing send ring is emptied during the predetermined transmit time.
2. The method as set forth in claim 1, further comprising repeating, by the network traffic management computing device, the releasing and transmitting for each other of the buckets.
3. The method as set forth in claim 1, wherein the one or more of the subset of the data packets from the pacing send ring and the one or more of the another subset of the data packets from the bulk send ring are transmitted by a direct memory access (DMA) transmit engine.
4. The method as set forth in claim 1, wherein the subset of the data packets to be transmitted is identified as paced and the another subset of the data packets is identified as bulk based on a data packet size, an associated type of traffic, or a quality of service (QoS) parameter.
5. The method as set forth in claim 1, further comprising:
determining, by the network traffic management computing device, when another one of the buckets should be skipped based on a type of application from which the ones of the data packets associated with the selected data packet descriptors populated therein originated; and
skipping, by the network traffic management computing device, the another one of the buckets such that the releasing and transmitting are not repeated for the another one of the buckets, when the determining indicates that the another one of the buckets should be skipped.
6. The method as set forth in claim 1, further comprising waiting, by the network traffic management computing device, to release another one of the buckets when the one or more of the subset of the data packets are transmitted from the pacing send ring and the one or more of the another subset of the data packets are transmitted from the bulk send ring prior to the expiration of the predetermined transmit time.
7. A non-transitory computer readable medium having stored thereon instructions for transmitting data packets at an optimized rate, comprising executable code which when executed by at least one processor and/or network interface causes the processor and/or network interface to perform steps comprising:
populating a plurality of buckets with one or more selected data packet descriptors associated with one or more corresponding ones of a subset of a plurality of data packets to be transmitted as paced and another subset of the data packets to be transmitted as bulk;
releasing one of the buckets to a hardware component comprising a pacing send ring and a bulk send ring, the releasing comprising writing one or more of the subset of the data packets to the pacing send ring and one or more of the another subset of the data packets to the bulk send ring; and
transmitting the one or more of the subset of the data packets from the pacing send ring and the one or more of the another subset of the data packets from the bulk send ring for a predetermined transmit time, wherein the subset of the data packets are strictly prioritized over the another subset of the data packets and the one or more of the another subset of the data packets are only transmitted within the predetermined transmit time when the pacing send ring is emptied during the predetermined transmit time.
8. The non-transitory computer readable medium as set forth in claim 7, wherein the executable code when executed by the processor and/or the network interface further causes the processor and/or the network interface to perform at least one additional step comprising repeating the releasing and transmitting for each other of the buckets.
9. The non-transitory computer readable medium as set forth in claim 7, wherein the one or more of the subset of the data packets from the pacing send ring and the one or more of the another subset of the data packets from the bulk send ring are transmitted by a direct memory access (DMA) transmit engine.
10. The non-transitory computer readable medium as set forth in claim 7, wherein the subset of the data packets to be transmitted is identified as paced and the another subset of the data packets is identified as bulk based on a data packet size, an associated type of traffic, or a quality of service (QoS) parameter.
11. The non-transitory computer readable medium as set forth in claim 7, wherein the executable code when executed by the processor and/or the network interface further causes the processor and/or the network interface to perform at least one additional step comprising:
determining when another one of the buckets should be skipped based on a type of application from which the ones of the data packets associated with the selected data packet descriptors populated therein originated; and
skipping the another one of the buckets such that the releasing and transmitting are not repeated for the another one of the buckets, when the determining indicates that the another one of the buckets should be skipped.
12. The non-transitory computer readable medium as set forth in claim 7, wherein the executable code when executed by the processor and/or the network interface further causes the processor and/or the network interface to perform at least one additional step comprising waiting to release another one of the buckets when the one or more of the subset of the data packets are transmitted from the pacing send ring and the one or more of the another subset of the data packets are transmitted from the bulk send ring prior to the expiration of the predetermined transmit time.
13. A network traffic management computing device comprising at least one processor and/or network interface and a memory coupled to the processor and/or network interface which is configured to be capable of executing programmed instructions comprising and stored in the memory to:
populate a plurality of buckets with one or more selected data packet descriptors associated with one or more corresponding ones of a subset of a plurality of data packets to be transmitted as paced and another subset of the data packets to be transmitted as bulk;
release one of the buckets to a hardware component comprising a pacing send ring and a bulk send ring, the releasing comprising writing one or more of the subset of the data packets to the pacing send ring and one or more of the another subset of the data packets to the bulk send ring; and
transmit the one or more of the subset of the data packets from the pacing send ring and the one or more of the another subset of the data packets from the bulk send ring for a predetermined transmit time, wherein the subset of the data packets are strictly prioritized over the another subset of the data packets and the one or more of the another subset of the data packets are only transmitted within the predetermined transmit time when the pacing send ring is emptied during the predetermined transmit time.
14. The network traffic management computing device as set forth in claim 13, wherein the processor and/or network interface coupled to the memory is further configured to be capable of executing at least one additional programmed instruction comprising and stored in the memory to repeat the releasing and transmitting for each other of the buckets.
15. The network traffic management computing device as set forth in claim 13, wherein the one or more of the subset of the data packets from the pacing send ring and the one or more of the another subset of the data packets from the bulk send ring are transmitted by a direct memory access (DMA) transmit engine.
16. The network traffic management computing device as set forth in claim 13, wherein the subset of the data packets to be transmitted is identified as paced and the another subset of the data packets is identified as bulk based on a data packet size, an associated type of traffic, or a quality of service (QoS) parameter.
17. The network traffic management computing device as set forth in claim 13, wherein the processor and/or network interface coupled to the memory is further configured to be capable of executing at least one additional programmed instruction comprising and stored in the memory to:
determine when another one of the buckets should be skipped based on a type of application from which the ones of the data packets associated with the selected data packet descriptors populated therein originated; and
skip the another one of the buckets such that the releasing and transmitting are not repeated for the another one of the buckets, when the determining indicates that the another one of the buckets should be skipped.
18. The network traffic management computing device as set forth in claim 13, wherein the processor and/or network interface coupled to the memory is further configured to be capable of executing at least one additional programmed instruction comprising and stored in the memory to wait to release another one of the buckets when the one or more of the subset of the data packets are transmitted from the pacing send ring and the one or more of the another subset of the data packets are transmitted from the bulk send ring prior to the expiration of the predetermined transmit time.
US13/732,337 | Priority date 2012-12-31 | Filing date 2012-12-31 | Transmit rate pacing of large network traffic bursts to reduce jitter, buffer overrun, wasted bandwidth, and retransmissions | Active, expires 2033-09-15 | US9270602B1 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US13/732,337 | 2012-12-31 | 2012-12-31 | Transmit rate pacing of large network traffic bursts to reduce jitter, buffer overrun, wasted bandwidth, and retransmissions


Publications (1)

Publication Number | Publication Date
US9270602B1 | 2016-02-23

Family

ID=55314794


Cited By (4)

Publication number | Priority date | Publication date | Assignee | Title
US20170104697A1 (en) | 2015-10-12 | 2017-04-13 | Mellanox Technologies Ltd. | Dynamic Optimization for IP Forwarding Performance
US20170269853A1 (en) | 2015-06-30 | 2017-09-21 | International Business Machines Corporation | Statistic-based isolation of lethargic drives
US10958248B1 (en) | 2020-05-27 | 2021-03-23 | International Business Machines Corporation | Jitter attenuation buffer structure
US11855898B1 (en) | 2018-03-14 | 2023-12-26 | F5, Inc. | Methods for traffic dependent direct memory access optimization and devices thereof

US8006016B2 (en)2005-04-042011-08-23Oracle America, Inc.Hiding system latencies in a throughput networking systems
US20110228781A1 (en)*2010-03-162011-09-22Erez IzenbergCombined Hardware/Software Forwarding Mechanism and Method
US8103809B1 (en)2009-01-162012-01-24F5 Networks, Inc.Network devices with multiple direct memory access channels and methods thereof
US8112594B2 (en)2007-04-202012-02-07The Regents Of The University Of ColoradoEfficient point-to-point enqueue and dequeue communications
US8112491B1 (en)2009-01-162012-02-07F5 Networks, Inc.Methods and systems for providing direct DMA
US8306036B1 (en)2008-06-202012-11-06F5 Networks, Inc.Methods and systems for hierarchical resource allocation through bookmark allocation
US8447884B1 (en)2008-12-012013-05-21F5 Networks, Inc.Methods for mapping virtual addresses to physical addresses in a network device and systems thereof
US20130250777A1 (en)*2012-03-262013-09-26Michael L. ZieglerPacket descriptor trace indicators
US8880696B1 (en)2009-01-162014-11-04F5 Networks, Inc.Methods for sharing bandwidth across a packetized bus and systems thereof
US8880632B1 (en)2009-01-162014-11-04F5 Networks, Inc.Method and apparatus for performing multiple DMA channel based network quality of service

Patent Citations (114)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US4914650A (en) * | 1988-12-06 | 1990-04-03 | American Telephone And Telegraph Company | Bandwidth allocation and congestion control scheme for an integrated voice and data network
US5477541A (en) * | 1989-09-29 | 1995-12-19 | White; Richard E. | Addressing technique for storing and referencing packet data
US5388237A (en) | 1991-12-30 | 1995-02-07 | Sun Microsystems, Inc. | Method of and apparatus for interleaving multiple-channel DMA operations
US6026443A (en) | 1992-12-22 | 2000-02-15 | Sun Microsystems, Inc. | Multi-virtual DMA channels, multi-bandwidth groups, host based cellification and reassembly, and asynchronous transfer mode network interface
US5828835A (en) | 1995-05-10 | 1998-10-27 | 3Com Corporation | High throughput message passing process using latency and reliability classes
US5699361A (en) * | 1995-07-18 | 1997-12-16 | Industrial Technology Research Institute | Multimedia channel formulation mechanism
US6115802A (en) | 1995-10-13 | 2000-09-05 | Sun Microsystems, Inc. | Efficient hash table for use in multi-threaded environments
US5761534A (en) | 1996-05-20 | 1998-06-02 | Cray Research, Inc. | System for arbitrating packetized data from the network to the peripheral resources and prioritizing the dispatching of packets onto the network
US6070219A (en) * | 1996-10-09 | 2000-05-30 | Intel Corporation | Hierarchical interrupt structure for event notification on multi-virtual circuit network interface controller
US5941988A (en) | 1997-01-27 | 1999-08-24 | International Business Machines Corporation | Session and transport layer proxies via TCP glue
US6026090A (en) * | 1997-11-14 | 2000-02-15 | Fore System, Inc. | Method and system for receiving ATM cells from an ATM network by a host
US6388989B1 (en) * | 1998-06-29 | 2002-05-14 | Cisco Technology | Method and apparatus for preventing memory overrun in a data transmission system
US20040202161A1 (en) | 1998-08-28 | 2004-10-14 | Stachura Thomas L. | Method and apparatus for transmitting and receiving network protocol compliant signal packets over a platform bus
US7558910B2 (en) | 1998-11-13 | 2009-07-07 | Cray Inc. | Detecting access to a memory location in a multithreaded environment
US6347337B1 (en) * | 1999-01-08 | 2002-02-12 | Intel Corporation | Credit based flow control scheme over virtual interface architecture for system area networks
US6529508B1 (en) | 1999-02-01 | 2003-03-04 | Redback Networks Inc. | Methods and apparatus for packet classification with multiple answer sets
US7784093B2 (en) | 1999-04-01 | 2010-08-24 | Juniper Networks, Inc. | Firewall including local bus
US6700871B1 (en) | 1999-05-04 | 2004-03-02 | 3Com Corporation | Increased throughput across data network interface by dropping redundant packets
US6574220B1 (en) * | 1999-07-06 | 2003-06-03 | Avaya Technology Corp. | Traffic shaper that accommodates maintenance cells without causing jitter or delay
US7281030B1 (en) | 1999-09-17 | 2007-10-09 | Intel Corporation | Method of reading a remote memory
US6748457B2 (en) | 2000-02-03 | 2004-06-08 | Realtime Data, Llc | Data storewidth accelerator
US7376772B2 (en) | 2000-02-03 | 2008-05-20 | Realtime Data Llc | Data storewidth accelerator
US6820133B1 (en) | 2000-02-07 | 2004-11-16 | Netli, Inc. | System and method for high-performance delivery of web content using high-performance communications protocol between the first and second specialized intermediate nodes to optimize a measure of communications performance between the source and the destination
US20010038629A1 (en) * | 2000-03-29 | 2001-11-08 | Masayuki Shinohara | Arbiter circuit and method of carrying out arbitration
US6785236B1 (en) * | 2000-05-28 | 2004-08-31 | Lucent Technologies Inc. | Packet transmission scheduling with threshold based backpressure mechanism
US20050175014A1 (en) | 2000-09-25 | 2005-08-11 | Patrick Michael W. | Hierarchical prioritized round robin (HPRR) scheduling
US7475122B2 (en) | 2000-10-04 | 2009-01-06 | Jean-Patrick Azpitarte | System for remotely managing maintenance of a set of facilities
US7236491B2 (en) | 2000-11-30 | 2007-06-26 | Industrial Technology Research Institute | Method and apparatus for scheduling for packet-switched networks
US20020156927A1 (en) | 2000-12-26 | 2002-10-24 | Alacritech, Inc. | TCP/IP offload network interface device
US7107348B2 (en) | 2001-03-27 | 2006-09-12 | Fujitsu Limited | Packet relay processing apparatus
US7929433B2 (en) * | 2001-04-13 | 2011-04-19 | Freescale Semiconductor, Inc. | Manipulating data streams in data stream processors
US20090154459A1 (en) * | 2001-04-13 | 2009-06-18 | Freescale Semiconductor, Inc. | Manipulating data streams in data stream processors
US7164678B2 (en) * | 2001-06-25 | 2007-01-16 | Intel Corporation | Control of processing order for received network packets
US20030204636A1 (en) | 2001-07-02 | 2003-10-30 | Globespanvirata Incorporated | Communications system using rings architecture
US7046628B2 (en) * | 2001-09-24 | 2006-05-16 | Intel Corporation | Apparatus and method for just-in-time transfer of transmit commands to a network interface
US20030067930A1 (en) | 2001-10-05 | 2003-04-10 | International Business Machines Corporation | Packet preprocessing interface for multiprocessor network handler
US20050226234A1 (en) * | 2001-11-20 | 2005-10-13 | Sano Barton J | System having interfaces and switch that separates coherent and packet traffic
US6781990B1 (en) | 2002-02-11 | 2004-08-24 | Extreme Networks | Method and system for managing traffic in a packet network environment
US20040062245A1 (en) * | 2002-04-22 | 2004-04-01 | Sharp Colin C. | TCP/IP offload device
US7327674B2 (en) * | 2002-06-11 | 2008-02-05 | Sun Microsystems, Inc. | Prefetching techniques for network interfaces
US20050022623A1 (en) | 2002-06-18 | 2005-02-03 | Carl Reiche | Steering shaft for motor vehicles
US7649882B2 (en) * | 2002-07-15 | 2010-01-19 | Alcatel-Lucent Usa Inc. | Multicast scheduling and replication in switches
US6934776B2 (en) * | 2002-07-16 | 2005-08-23 | Intel Corporation | Methods and apparatus for determination of packet sizes when transferring packets via a network
US7142540B2 (en) | 2002-07-18 | 2006-11-28 | Sun Microsystems, Inc. | Method and apparatus for zero-copy receive buffer management
US7403542B1 (en) | 2002-07-19 | 2008-07-22 | Qlogic, Corporation | Method and system for processing network data packets
US7124196B2 (en) * | 2002-08-07 | 2006-10-17 | Intel Corporation | Processing a network packet using queues
US7355977B1 (en) | 2002-08-16 | 2008-04-08 | F5 Networks, Inc. | Method and system for a weighted allocation table
US20040032830A1 (en) * | 2002-08-19 | 2004-02-19 | Bly Keith Michael | System and method for shaping traffic from a plurality of data streams using hierarchical queuing
US20040249948A1 (en) | 2003-03-07 | 2004-12-09 | Sethi Bhupinder S. | Performing application layer transactions during the connection establishment phase of connection-oriented protocols
US7500028B2 (en) | 2003-03-20 | 2009-03-03 | Panasonic Corporation | DMA controller providing for ring buffer and rectangular block transfers
US7734809B2 (en) | 2003-06-05 | 2010-06-08 | Meshnetworks, Inc. | System and method to maximize channel utilization in a multi-channel wireless communication network
US20040249881A1 (en) | 2003-06-05 | 2004-12-09 | Jha Ashutosh K. | Transmitting commands and information between a TCP/IP stack and an offload unit
US7420931B2 (en) | 2003-06-05 | 2008-09-02 | Nvidia Corporation | Using TCP/IP offload to accelerate packet filtering
US7349405B2 (en) * | 2003-06-23 | 2008-03-25 | Transwitch Corporation | Method and apparatus for fair queueing of data packets
US20040267897A1 (en) | 2003-06-24 | 2004-12-30 | Sychron Inc. | Distributed System Providing Scalable Methodology for Real-Time Control of Server Pools and Data Centers
US20050007991A1 (en) | 2003-07-10 | 2005-01-13 | Dat Ton | Bandwidth allocation method and apparatus for fixed wireless networks
US7065630B1 (en) | 2003-08-27 | 2006-06-20 | Nvidia Corporation | Dynamically creating or removing a physical-to-virtual address mapping in a memory of a peripheral device
US20050083952A1 (en) | 2003-10-15 | 2005-04-21 | Texas Instruments Incorporated | Flexible ethernet bridge
US20050091390A1 (en) * | 2003-10-24 | 2005-04-28 | International Business Machines Corporation | Speculative method and system for rapid data communications
US20050114559A1 (en) | 2003-11-20 | 2005-05-26 | Miller George B. | Method for efficiently processing DMA transactions
US20050141427A1 (en) * | 2003-12-30 | 2005-06-30 | Bartky Alan K. | Hierarchical flow-characterizing multiplexor
US20090222598A1 (en) | 2004-02-25 | 2009-09-03 | Analog Devices, Inc. | Dma controller for digital signal processors
US7647416B2 (en) | 2004-03-02 | 2010-01-12 | Industrial Technology Research Institute | Full hardware based TCP/IP traffic offload engine(TOE) device and the method thereof
US20090279559A1 (en) * | 2004-03-26 | 2009-11-12 | Foundry Networks, Inc., A Delaware Corporation | Method and apparatus for aggregating input data streams
US20050213570A1 (en) | 2004-03-26 | 2005-09-29 | Stacy John K | Hardware filtering support for denial-of-service attacks
US7512721B1 (en) | 2004-05-25 | 2009-03-31 | Qlogic, Corporation | Method and apparatus for efficient determination of status from DMA lists
US7478186B1 (en) | 2004-06-03 | 2009-01-13 | Integrated Device Technology, Inc. | Interrupt coalescer for DMA channel
US20060007928A1 (en) | 2004-07-08 | 2006-01-12 | Michael Sangillo | Flexible traffic rating interworking
US7742412B1 (en) | 2004-09-29 | 2010-06-22 | Marvell Israel (M.I.S.L.) Ltd. | Method and apparatus for preventing head of line blocking in an ethernet system
EP1813084A1 (en) | 2004-11-16 | 2007-08-01 | Intel Corporation | Packet coalescing
WO2006055494A1 (en) | 2004-11-16 | 2006-05-26 | Intel Corporation | Packet coalescing
US20060104303A1 (en) | 2004-11-16 | 2006-05-18 | Srihari Makineni | Packet coalescing
US20100094945A1 (en) | 2004-11-23 | 2010-04-15 | Cisco Technology, Inc. | Caching content and state data at a network element
US7324525B2 (en) | 2004-12-09 | 2008-01-29 | International Business Machines Corporation | Method and apparatus for coalescing acknowledge packets within a server
US7729239B1 (en) | 2004-12-27 | 2010-06-01 | Emc Corporation | Packet switching network end point controller
US20060221835A1 (en) | 2005-03-30 | 2006-10-05 | Cisco Technology, Inc. | Converting a network device from data rate traffic management to packet rate
US20060224820A1 (en) * | 2005-04-01 | 2006-10-05 | Hyun-Duk Cho | Flash memory device supporting cache read operation
US20060221832A1 (en) | 2005-04-04 | 2006-10-05 | Sun Microsystems, Inc. | Virtualized partitionable shared network interface
US8006016B2 (en) | 2005-04-04 | 2011-08-23 | Oracle America, Inc. | Hiding system latencies in a throughput networking systems
US20060235996A1 (en) | 2005-04-19 | 2006-10-19 | Alcatel | Method for operating a packet based data network
US7668727B2 (en) | 2005-04-29 | 2010-02-23 | Kimberly-Clark Worldwide, Inc. | System and method for building loads from requisitions
US7826487B1 (en) | 2005-05-09 | 2010-11-02 | F5 Network, Inc | Coalescing acknowledgement responses to improve network communications
US20060288128A1 (en) | 2005-06-16 | 2006-12-21 | Agere Systems Inc. | Emulation of independent active DMA channels with a single DMA capable bus master hardware and firmware
US7496695B2 (en) | 2005-09-29 | 2009-02-24 | P.A. Semi, Inc. | Unified DMA
US7735099B1 (en) | 2005-12-23 | 2010-06-08 | Qlogic, Corporation | Method and system for processing network data
US20070162619A1 (en) * | 2006-01-12 | 2007-07-12 | Eliezer Aloni | Method and System for Zero Copy in a Virtualized Network Environment
US7571299B2 (en) | 2006-02-16 | 2009-08-04 | International Business Machines Corporation | Methods and arrangements for inserting values in hash tables
US20080126509A1 (en) | 2006-11-06 | 2008-05-29 | Viswanath Subramanian | Rdma qp simplex switchless connection
US7533197B2 (en) | 2006-11-08 | 2009-05-12 | Sicortex, Inc. | System and method for remote direct memory access without page locking by the operating system
US7668851B2 (en) | 2006-11-29 | 2010-02-23 | International Business Machines Corporation | Lockless hash table lookups while performing key update on hash table element
US7657659B1 (en) | 2006-11-30 | 2010-02-02 | Vmware, Inc. | Partial copying of data to transmit buffer for virtual network device
US20080184248A1 (en) | 2007-01-29 | 2008-07-31 | Yahoo! Inc. | Optimization of job scheduling for resource clusters with access control and usage reporting
US20080201772A1 (en) * | 2007-02-15 | 2008-08-21 | Maxim Mondaeev | Method and Apparatus for Deep Packet Inspection for Network Intrusion Detection
US20080219279A1 (en) | 2007-03-06 | 2008-09-11 | Yen Hsiang Chew | Scalable and configurable queue management for network packet traffic quality of service
US8279865B2 (en) | 2007-04-20 | 2012-10-02 | John Giacomoni | Efficient pipeline parallelism using frame shared memory
US8112594B2 (en) | 2007-04-20 | 2012-02-07 | The Regents Of The University Of Colorado | Efficient point-to-point enqueue and dequeue communications
US20090003204A1 (en) | 2007-06-29 | 2009-01-01 | Packeteer, Inc. | Lockless Bandwidth Management for Multiprocessor Networking Devices
US20090016217A1 (en) | 2007-07-13 | 2009-01-15 | International Business Machines Corporation | Enhancement of end-to-end network qos
US20090089619A1 (en) | 2007-09-27 | 2009-04-02 | Kung-Shiuh Huang | Automatic detection of functional defects and performance bottlenecks in network devices
US7916728B1 (en) | 2007-09-28 | 2011-03-29 | F5 Networks, Inc. | Lockless atomic table update
US7877524B1 (en) | 2007-11-23 | 2011-01-25 | Pmc-Sierra Us, Inc. | Logical address direct memory access with multiple concurrent physical ports and internal switching
US20090248911A1 (en) | 2008-03-27 | 2009-10-01 | Apple Inc. | Clock control for dma busses
US8306036B1 (en) | 2008-06-20 | 2012-11-06 | F5 Networks, Inc. | Methods and systems for hierarchical resource allocation through bookmark allocation
US7975025B1 (en) | 2008-07-08 | 2011-07-05 | F5 Networks, Inc. | Smart prefetching of data over a network
US20100082849A1 (en) | 2008-09-30 | 2010-04-01 | Apple Inc. | Data filtering using central DMA mechanism
US20100085875A1 (en) * | 2008-10-08 | 2010-04-08 | Richard Solomon | Methods and apparatuses for processing packets in a credit-based flow control scheme
US8447884B1 (en) | 2008-12-01 | 2013-05-21 | F5 Networks, Inc. | Methods for mapping virtual addresses to physical addresses in a network device and systems thereof
US8103809B1 (en) | 2009-01-16 | 2012-01-24 | F5 Networks, Inc. | Network devices with multiple direct memory access channels and methods thereof
US8112491B1 (en) | 2009-01-16 | 2012-02-07 | F5 Networks, Inc. | Methods and systems for providing direct DMA
US8346993B2 (en) | 2009-01-16 | 2013-01-01 | F5 Networks, Inc. | Network devices with multiple direct memory access channels and methods thereof
US8880696B1 (en) | 2009-01-16 | 2014-11-04 | F5 Networks, Inc. | Methods for sharing bandwidth across a packetized bus and systems thereof
US8880632B1 (en) | 2009-01-16 | 2014-11-04 | F5 Networks, Inc. | Method and apparatus for performing multiple DMA channel based network quality of service
US20110228781A1 (en) * | 2010-03-16 | 2011-09-22 | Erez Izenberg | Combined Hardware/Software Forwarding Mechanism and Method
US20130250777A1 (en) * | 2012-03-26 | 2013-09-26 | Michael L. Ziegler | Packet descriptor trace indicators

Non-Patent Citations (23)

* Cited by examiner, † Cited by third party
Title
"Chapter 15, Memory Mapping and DMA," Memory Management in Linux, ch15.13676, accessed on Jan. 25, 2005, pp. 412-463.
"Plan 9 kernel history: overview / file list / diff list," <http://switch.com/cgi-bin/plan9history.cgi?f=2001/0126/pc/etherga620.com>, accessed Oct. 22, 2007, pp. 1-16.
Alteon Websystems Inc., "Gigabit Ethernet/PCI Network Interface Card; Host/NIC Software Interface Definition," Jul. 1999, pp. 1-80, Revision 12.4.13, P/N 020001, San Jose, California.
Cavium Networks, "Cavium Networks Product Selector Guide-Single & Multi-Core MIPS Processors, Security Processors and Accelerator Boards," 2008, pp. 1-44, Mountain View, CA, US.
Cavium Networks, "NITROX™ XL Security Acceleration Modules PCI 3V or 3V/5V-Universal Boards for SSL and IPSec," at http://www.Caviumnetworks.com, 2002, pp. 1, Mountain View, CA USA.
Cavium Networks, "PCI, PCI-X," at http://www.cavium.com/acceleration-boards-PCI-PCI-X.htm (downloaded Oct. 2008), Cavium Networks-Products > Acceleration Boards > PCI, PCI-X.
Comtech AHA Corporation, "Comtech AHA Announces 3.0 Gbps GZIP Compression/Decompression Accelerator AHA362-PCIX offers high-speed GZIP compression and decompression," www.aha.com, Apr. 20, 2005, pp. 1-2, Moscow, ID, USA.
Comtech AHA Corporation, "Comtech AHA Announces GZIP Compression and Decompression IC Offers the highest speed and compression ratio performance in hardware on the market," www.aha.com, Jun. 26, 2007, pp. 1-2, Moscow, ID, USA.
EventHelix, "DMA and Interrupt Handling," <http://www.eventhelix.com/RealtimeMantra/FaultHandling/dma-interrupt-handling.htm>, Jan. 29, 2010, pp. 1-4, EventHelix.com.
EventHelix, "TCP-Transmission Control Protocol (TCP Fast Retransmit and Recovery)," Mar. 28, 2002, pp. 1-5, EventHelix.com.
Harvey et al., "DMA Fundamentals on Various PC Platforms," Application Note 011, Apr. 1999, pp. 1-20, National Instruments Corporation.
Mangino, John, "Using DMA with High Performance Peripherals to Maximize System Performance," WW TMS470 Catalog Applications, SPNA105 Jan. 2007, pp. 1-23.
Mogul, Jeffrey C., "The Case for Persistent-Connection HTTP," SIGCOMM '95, Digital Equipment Corporation Western Research Laboratory, 1995, pp. 1-15, Cambridge, Maine.
Rabinovich et al., "DHTTP: An Efficient and Cache-Friendly Transfer Protocol for the Web," IEEE/ACM Transactions on Networking, Dec. 2004, pp. 1007-1020, vol. 12, No. 6.
Salchow, Jr., KJ, "Clustered Multiprocessing: Changing the Rules of the Performance Game," F5 White Paper, Jan. 2008, pp. 1-11, F5 Networks, Inc.
Stevens, W., "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms," Network Working Group, RFC 2001, Jan. 1997, pp. 1-6.
Wadge, Wallace, "Achieving Gigabit Performance on Programmable Ethernet Network Interface Cards," May 29, 2001, pp. 1-9.
Welch, Von, "A User's Guide to TCP Windows," http://www.vonwelch.com/report/tcp-windows, updated 1996, last accessed Jan. 29, 2010, pp. 1-5.
Wikipedia, "Direct memory access," <http://en.wikipedia.org/wiki/Direct-memory-access>, accessed Jan. 29, 2010, pp. 1-6.

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20170269853A1 (en) * | 2015-06-30 | 2017-09-21 | International Business Machines Corporation | Statistic-based isolation of lethargic drives
US10162535B2 (en) * | 2015-06-30 | 2018-12-25 | International Business Machines Corporation | Statistic-based isolation of lethargic drives
US20170104697A1 (en) * | 2015-10-12 | 2017-04-13 | Mellanox Technologies Ltd. | Dynamic Optimization for IP Forwarding Performance
US10284502B2 (en) * | 2015-10-12 | 2019-05-07 | Mellanox Technologies, Ltd. | Dynamic optimization for IP forwarding performance
US11855898B1 (en) | 2018-03-14 | 2023-12-26 | F5, Inc. | Methods for traffic dependent direct memory access optimization and devices thereof
US10958248B1 (en) | 2020-05-27 | 2021-03-23 | International Business Machines Corporation | Jitter attenuation buffer structure

Similar Documents

Publication | Publication Date | Title
CN113728599B (en) | Method for injecting packets into an output buffer in a network interface controller NIC and NIC
CN109768939B (en) | A method and system for labeling a network stack supporting priority
US10382362B2 (en) | Network server having hardware-based virtual router integrated circuit for virtual networking
CN109104373B (en) | Method, device and system for processing network congestion
US20190044879A1 (en) | Technologies for reordering network packets on egress
US8880632B1 (en) | Method and apparatus for performing multiple DMA channel based network quality of service
US20150278148A1 (en) | PCIe-BASED HOST NETWORK ACCELERATORS (HNAS) FOR DATA CENTER OVERLAY NETWORK
CN104885420B (en) | Method, system and medium for managing multiple packets
US10536385B2 (en) | Output rates for virtual output queses
US20200252337A1 (en) | Data transmission method, device, and computer storage medium
US9270602B1 (en) | Transmit rate pacing of large network traffic bursts to reduce jitter, buffer overrun, wasted bandwidth, and retransmissions
CN104954252A (en) | Flow-control within a high-performance, scalable and drop-free data center switch fabric
KR20140004743A (en) | An apparatus and a method for receiving and forwarding data packets
US20200348989A1 (en) | Methods and apparatus for multiplexing data flows via a single data structure
WO2021041622A1 (en) | Methods, systems, and devices for classifying layer 4-level data from data queues
US12175285B1 (en) | Processing unit selection mechanism
CN115695578A (en) | A data center network TCP and RDMA hybrid flow scheduling method, system and device
US10616116B1 (en) | Network traffic load balancing using rotating hash
EP3275139B1 (en) | Technologies for network packet pacing during segmentation operations
Iqbal et al. | Instant queue occupancy used for automatic traffic scheduling in data center networks
US20200213907A1 (en) | Method and system for managing the download of data
US11228532B2 (en) | Method of executing QoS policy and network device
WO2024222742A1 (en) | Data processing method and apparatus, and network device
CN112737970A (en) | Data transmission method and related equipment
WO2019168153A1 (en) | Control device, communication control method, and program

Legal Events

Date | Code | Title | Description
AS | Assignment

Owner name:F5 NETWORKS, INC., WASHINGTON

Free format text:ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MICHELS, TIMOTHY S.;HAWTHORNE, JONATHAN M.;MIMMS, ALAN B.;AND OTHERS;SIGNING DATES FROM 20150107 TO 20150405;REEL/FRAME:035472/0678

STCF | Information on status: patent grant

Free format text:PATENTED CASE

FEPP | Fee payment procedure

Free format text:PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP | Maintenance fee payment

Free format text:PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment:4

MAFP | Maintenance fee payment

Free format text:PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment:8

