BACKGROUND

The invention relates to storing a frame header, for example in connection with a network controller.
Referring to FIG. 1, a server 12 may communicate with a client 10 by transmitting packets 8 of information over a network 18 pursuant to a network protocol. As an example, the network protocol may be a Transmission Control Protocol/Internet Protocol (TCP/IP), and as a result, the client 10 and server 12 may implement protocol stacks, such as TCP/IP stacks 17 and 19, respectively. For the client 10 (as an example), the TCP/IP stack 17 conceptually divides the client's software and hardware protocol functions into five hierarchical layers 16 (listed in hierarchical order): an application layer 16a (the highest layer), a transport layer 16b, a network layer 16c, a data link layer 16d and a physical layer 16e (the lowest layer).
More particularly, the physical layer 16e typically includes hardware (a network controller, for example) that establishes physical communication with the network 18 by generating and receiving signals (on a network wire 9) that indicate bits of the packets 8. The physical layer 16e recognizes bits and does not recognize packets, as the data link layer 16d performs this function. In this manner, the data link layer 16d typically is both a software and hardware layer that may, for transmission purposes, cause the client 10 to package the data to be transmitted into the packets 8. For purposes of receiving packets 8, the data link layer 16d may, as another example, cause the client 10 to determine the integrity of the incoming packets 8 by determining if the incoming packets 8 generally conform to predefined formats and if the data of the packets comply with checksums (or cyclic redundancy check (CRC) values) of the packets, for example.
The network layer 16c typically is a software layer that is responsible for routing the packets 8 over the network 18. In this manner, the network layer 16c typically causes the client 10 to assign and decode Internet Protocol (IP) addresses that identify entities that are coupled to the network 18, such as the client 10 and the server 12. The transport layer 16b typically is a software layer that is responsible for such things as reliable data transfer between two end points and may use sequencing, error control and general flow control of the packets 8 to achieve reliable data transfer. The transport layer 16b may cause the client 10 to implement the specific network protocol, such as the TCP/IP protocol or a User Datagram Protocol (UDP) or a Real-time Transport Protocol (RTP), which exists on top of UDP, as examples. The application layer 16a typically includes network applications that, upon execution, cause the client 10 to generate and receive the data of the packets 8.
Referring to FIG. 2, a typical packet 8 may include an IP header 20 that indicates such information as the source and destination IP addresses for the packet 8. The packet 8 may also include a security header 23 that indicates a security protocol (e.g., IPSec) and attributes of the packet 8, and a protocol header 22 (a TCP or a UDP protocol header, as examples) that is specific to the transport protocol being used. As an example, a TCP protocol header might indicate a TCP destination port and a TCP source port that uniquely identify the applications that cause the client 10 and server 12 to transmit and receive the packets 8. The packet 8 may also include a data portion 24, the contents of which are furnished by the source application. The packet 8 may include additional information, such as a trailer 26, for example, that is used in connection with encryption and/or authentication of the data portion 24.
Referring to FIG. 3, as an example, a TCP protocol header 22a may include a field 30 that indicates the TCP source port address and a field 32 that indicates the TCP destination port address. Another field 34 of the TCP protocol header 22a may indicate a sequence number that is used to concatenate received packets of an associated flow. In this manner, packets 8 that have the same IP addresses and transport layer port addresses (and security attributes) are typically part of the same flow, and the sequence number indicates the order of a particular packet 8 in that flow. Thus, as an example, a packet 8 with a sequence number of "244" typically is transmitted before a packet 8 with a sequence number of "245."
The TCP protocol header 22a may include a field 38 that indicates a length of the header 22a, a field 44 that indicates a checksum for the bytes in the header 22a and a field 40 that indicates control and status flags.
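As a minimal sketch of how the fields described above sit in the fixed 20-byte portion of a TCP header, the following illustration unpacks them in software. This is only an illustrative model of the header layout, not the controller's hardware parser; the sample values are hypothetical.

```python
import struct

def parse_tcp_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header into the fields named in FIG. 3."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", raw[:20])
    return {
        "src_port": src_port,                     # field 30: TCP source port
        "dst_port": dst_port,                     # field 32: TCP destination port
        "seq": seq,                               # field 34: sequence number
        "header_len": (offset_flags >> 12) * 4,   # field 38: header length, in bytes
        "flags": offset_flags & 0x3F,             # field 40: control/status flags
        "checksum": checksum,                     # field 44: header checksum
    }

# Hypothetical header: ports 80 -> 12345, sequence 244, 20-byte header, ACK flag set
hdr = struct.pack("!HHIIHHHH", 80, 12345, 244, 0, (5 << 12) | 0x10, 8192, 0, 0)
fields = parse_tcp_header(hdr)
```

A packet with `seq` of 244 would, as the example above notes, normally precede one with `seq` of 245 in the same flow.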
In order to transmit data from one application to another over the network wire, the data is segmented into frames. The maximum number of bytes that can be packed into one frame is called the maximal transmit unit (MTU). Thus, the operating system may pass data down to hardware, such as a network controller, in units that correspond to the MTU.
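The segmentation step can be sketched as follows. This is a simplified software model under the assumption that the per-frame payload is the MTU minus the header overhead; the figures used (a 1500-byte MTU and 40 bytes of IP/TCP headers) are illustrative, not taken from the document.

```python
def segment(data: bytes, mtu_payload: int) -> list:
    """Split a large data buffer into payload chunks that each fit in one frame.
    mtu_payload is the MTU minus the frame/IP/TCP header overhead."""
    return [data[i:i + mtu_payload] for i in range(0, len(data), mtu_payload)]

# Hypothetical example: 3000 bytes of data, 1500-byte MTU, 40 bytes of headers
chunks = segment(b"x" * 3000, 1460)
# -> two full 1460-byte frames plus one 80-byte final frame
```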
There is overhead associated with segmenting the data into MTU-sized frames, creating the frame headers at all layers, and transmitting multiple messages down the stack to a miniport driver or to other drivers for other operating systems or hardware. A driver, containing device-specific information, communicates with non-device-specific port drivers that in turn communicate with the protocol stack on behalf of the system. When the operating system wishes to offload some of that overhead, it may pass data to the miniport driver or hardware in data units larger than the MTU. This type of transfer is generally called a large send. The miniport driver or hardware can then segment the data and create the framing information.
Generally a large send requires that header information be recreated for successive frames. However, this recreation introduces delay and overhead and also requires the header to be read across the system bus with every segment prior to its modification. This may increase the overall delay to complete the data exchange between the client and the server and consume bus resources that are especially important in server and multiple-controller systems.
Thus, there is a continuing need for implementing a large send in a way which reduces the consumption of bus resources.
SUMMARY

In one embodiment of the invention, a method for use with a computer system includes receiving output data from the computer system, extracting the header of a packet from said data, storing the header in a header memory, retrieving the header from the header memory and parsing the header to add additional information to the header.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of a network of computers according to the prior art.
FIG. 2 is a schematic diagram of a packet transmitted over the network shown in FIG. 1.
FIG. 3 is an illustration of an exemplary protocol header of the packet of FIG. 2.
FIG. 4 is a schematic diagram of a computer system according to an embodiment of the invention.
FIG. 5 is a schematic diagram of a network controller of FIG. 4.
FIG. 5a is a flow diagram illustrating a large send.
FIG. 5b shows a method of generating a partial checksum.
FIG. 6 is an illustration of a flow tuple stored in memory of the network controller of FIG. 5.
FIG. 7 is a schematic diagram illustrating the transfer of packet data according to an embodiment of the invention.
FIG. 8 is a schematic diagram illustrating the transfer of packet data between layers of the network stack of the prior art.
FIGS. 9 and 10 are flow diagrams illustrating parsing of packet data by a receive parser of the network controller of FIG. 5.
FIG. 11 is a flow diagram illustrating operation of a zero copy parser of the network controller of FIG. 5.
FIG. 12 is another flow diagram illustrating the operation of the zero copy parser.
DETAILED DESCRIPTION

Referring to FIG. 4, an embodiment 50 of a computer system in accordance with the invention includes a network controller 52 (a local area network (LAN) controller, for example) that communicates packets of information with other networked computer systems via at least one network wire 53. Unlike conventional network controllers, the network controller 52 may be adapted, in one embodiment of the invention, to perform functions that are typically implemented by a processor (a central processing unit (CPU), for example) that executes one or more software layers (a network layer and a transport layer, as examples) of a protocol stack (a TCP/IP stack, for example). As an example, these functions may include parsing headers of incoming packets to obtain characteristics (of the packet) that typically are extracted by execution of the software layers. The characteristics, in turn, may be used to identify a flow that is associated with the packet, as further described below.
Referring to FIG. 5, the network controller 52 may include hardware, such as a receive path 92, to perform traditional software functions to process packets that are received from the network. For example, the receive path 92 may include a receive parser 98 to parse a header of each packet to extract characteristics of the packet, such as characteristics that associate a particular flow with the packet. Because the receive path 92 may be receiving incoming packets from many different flows, the receive path 92 may include a memory 100 that stores entries, or flow tuples 140, that uniquely identify a particular flow. In this manner, the receive parser 98 may interact with the memory 100 to compare parsed information from the incoming packet with the stored flow tuples 140 to determine if a match, or "flow tuple hit," occurs. If a flow tuple hit occurs, the receive parser 98 may indicate this event to other circuitry (of the controller 52) that processes the packet based on the detected flow, as further described below.
Referring also to FIG. 6, each flow tuple 140 may include fields that identify characteristics of a particular flow. As an example, in some embodiments, at least one of the flow tuples 140 may be associated with a Transmission Control Protocol (TCP), a User Datagram Protocol (UDP) or a Realtime Transport Protocol (RTP), as just a few examples. The flow tuple 140 may include a field 142 that indicates an Internet Protocol (IP) destination address (i.e., the address of the computer system to receive the packet); a field 144 that indicates an IP source address (i.e., the address of a computer system to transmit the packet); a field 146 that indicates a TCP destination port (i.e., the address of the application that is to receive the packet); a field 148 that indicates a TCP source port (i.e., the address of the application that caused generation of the packet); and a field 150 that indicates security/authentication attributes of the packet. Other flow tuples 140 may be associated with other network protocols, such as a User Datagram Protocol (UDP), for example. The above references to specific network protocols are intended to be examples only and are not intended to limit the scope of the invention. Additional flow tuples 140 may be stored in the memory 100 and existing flow tuples 140 may be removed from the memory 100 via a driver program 57 (FIG. 4).
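The flow tuple lookup can be modeled as a keyed table whose key combines the fields of FIG. 6. This is a software sketch of the matching logic only, not the memory 100 hardware; the addresses, ports and flow identifiers below are hypothetical.

```python
from typing import NamedTuple, Optional

class FlowTuple(NamedTuple):
    ip_dst: str      # field 142: IP destination address
    ip_src: str      # field 144: IP source address
    port_dst: int    # field 146: TCP destination port
    port_src: int    # field 148: TCP source port
    security: int    # field 150: security/authentication attributes

# Table of known flows, analogous to the entries stored in memory 100
flows = {
    FlowTuple("10.0.0.2", "10.0.0.1", 80, 12345, 0): "flow-A",
}

def lookup(parsed: FlowTuple) -> Optional[str]:
    """Return the flow id on a 'flow tuple hit'; None means the packet
    is passed to the protocol stack for software processing."""
    return flows.get(parsed)
```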
If the receive parser 98 recognizes (via the flow tuples 140) the flow that is associated with the incoming packet, then the receive path 92 may further process the packet. If the receive parser 98 does not recognize the flow, then the receive path 92 may pass the incoming packet via a Peripheral Component Interconnect (PCI) interface 130 to software layers of a TCP/IP stack of the computer system 50 for processing. The PCI Specification is available from the PCI Special Interest Group, Portland, Oreg. 97214. Other bus interfaces may be used in place of the PCI interface 130. In this manner, in some embodiments, the computer system 50 may execute an operating system that provides at least a portion of some layers (network and transport layers, for example) of the protocol stack.
In some embodiments, even if the receive parser 98 recognizes the flow, additional information may be needed before the receive path 92 further processes the incoming packet. For example, an authentication/encryption engine 102 may authenticate and/or decrypt the data portion of the incoming packet based on the security attributes that are indicated by the field 150 (see FIG. 6). In this manner, if the field 150 indicates that the data portion of the incoming packet is encrypted, then the engine 102 may need a key to decrypt the data portion. Similarly, if the data portion is authenticated, a key may be used to check its authenticity.
For purposes of providing the key to the engine 102, the network controller 52 may include a key memory 104 that stores different keys that may be indexed by the different associated flows, for example. Additional keys may be stored in the key memory 104 by execution of the driver program 57, and existing keys may be removed from the key memory 104 by execution of the driver program 57. In this manner, if the engine 102 determines that the particular decryption key is not stored in the key memory 104, then the engine 102 may submit a request (via the PCI interface 130) to the driver program 57 (see FIG. 4) for the key. In this manner, the driver program 57, when executed, may furnish the key in response to the request and interact with the PCI interface 130 to store the key in the key memory 104. In some embodiments, if the key is unavailable (i.e., the key is not available from the driver program 57 or is not stored in the key memory 104), then the engine 102 does not decrypt the data portion of the packet. Instead, the PCI interface 130 stores the encrypted data in a predetermined location of a system memory 56 (see FIG. 4) so that software of one or more layers of the protocol stack may be executed to decrypt the data portion of the incoming packet.
After the parsing, the processing of the packet by the network controller 52 may include bypassing the execution of one or more software layers of the protocol stack. For example, the receive path 92 may include a zero copy parser 110 that, via the PCI interface 130, copies data associated with the packet into a memory buffer 304 (see FIG. 7) that is associated with the application. In this manner, several applications may have associated buffers for receiving the packet data. The operating system creates and maintains the buffers 304 in a virtual address space, and the operating system reserves a multiple number of physical four kilobyte (KB) pages for each buffer 304. The operating system also associates each buffer with a particular application. This arrangement is to be contrasted to conventional arrangements that may use intermediate buffers to transfer packet data from the network controller to applications, as described below.
Referring to FIG. 8, for example, a typical network controller 300 does not directly transfer the packet data into the buffers 304 because the typical network controller 300 does not parse the incoming packets to obtain information that identifies the destination application. Instead, the typical network controller 300 (under the control of the data link layer, for example) typically transfers the data portion of the packet into packet buffers 302 that are associated with an intermediate layer, e.g., the data link layer, the network layer or the transport layer. In contrast to the buffers 304, each buffer 302 may have a size range of 64 to 1518 bytes. The execution of the network layer subsequently associates the data with the appropriate applications and causes the data to be transferred from the buffers 302 to the buffers 304.
Referring back to FIG. 7, in contrast to the conventional arrangement described above, the network controller 52 may use the zero copy parser 110 to bypass the buffers 302 and copy the data portion of the packet directly into the appropriate buffer 304. To accomplish this, the zero copy parser 110 (see FIG. 5) may receive an indication of the TCP destination port (as an example) from the receive parser 98 that, as described above, extracts this information from the header. The TCP (or other protocol) destination port uniquely identifies the application that is to receive the data and thus identifies the appropriate buffer 304 for the packet data. Besides transferring the data portions to the buffers 304, the zero copy parser 110 may handle control issues between the network controller and the network stack and may handle cases where an incoming packet is missing, as described below.
Referring to FIG. 5, besides the components described above, the receive path 92 may also include one or more first-in-first-out (FIFO) memories 106 to synchronize the flow of incoming packets through the receive path 92. A checksum engine 108 (of the receive path 92) may be coupled to one of the FIFO memories 106 for purposes of verifying checksums that are embedded in the packets. The receive path 92 may be interfaced to a PCI bus 72 via the PCI interface 130. The PCI interface 130 may include an emulated direct memory access (DMA) engine 131. In this manner, for purposes of transferring the data portions of the packets directly into the buffers 304, the zero copy parser 110 may use one of a predetermined number (sixteen, for example) of emulated DMA channels to transfer the data into the appropriate buffer 304. In some embodiments, it is possible for each of the channels to be associated with a particular buffer 304. However, in some embodiments, when the protocol stack (instead of the zero copy parser 110) is used to transfer the data portions of the packets, the DMA engine 131 may use a lower number (one, for example) of channels for these transfers.
In some embodiments, the receive path 92 may include additional circuitry, such as a serial-to-parallel conversion circuit 96 that may receive a serial stream of bits from a network interface 90 when a packet is received from the network wire 53. In this manner, the conversion circuit 96 packages the bits into bytes and provides these bytes to the receive parser 98. The network interface 90 may be coupled to generate and receive signals to/from the wire 53.
In addition to the receive path 92, the network controller 52 may include other hardware circuitry, such as a transmit path 94, to transmit outgoing packets to the network. In the transmit path 94, the network controller 52 may include a transmit parser 114 that is coupled to the PCI interface 130 to receive outgoing packet data from the computer system 50 and form the headers on the packets. To accomplish this, in some embodiments, the transmit parser 114 stores the headers of predetermined flows in a header memory 116. Because the headers of a particular flow may indicate a significant amount of the same information (port and IP addresses, for example), the transmit parser 114 may slightly modify the stored header for each outgoing packet and assemble the modified header onto the outgoing packet. As an example, for a particular flow, the transmit parser 114 may retrieve the header from the header memory 116 and parse the header to add such information as sequence and acknowledgment numbers (as examples) to the header of the outgoing packet. A checksum engine 120 may compute checksums for the IP and network headers of the outgoing packet and incorporate the checksums into the packet.
The transmit path 94 may also include an authentication and encryption engine 126 that may encrypt and/or authenticate the data of the outgoing packets. In this manner, all packets of a particular flow may be encrypted (and/or authenticated) via a key that is associated with the flow, and the keys for the different flows may be stored in a key memory 124. The key memory 124 may be accessed (by execution of the driver program 57, for example) via the PCI interface 130. The transmit path 94 may also include a parallel-to-serial conversion circuit 128 to serialize the data of the outgoing packets. The circuit 128 may be coupled to the network interface 90. The transmit path 94 may also include one or more FIFO memories 122 to synchronize the flow of the packets through the transmit path 94.
Referring to FIG. 5a, in connection with a large send, where the data received by the controller 52 exceeds the maximal transmit unit (MTU) (diamond 502), the beginning and end of the first frame header may be identified (block 504). The first frame header may be stored in the header memory 116 (block 506). In some embodiments this may save the overhead of re-reading the header over the PCI bus in special accesses for every frame. Each ensuing header (diamond 508) may then be modified for only the information that is different, such as the IP identification field, TCP/UDP checksum and sequence number (block 510). Keeping the header in the header memory 116 rather than in system memory saves overhead.
A large send is a flow that helps the system with building TCP/IP headers. The system sends the controller 52, through the driver, a large packet with a prototype header. The controller breaks this large packet into small MTU-sized packets and updates each packet header based on the prototype header sent with the large packet (IP identification, sequence number, checksum calculation, flags and so on).
The controller 52 loads the prototype header into a header file in memory 116 while the parser 114 helps in parsing the header and informs the state machine 115 about offsets in the first header. The first header, i.e., the prototype header for the first or prototype frame, is different than the subsequent headers. A method of aggregating the initial header checksum with the data checksum to get an overall checksum (which is not a full checksum), as illustrated in FIG. 5b, may save effort. The driver may compute the sixteen-bit sum of the prototype header, including all fixed fields and a pseudoheader (block 512).
The pseudoheader may include the IP addresses (source and destination) and the protocol fields of the IP frame, and the TCP total length (the length of the header, options and data). The TCP total length may include the TCP trailer. In this way, the processing may be done in a layered fashion, using the pseudoheader, between the IP and TCP processing, and allows information not available in the original packet to be handled in software.
The hardware computes the checksum of the data and, when all fields have been added (block 514), performs a one's complement (block 516). This method may reduce hardware complexity while avoiding unnecessary hardware and software operations.
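The driver/hardware split described in blocks 512-516 can be sketched in software: the driver accumulates a partial 16-bit one's-complement sum over the pseudoheader and fixed header fields, the hardware folds in the data sum and takes the one's complement. This is a minimal model of the standard Internet checksum arithmetic, not the controller's checksum engine 120; the function names are hypothetical.

```python
def ones_sum16(data: bytes, start: int = 0) -> int:
    """Accumulate the 16-bit one's-complement sum used by IP/TCP checksums."""
    if len(data) % 2:
        data += b"\x00"                    # pad odd-length input
    s = start
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
        s = (s & 0xFFFF) + (s >> 16)       # fold carries back into 16 bits
    return s

def tcp_checksum(pseudoheader: bytes, header: bytes, payload: bytes) -> int:
    # Driver side (block 512): partial sum over pseudoheader + fixed header fields
    partial = ones_sum16(pseudoheader)
    partial = ones_sum16(header, partial)
    # Hardware side (blocks 514/516): add the data sum, then one's complement
    total = ones_sum16(payload, partial)
    return (~total) & 0xFFFF
```

Because one's-complement addition is associative, the per-frame data sum can be combined with the stored partial header sum without recomputing the whole checksum.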
Again, for the prototype frame only, the micro-machine (state machine) 115 asserts a busy status bit and sets the parser 114 in a parse-only mode. In this mode the parser 114 analyzes the packet header and does not forward it to the FIFO memory 122. The micro-machine examines the parser results and fills in missing data (e.g., including SNAP length, IP total length, TCP sequence number, clear FIN and PSH flags, and/or UDP length and/or RTP fields) to command the checksum engine and authentication/encryption engine for later operations. It then places the parser in a normal mode and enables normal transmit operation of the transmit path 94. The parser loads data from FIFO 122 registers to the state machine to calculate and prepare the header file for the large send transmission. The FIFO registers that are loaded include the IP offset in bytes, the TCP offset in bytes, and the TCP/UDP#.
Next, the MTU-sized first packet is produced by the micro-machine commanding the DMA on the number of additional bytes to fetch from the bus 72. All subsequent frames of the large send block are treated differently. First, the prototype header is fetched from the header memory 116, then the micro-machine adjusts the content of all header fields subject to change (which may include the IP identification increment and the TCP sequence number update).
The last frame is different. Its size may be smaller than the MTU and some flags may carry different values. For this frame the micro-machine may update the SNAP length, IP total length, IP identification increment and TCP sequence number update fields, while FIN and PSH may be set if the original prototype header had set them. The transmit path 94 operates in normal mode thereafter. All of the large send packet segments are treated as regular packets, subject to full or partial parsing.
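The per-frame bookkeeping described above (increment the IP identification, advance the TCP sequence number by the bytes sent, shorten the last frame) can be sketched as follows. This is an illustrative model of the field updates only; the prototype header is reduced to a dictionary, and the sample values are hypothetical.

```python
def make_frame_headers(proto: dict, payload_len: int, mtu_payload: int):
    """Yield the per-frame fields the micro-machine adjusts from the stored
    prototype header: IP identification, TCP sequence number, frame length."""
    ip_id, seq = proto["ip_id"], proto["seq"]
    offset = 0
    while offset < payload_len:
        chunk = min(mtu_payload, payload_len - offset)  # last frame may be short
        yield {"ip_id": ip_id, "seq": seq, "len": chunk}
        ip_id = (ip_id + 1) & 0xFFFF            # IP identification increment
        seq = (seq + chunk) & 0xFFFFFFFF        # sequence number advances by bytes sent
        offset += chunk

# Hypothetical large send: 3000 bytes, 1460-byte payload per frame
frames = list(make_frame_headers({"ip_id": 100, "seq": 244}, 3000, 1460))
```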
Although normally a large send is not used when the flow includes security frames, a large send may be implemented even with Encapsulating Security Payload (ESP) datagrams. The ESP specification is set forth in R. Atkinson, "IP Encapsulating Security Payload (ESP)," Request for Comments (proposed standard) RFC 1827, Internet Engineering Task Force, August 1995. A special memory 550 may be provided for the trailer used with ESP datagrams. By storing the ESP trailer in the memory 550, the controller hardware can then deal with security frames in large sends as described previously.
In some embodiments, the receive parser 98 may include one or more state machines, counter(s) and timer(s), as examples, to perform the following functions. In particular, referring to FIG. 9, the receive parser 98 may continually check (block 200) for another unparsed incoming packet. When another packet is to be processed, the receive parser 98 may check the integrity of the packet, as indicated in block 201. For example, the receive parser 98 may determine if the incoming packet includes an IP header and determine if a checksum of the IP header matches a checksum that is indicated by the IP header.
If the receive parser 98 determines (diamond 202) that the incoming packet passes this test, then the receive parser 98 may parse (block 206) the header to extract the IP components of a header of the packet to obtain the information needed to determine if a flow tuple hit occurs. For example, the receive parser 98 may extract the network protocol being used, the IP destination and source addresses, and the port destination and source addresses. Next, the receive parser 98 may determine if the network protocol is recognized, as indicated in diamond 208. (In the case of an IPSec frame, the receive parser 98 may also check whether the frame uses the Authentication Header (AH) or ESP transform and compare it to the expected format stored in the tuple.) If not, then the receive parser 98 may pass (block 204) further control of the processing to the network stack.
The receive parser 98 may subsequently parse (block 212) the protocol header. As an example, if the packet is associated with the TCP/IP protocol, then the receive parser 98 may parse the TCP header of the packet, an action that may include extracting the TCP ports and security attributes of the packet, as examples. The receive parser 98 uses the parsed information from the protocol header to determine (diamond 216) if a flow tuple hit has occurred. If not, the receive parser 98 passes control of further processing of the packet to the stack, as depicted in block 204. Otherwise, the receive parser 98 determines (diamond 218) if the data portion of the packet needs to be decrypted. If so, the receive parser 98 determines if the associated key is available in the key memory 104, as depicted in diamond 220. If the key is not available, then the receive parser 98 may return to block 204 and thus pass control of further processing of the packet to the stack.
Referring to FIG. 10, if the key is available, the receive parser 98 may update a count of the number of received packets for the associated flow, as depicted in block 224. Next, the receive parser 98 may determine (diamond 226) whether it is time to transmit an acknowledgment packet back to the sender of the packet based on the number of received packets in the flow. In this manner, if the count exceeds a predetermined number (i.e., if the amount of unacknowledged transmitted data exceeds the window), then the receive parser 98 may either (depending on the particular embodiment) notify (block 228) the driver program 57 (see FIG. 4) or notify (block 230) the transmit parser 114 of the need to transmit an acknowledgment packet. Thus, in the latter case, the transmit parser 114 may be adapted to generate an acknowledgment packet, as no data for the data portion may be needed from the application layer. The receive parser 98 transitions from either block 228 or 230 to block 200 to check for another received packet. After an acknowledgment packet is transmitted, the receive parser 98 may clear the count of received packets for that particular flow.
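The count-and-acknowledge behavior of blocks 224-230 can be modeled as below. This is a simplified software sketch; the class name and the fixed threshold are assumptions for illustration, since the document leaves the predetermined number unspecified.

```python
class FlowAckCounter:
    """Count received packets per flow and signal when an acknowledgment
    should be generated; the count is cleared after each acknowledgment."""
    def __init__(self, threshold: int):
        self.threshold = threshold      # hypothetical predetermined number
        self.counts = {}

    def packet_received(self, flow_id) -> bool:
        n = self.counts.get(flow_id, 0) + 1
        if n >= self.threshold:
            self.counts[flow_id] = 0    # cleared after the acknowledgment (block 228/230)
            return True                 # time to transmit an ACK
        self.counts[flow_id] = n        # keep counting (back to block 200)
        return False
```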
Referring to FIG. 11, in some embodiments, the zero copy parser 110 may include one or more state machines, timer(s) and counter(s) to perform the following functions to transfer the packet data directly to the buffers 304. First, the zero copy parser 110 may determine if control of the transfer needs to be synchronized between the zero copy parser 110 and the stack. In this context, the term "synchronization" generally refers to communication between the stack and the zero copy parser 110 for purposes of determining a transition point at which one of the entities (the stack or the zero copy parser 110) takes control from the other and begins transferring data into the buffers 304. Without synchronization, missing packets may not be detected. Therefore, when control passes from the stack to the parser 110 (and vice versa), synchronization may need to occur, as depicted in block 254.
Thus, one scenario where synchronization may be needed is when the zero copy parser 110 initially takes over the function of directly transferring the data portions into the buffers 304. As shown in FIG. 12, in this manner, if the zero copy parser 110 determines (diamond 250) that the current packet is the first packet being handled by the zero copy parser 110, then the parser 110 synchronizes the packet storage, as depicted by block 254. For purposes of determining when the transition occurs, the zero copy parser 110 may continually monitor the status of a bit that may be selectively set by the driver program 57, for example. Another scenario where synchronization is needed is when an error occurs while the zero copy parser 110 is copying the packet data into the buffers 304. For example, as a result of the error, the stack may temporarily resume control of the transfer before the zero copy parser 110 regains control. Thus, if the zero copy parser 110 determines (diamond 252) that an error has occurred, the zero copy parser 110 may transition to block 254.
Synchronization may occur in numerous ways. For example, the zero copy parser 110 may embed a predetermined code into a particular packet to indicate to the stack that the zero copy parser 110 handles the transfer of subsequent packets, and the stack may do the same.
Occasionally, the incoming packets of a particular flow may be received out of sequence. This may create a problem because the zero copy parser 110 may store the data from sequential packets one after the other in a particular buffer 304. For example, packet number "267" may be received before packet number "266," an event that may cause problems if the data for packet number "267" is stored immediately after the data for packet number "265." To prevent this scenario from occurring, in some embodiments, the zero copy parser 110 may reserve a region 308 (see FIG. 7) in the particular buffer 304 for the missing packet data, as indicated in block 260 (FIG. 11). For purposes of determining the size of the missing packet (and thus, the amount of memory space to reserve), the zero copy parser 110 may use the sequence numbers that are indicated by the adjacent packets in the sequence. In this manner, the sequence number indicates the byte number of the next successive packet. Thus, for the example described above, the sequence numbers indicated by the packet numbers "265" and "267" may be used to determine the boundaries of the region 308.
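Because TCP sequence numbers count bytes, the boundaries of the reserved region can be derived from the neighboring packets alone. The sketch below illustrates that arithmetic; the sequence values are hypothetical, and the function name is not taken from the document.

```python
def gap_region(prev_seq: int, prev_len: int, next_seq: int):
    """Compute the byte range to reserve in the buffer for a missing packet,
    using the sequence numbers of its in-order neighbors."""
    gap_start = prev_seq + prev_len    # first byte after the earlier packet ("265")
    gap_len = next_seq - gap_start     # bytes belonging to the missing packet ("266")
    return gap_start, gap_len

# Hypothetical flow: packet "265" starts at seq 1000 and carries 500 bytes;
# packet "267" arrives starting at seq 2000 -> reserve 500 bytes for "266"
start, length = gap_region(1000, 500, 2000)
```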
The zero copy parser 110 subsequently interacts with the PCI interface 130 to set up the appropriate DMA channel to perform a zero copy (step 262) of the packet data into the appropriate buffer 304. The zero copy parser 110 determines the appropriate buffer 304 via the destination port that is provided by the receive parser 98.
Referring back to FIG. 4, besides the network controller 52, the computer system 50 may include a processor 54 that is coupled to a host bus 58. In this context, the term "processor" may generally refer to one or more central processing units (CPUs), microcontrollers or microprocessors (an X86 microprocessor, a Pentium microprocessor or an Advanced RISC Machine (ARM) processor, as examples), as just a few examples. Furthermore, the phrase "computer system" may refer to any type of processor-based system that may include a desktop computer, a laptop computer, an appliance or a set-top box, as just a few examples. Thus, the invention is not intended to be limited to the illustrated computer system 50; rather, the computer system 50 is an example of one of many embodiments of the invention.
The host bus 58 may be coupled by a bridge, or memory hub 60, to an Advanced Graphics Port (AGP) bus 62. The AGP is described in detail in the Accelerated Graphics Port Interface Specification, Revision 1.0, published on Jul. 31, 1996, by Intel Corporation of Santa Clara, Calif. The AGP bus 62 may be coupled to, for example, a video controller 64 that controls a display 65. The memory hub 60 may also couple the AGP bus 62 and the host bus 58 to a memory bus 61. The memory bus 61, in turn, may be coupled to a system memory 56 that may, as examples, store the buffers 304 and a copy of the driver program 57.
The memory hub 60 may also be coupled (via a hub link 66) to another bridge, or input/output (I/O) hub 68, that is coupled to an I/O expansion bus 70 and the PCI bus 72. The I/O hub 68 may also be coupled to, as examples, a CD-ROM drive 82 and a hard disk drive 84. The I/O expansion bus 70 may be coupled to an I/O controller 74 that controls operation of a floppy disk drive 76 and receives input data from a keyboard 78 and a mouse 80, as examples.
Other embodiments are within the scope of the following claims. For example, a peripheral device other than a network controller may implement the above-described techniques. Other network protocols and other protocol stacks may be used.
While the invention has been disclosed with respect to a limited number of embodiments, those skilled in the art, having the benefit of this disclosure, will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the invention.