CROSS REFERENCE TO RELATED APPLICATIONS This application is a continuation of U.S. patent application Ser. No. 09/750,071, filed Dec. 29, 2000.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH This work was supported by the United States Government under Technology Investment Agreement TIA F30602-98-2-0194. The Government has certain rights in this invention.
FIELD OF THE INVENTION The present invention relates to data communication networks and, in particular, to burst switching in a high capacity network.
BACKGROUND OF THE INVENTION In burst switching, a source node sends a burst transfer request to a core node to indicate that a burst of data is coming, the size of the burst and the destination of the burst. Responsive to this burst transfer request, the core node configures a space switch to connect a link on which the burst will be received to a link to the requested burst destination. In a first scheme, the burst follows the burst transfer request after a predetermined time period (a scheduling time) and it is expected that, when the burst arrives at the core node, the space switch will have been properly configured by the core node. In a second scheme, the source node waits for a message from the core node, where the message acknowledges that the space switch in the core node is properly configured, before sending the burst.
Often core nodes are used that do not have buffers to buffer incoming data. Core nodes without buffers are desirable because: it may not be possible to provide buffers without an expensive optical-electrical conversion at input and electrical-optical conversion at output of an optical space switch; and the core node may be distant from the source and sink (edge) nodes, therefore requiring remote buffer management in an edge-controlled network.
In the first scheme, a burst may arrive at a core node before the space switch is properly configured and, if the core node does not include a buffer, the burst may be lost. Furthermore, the source node does not learn that the burst has been lost at the core node until it fails to receive an acknowledgement of receipt of the burst from the burst destination. Having not received acknowledgement of receipt of the burst, the source node may then retransmit the burst. In the second scheme, the time delay involved in sending a burst transfer request and receiving an acceptance before sending a burst may be unacceptably high, leading to low network utilization. Despite these shortcomings, burst switching is gaining popularity as a technique to transfer data in high-speed networks since it simplifies many of the control functions and does not require capacity to be reserved when it may not always be in use. Furthermore, burst switching reduces the need to characterize the traffic. Clearly, a burst switching technique that allows for greater network utilization is desirable.
SUMMARY OF THE INVENTION At a controller of a space switch, a novel burst scheduling technique allows efficient utilization of network resources. Burst transfer requests are received at the space switch controller and pipelined such that the controller may determine a schedule for allowing the bursts, represented by the burst transfer requests, access to the space switch. According to the schedule, scheduling information is distributed to the sources of the burst transfer requests and to a slave controller of the space switch.
Advantageously, the novel burst scheduling technique allows for utilization of network resources that is more efficient than typical burst switching techniques, especially when the novel burst scheduling technique is used in combination with known time locking methods. The novel burst scheduling technique enables the application of burst switching to wide coverage networks. Instead of handling burst requests one-by-one, burst requests are pipelined and the handling of the bursts is scheduled over a long future period.
In accordance with an aspect of the present invention there is provided a method of controlling a space switch to establish time-varying connections. The method includes receiving a stream of burst transfer requests from a source node, each of the burst transfer requests including parameters specifying a requested connection and a duration for the requested connection; generating scheduling information for each of the burst transfer requests based on the parameters; transmitting the scheduling information to the source node; and transmitting instructions to a slave controller for the space switch, where the instructions are based on the scheduling information and instruct the space switch to establish the requested connection. In another aspect of the invention a space switch master controller is provided for performing this method. In a further aspect of the present invention, there is provided a software medium that permits a general purpose computer to carry out this method.
In accordance with another aspect of the present invention there is provided a method of generating scheduling information. The method includes determining a next-available input port among a plurality of input ports and a time index at which the next-available input port will become available and, for each burst transfer request of a plurality of burst transfer requests received in relation to the next-available input port, where each burst transfer request includes an identity of a burst and a destination for the burst: determining, from the destination for the burst, a corresponding output port among a plurality of output ports; and determining a time gap, where the time gap is a difference between the time index at which the next-available input port will become available and a time index at which the corresponding output port will become available. The method further includes selecting one of the plurality of burst transfer requests as a selected burst transfer request, where the selected burst transfer request has a minimum time gap among the plurality of burst transfer requests; selecting a scheduled time index, where the scheduled time index is one of the time index at which the next-available input port is available and the time index at which the corresponding output port is available; and transmitting scheduling information for a burst identified by the selected burst transfer request, the scheduling information based on the scheduled time index. In another aspect of the invention a burst scheduler is provided for performing this method. In a further aspect of the present invention, there is provided a software medium that permits a general purpose computer to carry out this method.
In accordance with a further aspect of the present invention there is provided a core node in a data network. The core node includes a space switch, a plurality of input ports, a plurality of output ports and a slave controller for the space switch for receiving instructions from a master controller of the space switch, the instructions including specifications of temporary connections to establish between the plurality of input ports and the plurality of output ports and indications of timing with which to establish the connections.
In accordance with a still further aspect of the present invention there is provided a data network including a plurality of edge nodes and a plurality of core nodes, each core node of the plurality of core nodes including a space switch and a master controller for the space switch, the master controller for: receiving a stream of burst transfer requests from one of the plurality of edge nodes, each of the burst transfer requests including parameters specifying a requested connection and a duration for the requested connection; generating scheduling information for each of the burst transfer requests based on the parameters; transmitting the scheduling information to the one of the plurality of edge nodes; and transmitting instructions to a slave controller for the space switch, where the instructions are based on the scheduling information.
Other aspects and features of the present invention will become apparent to those of ordinary skill in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.
BRIEF DESCRIPTION OF THE DRAWINGS In the figures which illustrate example embodiments of this invention:
FIG. 1 schematically illustrates a hub and spoke network including a core node that may employ embodiments of the present invention;
FIG. 2 illustrates the core node of FIG. 1;
FIG. 3 illustrates a master controller for use in the core node of FIG. 2;
FIG. 4 illustrates a burst scheduler for use in the space switch controller of FIG. 3;
FIG. 5A illustrates a data structure for use in an embodiment of the present invention;
FIG. 5B illustrates an entry in the data structure of FIG. 5A;
FIG. 6 illustrates a time-space map for use in an embodiment of the present invention;
FIG. 7 illustrates an M-element array for use in an embodiment of the present invention;
FIG. 8 illustrates steps of a burst scheduling method for use in an embodiment of the present invention;
FIG. 9 illustrates steps of a map maintenance method for use in an embodiment of the present invention;
FIG. 10 illustrates an exemplary configuration of groups of ports of a space switch for parallel processing in an embodiment of the present invention;
FIG. 11 illustrates a data structure adapted from the data structure in FIG. 5A for use in a parallel processing embodiment of the present invention;
FIG. 12 illustrates a data network for use with an embodiment of the present invention;
FIG. 13 illustrates an edge node for use in the data network of FIG. 12;
FIG. 14 illustrates an electronic core node for use in the data network of FIG. 12;
FIG. 15 illustrates a data network that is an adaptation of the data network of FIG. 12 wherein a core node and an edge node have been collocated;
FIG. 16 illustrates an edge node for use in the data network of FIG. 15;
FIG. 17 illustrates a master controller including a burst scheduler for use in the data network of FIG. 15;
FIG. 18 illustrates a core node for use in the data network of FIG. 15;
FIG. 19 illustrates a data network that is an adaptation of the data network of FIG. 15 wherein a second core node and a second edge node have been collocated;
FIG. 20 illustrates an edge node for use in the data network of FIG. 19;
FIG. 21 depicts a master time counter cycle and a calendar cycle for a master controller for use in an embodiment of the present invention;
FIG. 22 illustrates scheduling of burst transfers and resultant changes in the state of a calendar in an embodiment of the present invention; and
FIG. 23 illustrates a master controller including a burst scheduler and a circuit scheduler for use in the data network of FIG. 19.
DETAILED DESCRIPTION FIG. 1 illustrates a rudimentary “hub and spoke” data network 100 wherein a number of edge nodes 108A, 108B, 108C, 108D, 108E, 108F, 108G, 108H (referred to individually or collectively as 108) connect to each other via a core node 102. An edge node 108 includes a source node that supports traffic sources and a sink node that supports traffic sinks. Traffic sources and traffic sinks (not shown) are usually paired and each source node is usually integrated with a sink node with which it shares memory and control.
The core node 102 may be considered in greater detail in view of FIG. 2, which illustrates an electronic core node. The core node 102 includes N input ports 202A, 202B, 202C, . . . , 202N (referred to individually or collectively as 202) for receiving data from the edge nodes 108 of FIG. 1. Each of the N input ports 202 is connected to a corresponding buffer 204A, 204B, 204C, . . . , 204N (referred to individually or collectively as 204) that is connected to a corresponding port controller 206A, 206B, 206C, . . . , 206N (referred to individually or collectively as 206). A space switch 212 directs input received from each of the buffers 204 to an appropriate one of M output ports 208A, 208B, 208C, . . . , 208M (referred to individually or collectively as 208) under control of a slave space switch controller 214. Notably, although the core node 102 and the space switch 212 are described as having a number of inputs, N, that is different from the number, M, of outputs, quite often the number of inputs and outputs is equal, i.e., N=M. A master controller 210 is communicatively coupled to the port controllers 206 and the output ports 208 as well as to the slave space switch controller 214. Each of the control functions of the master controller 210 can be implemented in application-specific hardware, which is the preferred implementation when high speed is a requirement. In an alternative implementation, the master controller 210 may be loaded with burst scheduling and time locking software for executing methods exemplary of this invention from a software medium 224, which could be a disk, a tape, a chip or a random access memory containing a file downloaded from a remote source.
As illustrated in detail in FIG. 3, the master controller 210 includes a processor 302. The processor 302 maintains connections to a memory 304, an input interface 306, an output interface 308, a switch interface 312 and a master time counter 314. At the input interface 306, the master controller 210 receives burst transfer requests from the port controllers 206. At the output interface 308, the master controller 210 may communicate with the output ports 208 to perform conventional operational and maintenance functions. The processor 302 is also connected to a burst-scheduling kernel 310. Based on the burst transfer requests received from the processor 302, the burst-scheduling kernel 310 determines appropriate timing for switching at the space switch 212. According to the determined timing received from the burst-scheduling kernel 310, the processor 302 passes scheduling information to the slave space switch controller 214 via the switch interface 312. The processor 302 also controls the timing of transmission of bursts, from the buffers 204 to the space switch 212, by transmitting scheduling information to the port controllers 206 via the input interface 306.
The burst-scheduling kernel 310 may now be described in view of FIG. 4. The burst-scheduling kernel 310 receives burst transfer requests from the processor 302 via a processor interface 402 and a burst parameter receiver 404. The burst parameter receiver 404 may, for instance, be implemented as a time slotted bus. The parameters of these bursts are queued at a burst parameter queue 406 before being accessed by a burst-scheduling unit 408. Included in the burst-scheduling unit 408 may be a time-space map and a space-time map, as well as comparators and selectors for generating scheduling information (co-ordination between these maps). The maps are implemented in partitioned random-access memories. After scheduling information is generated for a burst, the scheduling information is transferred to the processor 302 via a schedule transmitter 410 and the processor interface 402.
In overview, an input port 202A of core node 102 receives a burst from a subtending edge node 108. The burst is stored in the buffer 204A. Parameters indicating the size (e.g., two megabits) and destination (e.g., a particular edge node 108B) of the burst are communicated from the port controller 206A to the master controller 210 as a burst transfer request. The burst-scheduling unit 408 of the master controller 210 executes a burst scheduling algorithm to generate scheduling information and communicates relevant parts of the generated scheduling information to the port controllers 206. The master controller 210 also communicates relevant parts of the generated scheduling information to the slave space switch controller 214. According to the scheduling information received at the port controller 206A, the buffer 204A sends bursts to the space switch 212. At the space switch 212, a connection is established between the buffer 204A and the output port 208B, according to instructions received from the slave space switch controller 214, such that the burst is successfully transferred from the edge node 108 associated with the traffic source to the edge node 108 associated with the traffic sink.
At the master controller 210 (see FIG. 3), the burst transfer request is received by the input interface 306 and passed to the processor 302. The processor 302 then sends the burst transfer request to the burst-scheduling kernel 310. At the burst-scheduling kernel 310 in FIG. 4, the burst transfer request is received at the processor interface 402 and the included burst parameters are extracted at the burst parameter receiver 404. The parameters are queued at the burst parameter queue 406 and subsequently stored at the burst-scheduling unit 408 in a data structure 500 (FIG. 5A). The parameters are stored as an entry 506 in a record 504, where the entry 506 is associated with the burst described by the received parameters. Each record 504 has a plurality of entries 506, and each entry 506 is associated with a burst waiting in a buffer 204. As the number of bursts waiting in each buffer 204 may be different, the records 504 may be of varying sizes. As well, the plurality of entries 506 in each record 504 may be a linked list, as will be described hereinafter. Furthermore, the data structure 500 is made up of N records 504, where each record 504 corresponds to one of the N input ports 202 (FIG. 2). As illustrated in FIG. 5B, each entry 506 includes a destination field 508 for storing the destination parameter of the burst and a size field 510 for storing the transfer-time (size) parameter of the burst.
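By way of non-limiting illustration, the following is a minimal sketch, in Python with hypothetical names (the specification prescribes no code), of the data structure 500: one record 504 per input port, each record holding entries 506 with a destination field 508 and a size field 510.

    # Illustrative sketch of data structure 500; all names are hypothetical.
    from collections import deque

    N_INPUT_PORTS = 4  # assumed small value, for illustration only

    class Entry:
        def __init__(self, destination, size_slots):
            self.destination = destination  # destination field 508 (edge node)
            self.size_slots = size_slots    # size field 510 (transfer time, in slots)

    # data_structure_500[port] is the record 504 for that input port 202;
    # its entries 506 describe bursts waiting in the corresponding buffer 204.
    data_structure_500 = [deque() for _ in range(N_INPUT_PORTS)]
    data_structure_500[0].append(Entry(destination=2, size_slots=50))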
A generic memory device storing an array that has a time-varying number of data units must have sufficient capacity to store the expected maximum number of data units. If several arrays, each having a time-varying number of data units, share the generic memory device, then the allocation of the expected maximum number of data units for each array may be considered wasteful. The data structure 500 stores entries 506 containing parameters of burst transfer requests received from each of the input ports 202. The number of entries 506 for any particular input port 202 may vary widely with time, i.e., the number of entries 506 for the particular input port 202 may have a high coefficient of variation. However, the total number of entries 506 waiting in the data structure 500 and corresponding to the N input ports 202 would have a much smaller coefficient of variation when N is large, as would be expected in this case. The size of memory required for the data structure 500 can then be significantly reduced if the entries 506 are stored as N interleaved linked lists. Interleaved linked lists are well known in the art and are not described here. Essentially, interleaved linked lists allow dynamic sharing of a memory by X (where X>1) data groupings using X insertion pointers and X removal pointers. Thus, the interleaved linked lists are addressed independently but they share the same memory device.
The number, X, of data groupings in the data structure 500 is at least equal to the number of input ports, N, though X may be higher than N if traffic classes are introduced. X may also be higher than N if data from a source node to a sink node uses multiple paths through different core nodes (as will be described hereinafter), since the data of each path must be identified. Thus, the use of an interleaved linked list is preferred to the use of a memory structured to provide a fixed memory partition per traffic stream. A traffic stream is an aggregation of traffic from a particular source edge node 108 to a particular destination edge node 108, often resulting in a succession of bursts.
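The following is a minimal sketch of X interleaved linked lists sharing one memory pool, assuming a single array of cells with a free-list chain; all names are hypothetical and the cell layout is only one plausible realization.

    # X interleaved linked lists in one shared memory; each cell stores a
    # payload and a next-cell index, and free cells are themselves chained.
    class InterleavedLists:
        def __init__(self, pool_size, num_lists):
            self.next = list(range(1, pool_size)) + [-1]  # free-list chain
            self.payload = [None] * pool_size
            self.free = 0                                  # head of free chain
            self.head = [-1] * num_lists                   # X removal pointers
            self.tail = [-1] * num_lists                   # X insertion pointers

        def insert(self, list_id, item):
            cell = self.free
            if cell < 0:
                raise MemoryError("shared pool exhausted")
            self.free = self.next[cell]
            self.payload[cell], self.next[cell] = item, -1
            if self.tail[list_id] < 0:
                self.head[list_id] = cell                  # list was empty
            else:
                self.next[self.tail[list_id]] = cell
            self.tail[list_id] = cell

        def remove(self, list_id):
            cell = self.head[list_id]
            if cell < 0:
                return None                                # list is empty
            self.head[list_id] = self.next[cell]
            if self.head[list_id] < 0:
                self.tail[list_id] = -1
            self.next[cell], self.free = self.free, cell   # recycle the cell
            return self.payload[cell]

The point of the structure is that all X lists draw cells from the same pool, so only the aggregate occupancy, with its small coefficient of variation, determines the required memory size.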
The burst-scheduling unit 408 maintains two other data structures, namely a calendar (i.e., a time-space map) 600 (see FIG. 6) and an M-element array (i.e., a space-time map) 700 (see FIG. 7).
The calendar 600 is divided into K time slots 604, indexed from 1 to K. Some of the time slots 604 in the calendar 600 contain identifiers 606 of input ports 202. Those time slots 604 that do not contain input port identifiers 606 contain, instead, null identifiers 608. Each time slot 604 contains either an input port identifier 606 or a null identifier 608. The presence, in a given time slot 604, of a particular input port identifier 606 indicates to the master controller 210 that an input port 202 (an identifier of which is contained in the particular input port identifier 606) is available to transmit data (if it has waiting data) to the space switch 212 from the time corresponding to the given time slot 604 forward. Each of the time slots 604 in the calendar 600 is representative of a short time period, say 100 nanoseconds.
Thus, the instant of time at which a given input port 202 is determined to be available is represented by a time slot 604 in the calendar 600. This will typically force a rounding up of the actual availability time to the nearest time slot 604. The duration of a time slot 604 in the calendar 600, therefore, should be small enough to permit an accurate representation of time and should be large enough to reduce the mean number of times a memory holding the calendar 600 has to be accessed before finding an indication of an input port 202. Several time slots 604 in the calendar 600 contain null identifiers 608 (i.e., all the time slots 604 that do not contain an input port identifier 606) and these must be read, since the calendar 600 must be read sequentially. The memory holding the calendar 600 must, however, be a random-access memory, since an address (index) at which an input port identifier 606 is written is arbitrary.
Preferably, the number, K, of time slots 604 in the calendar 600 is significantly larger than the number of input ports 202, N (each port of the space switch 212 has an entry in the calendar, even if the port is not active for an extended period of time). In general, K must be greater than N, where N time slots 604 contain input port identifiers 606 and (K−N) time slots 604 contain null identifiers 608. Further, the duration of the calendar 600 must be larger than a maximum burst span. With a specified maximum burst span of 16 milliseconds, for example, an acceptable number (K) of time slots 604 in the calendar 600 is 250,000 with a slot time of 64 nanoseconds.
There is a requirement that the calendar 600 be time locked to the master time counter 314, as will be described hereinafter. In one embodiment of the present invention, each time slot 604 in the calendar 600 has a duration equivalent to a single tick of the master time counter 314. In other embodiments, each time slot 604 in the calendar 600 has a duration equivalent to an integer multiple of the duration of a single tick of the master time counter 314. Each port controller 206 has an awareness of time at the master time counter 314, so that scheduling information received at the port controller 206 may be used to send a burst to the space switch 212 at the time indicated by the scheduling information. This awareness may be derived from access to a clock bus or through a time locked local counter.
In order to speed up the process, the calendar 600 may be implemented in multiple memory devices. For example, a calendar of 262,144 (2^18) time slots 604 can be implemented in 16 memory devices, each having a capacity to store 16,384 time slots 604. Addressing a time slot 604 in a multiple-memory calendar is known in the art.
In the M-element array 700, each element 704 corresponds to one of the output ports 208. Each element 704 in the M-element array 700 holds a state-transition-time indicator 706. The state-transition-time indicator 706 is an index of a time slot 604 in the calendar 600 representative of a point in time at which the respective output port 208 will be available to transmit data. If, for instance, the calendar 600 has sixteen thousand time slots 604 (i.e., K=16,000), each element 704 in the M-element array 700 may be two bytes long (i.e., capable of holding a binary representation of a time slot index as high as 65,536). Where each of the time slots 604 is 100 nanoseconds long, a sixteen thousand slot calendar 600 may accommodate bursts having a length up to 1.6 milliseconds (i.e., 16 megabits at ten gigabits per second) without having to wrap around the current time when writing the availability of the input port 202 to the calendar 600.
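A minimal sketch of both maps follows, with illustrative values assumed for K, N and M; the initialization mirrors the initial conditions discussed later in this description (the first N slots hold the N port identifiers, and every output port is available at the first slot).

    # Illustrative sketch of the calendar 600 (time-space map) and the
    # M-element array 700 (space-time map); values are assumptions.
    K = 16_000          # number of calendar time slots 604, e.g. 100 ns each
    N = 4               # number of input ports 202
    M = 4               # number of output ports 208
    NULL_ID = -1        # stands in for a null identifier 608

    # calendar[k] holds an input port identifier 606 or a null identifier 608
    calendar = [NULL_ID] * K
    for port in range(N):      # initially, all input ports are available
        calendar[port] = port

    # m_element_array[output_port] is a state-transition-time indicator 706:
    # the calendar index at which that output port becomes available
    m_element_array = [0] * M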
To examine scheduling in detail, we may first assume that the master controller 210 has already been operating, that is, assume that burst transfer requests have been satisfied and bursts are therefore flowing from the input ports 202 to the output ports 208 of the core node 102.
The burst-scheduling unit 408 scans the calendar 600 to detect a future time slot 604 containing an input port identifier 606 (step 802), resulting in a detected time slot 604A. The burst-scheduling unit 408 then communicates with the burst parameter queue 406 to acquire entries 506 (step 804) from the record 504, in the data structure 500 (FIG. 5A), that corresponds to the input port 202 identified in the input port identifier 606 in the detected time slot 604A. It is then determined whether there are entries 506 in the record 504 that corresponds to the identified input port 202 (step 805). Each of the entries 506 identifies a destination and, from the destination, the burst-scheduling unit 408 may deduce an output port 208. If there are entries to schedule (i.e., waiting burst requests), the burst-scheduling unit 408 extracts a state-transition-time indicator 706 (step 806) from each element 704, in the M-element array 700 (FIG. 7), that corresponds to an output port 208 deduced from destinations identified by the acquired entries 506. The burst-scheduling unit 408 then determines a “gap” (step 808) by subtracting the index of the detected time slot 604A from the index of the time slot found in each state-transition-time indicator 706. Each gap represents a time difference between a time at which the input port 202 is available and a time at which the respective output port 208, requested in the respective burst transfer request, is available. The burst-scheduling unit 408 does this for each of the acquired entries 506 for the input port 202. Each entry 506 identifies a single burst transfer request. The burst-scheduling unit 408 then selects the burst transfer request corresponding to the minimum gap (step 810). As will be mentioned hereinafter, to simplify circuitry, the step of acquiring entries 506 from the record 504 (step 804) may only require acquisition of a limited number of entries 506.
If the gap of the selected burst transfer request is positive, then the input port 202 is available before the output port 208. The time slot index identified in the state-transition-time indicator 706 corresponding to the availability of the output port 208 that was requested for the selected burst transfer request is then designated as a “scheduled time slot.” If the gap of the selected burst transfer request is negative, then the input port 202 is available after the output port 208. The time slot index in which the input port identifier 606 was detected in step 802 (corresponding to the time when the input port 202 is available) is then designated as the scheduled time slot. The burst-scheduling unit 408 then transmits scheduling information (the index of the scheduled time slot and the identity of the burst transfer request) to the processor 302 (step 812) via the schedule transmitter 410 and the processor interface 402. When determining a minimum gap in step 810, a negative gap is preferred to a positive gap because use of the input port 202 may begin at the time corresponding to the detected time slot 604A, as the negative gap indicates that the requested output port 208 is already available.
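A minimal sketch of steps 802 through 812 follows, assuming entries whose destinations have already been mapped to output ports; the function name and the entry layout are hypothetical.

    # Illustrative sketch of the gap computation and burst selection.
    from collections import namedtuple

    # dest_port: the output port 208 deduced from the destination field 508
    Entry = namedtuple("Entry", ["dest_port", "size_slots"])

    def schedule_for_port(t, port, records, m_element_array, J=4):
        entries = list(records[port])[:J]          # step 804: at most J entries
        if not entries:
            return None                            # step 805: no waiting bursts
        gaps = [m_element_array[e.dest_port] - t for e in entries]  # steps 806-808
        i = gaps.index(min(gaps))                  # step 810: negative gap preferred
        # positive gap: the output port frees up later, so wait for it;
        # otherwise the output is already free when the input port is
        scheduled_slot = m_element_array[entries[i].dest_port] if gaps[i] > 0 else t
        return entries[i], scheduled_slot          # step 812: scheduling information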
The burst-scheduling unit 408 then updates the calendar 600 and the M-element array 700 (step 814). FIG. 9 illustrates steps of the update method of step 814. The burst-scheduling unit 408 first sums the index of the scheduled time slot and the transfer-time determined from the size field 510 of the selected burst transfer request (step 902) and writes the input port identifier 606 of the selected burst transfer request in the time slot 604 indexed by the sum (step 904). The writing of the input port identifier 606 effectively identifies, to the burst-scheduling unit 408, the time at which the input port 202 will be available after transferring the burst corresponding to the selected burst transfer request. Notably, only one input port identifier 606 may occupy a single time slot 604. Consequently, if another input port identifier 606 is already present in the time slot 604 indexed by the sum, the burst-scheduling unit 408 will write to the next available time slot 604. After writing the input port identifier 606 to the time slot 604 indexed by the sum, the burst-scheduling unit 408 writes a null identifier 608 in the scheduled time slot (step 906).
Subsequently, or concurrently, the burst-scheduling unit 408 writes a state-transition-time indicator 706 to the M-element array 700 (step 908) in the element 704 corresponding to the output port 208 of the selected burst transfer request. The state-transition-time indicator 706 is an index of the time slot 604 indexed by the sum determined in step 902. As will be apparent to a person skilled in the art, pipelining techniques may also be used to reduce processing time.
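The update of FIG. 9 (steps 902 through 908) may be sketched as follows, again with hypothetical names; the modulo-K addition anticipates the wrap-around adder behavior described later in connection with FIG. 22.

    # Illustrative sketch of the calendar and map update of FIG. 9.
    def update_maps(calendar, m_element_array, port, output_port,
                    scheduled_slot, size_slots, NULL_ID=-1):
        K = len(calendar)
        release = (scheduled_slot + size_slots) % K  # step 902: adder wraps mod K
        slot = release
        while calendar[slot] != NULL_ID:             # only one identifier per slot,
            slot = (slot + 1) % K                    # so defer to the next free slot
        calendar[slot] = port                        # step 904: port free again here
        calendar[scheduled_slot] = NULL_ID           # step 906: clear scheduled slot
        m_element_array[output_port] = release       # step 908: index of the sum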
If, as determined in step 805, there are no entries to schedule (i.e., no waiting burst requests), the burst-scheduling unit 408 generates an artificial burst (step 816), where the size of the artificial burst is the “size of the selected burst” as far as step 902 is concerned. The result of this generation of an artificial burst is that (in step 814) the input port identifier 606 is written to a deferred time slot 604.
The processor 302, having received the scheduling information, transmits to the appropriate port controller 206, via the input interface 306, scheduling information to indicate a time at which to begin sending the burst corresponding to the selected burst transfer request to the space switch 212. The processor 302 also sends scheduling information (input-output configuration instructions) to the slave space switch controller 214 via the switch interface 312.
As the above assumes that the master controller 210 has already been operating, it is worth considering initial conditions, for the calendar 600 especially. As all of the input ports 202 are available initially, yet only one input port identifier 606 may occupy each time slot 604, the first N time slots 604 may be occupied by the input port identifiers 606 that identify each of the N input ports 202. Initially, the data structure 500 is clear of burst transfer requests and the state-transition-time indicator 706 present in each element 704 of the M-element array 700 may be an index of the first time slot 604 in the calendar 600.
When an input port 202 is determined to be available, i.e., when the input port identifier 606 is read from a detected time slot 604A (step 802), the corresponding record 504 in the data structure 500 is accessed to acquire entries 506. If the corresponding record 504 is found to be empty, the burst-scheduling unit 408 writes a null identifier 608 in the detected time slot 604A and writes the input port identifier 606 at a deferred time slot. The deferred time slot may be separated from the detected time slot 604A by, for example, 128 time slots. At 100 nanoseconds per time slot 604, this would amount to a delay of about 13 microseconds.
If the M-element array 700 (FIG. 7) can only respond to a single read request at a time, the requests to read each state-transition-time indicator 706 from the elements 704 will be processed one after the other. To conserve time, then, it may be desirable to maintain multiple identical copies of the M-element array 700. Where multiple copies are maintained, extraction of a state-transition-time indicator 706 from elements 704 in step 806 may be performed simultaneously. It is preferable that the writing of a particular state-transition-time indicator 706 to a given element 704 of each copy of the M-element array 700 (step 908) be performed in a parallel manner.
While maintaining multiple identical copies of the M-element array 700 conserves time, it does so at the cost of memory. Thus, the number of entries 506 acquired in step 804 should be limited to a value, J. If J entries 506 are acquired in step 804, then there is only a requirement for J identical copies of the M-element array 700. It is preferred that J not exceed four.
When the space switch 212 has a relatively high number of ports (input and output), the master controller 210, and in particular the burst-scheduling kernel 310, may take advantage of a parallel processing strategy to further conserve processing time. Such a parallel processing strategy may, for instance, involve considering a 64 by 64 space switch (64 input ports, 64 output ports) as comprising an arrangement of four 16 by 16 space switches. However, so that each input may be connected to any output, four arrangements must be considered. An exemplary configuration 1000 for considering these arrangements is illustrated in FIG. 10. The exemplary configuration 1000 includes four input port groups (sub-sets) 1002A, 1002B, 1002C, 1002D (referred to individually or collectively as 1002) and four output port groups (sub-sets) 1004A, 1004B, 1004C, 1004D (referred to individually or collectively as 1004). Each input port group includes 16 input ports and each output port group includes 16 output ports.
Four processors may perform scheduling for the 64 by 64 space switch, where each processor schedules on behalf of one input port group 1002. A scheduling session may be divided into as many scheduling time periods as there are processors. For each scheduling time period, a given processor (scheduling on behalf of one input port group 1002) will schedule only those connections destined for a particular output port group 1004. The output port group changes after every scheduling time period such that, by the end of the scheduling session, all four output port groups 1004 have been considered for connections from the input port group 1002 corresponding to the given processor. The state of the exemplary configuration 1000 at a particular scheduling time period is illustrated in FIG. 10. The intersection of the output port group 1004 with the corresponding input port group 1002 for the particular scheduling time period is identified with a bold border.
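One plausible rotation achieving this, sketched below, has processor p serve output port group (p + period) mod 4 in each scheduling time period; the specification does not fix a particular rotation, so this is an assumption for illustration.

    # Illustrative rotation: every input port group meets every output port
    # group exactly once per scheduling session, with no two processors
    # contending for the same output port group in any period.
    NUM_GROUPS = 4

    def output_group_for(processor, period):
        return (processor + period) % NUM_GROUPS

    for period in range(NUM_GROUPS):
        assignment = [output_group_for(p, period) for p in range(NUM_GROUPS)]
        print(f"period {period}: processor -> output group {assignment}")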
A parallel processing data structure 1100, which is an alternative to the data structure 500 illustrated in FIG. 5A, is illustrated in FIG. 11. Each of the N records 1104 in the parallel processing data structure 1100 is divided into sub-records, where each sub-record in a given record 1104 corresponds to a single output port group 1004. Parameters of received burst transfer requests are stored as entries 506 in a record 1104 according to the input port 202 and in a sub-record according to the output port group 1004. The sub-records that correspond to the output port groups 1004 are illustrated in FIG. 11 as a number of rows 1102A, 1102B, 1102C, 1102D.
When a given processor of the parallel processors in the burst-scheduling unit 408 scans the calendar 600 to detect a future time slot 604 containing an input port identifier 606 (step 802), the input port identifier 606 must be from the input port group 1002 to which the given processor corresponds. The given processor then communicates with the burst parameter queue 406 to acquire entries 506 (step 804) from the parallel processing data structure 1100. The entries 506 are acquired from the record 1104 that corresponds to the input port 202 identified in the input port identifier 606 in the detected time slot 604 and, furthermore, only from the sub-record corresponding to the output port group 1004 under consideration by the given processor in the current scheduling time period. In FIG. 11, the row 1102A of sub-records corresponding to the output port group 1004 under consideration by the given processor associated with a particular input port group 1002A (which includes input ports N-3, N-2, N-1 and N) is identified with a bold border.
A hub and spoke data network 1200 is illustrated in FIG. 12, including a bufferless core node 1210X in place of the core node 102. In the data network 1200, a number of traffic sources 104A, 104B, 104C, . . . , 104N (referred to individually or collectively as 104) connect, via the edge nodes 108 and the bufferless core node 1210X, to a number of traffic sinks 106A, 106B, 106C, . . . , 106M (referred to individually or collectively as 106). In practice, the traffic sources 104 and the traffic sinks 106 are integrated, for instance, as a personal computer. A space switch and space switch controller are maintained at the bufferless core node 1210X.
An edge node 108, typical of the edge nodes 108 in FIG. 12, is illustrated in FIG. 13. Traffic is received from the traffic sources 104 or sent to the traffic sinks 106 at traffic interfaces 1302A, 1302B, 1302C (referred to individually or collectively as 1302). The traffic interfaces 1302 connect to buffers 1304A, 1304B, 1304C (referred to individually or collectively as 1304). The buffers 1304 are controlled by buffer controllers 1306A, 1306B, 1306C (referred to individually or collectively as 1306) with regard to the timing of passing traffic to a core interface 1308X that subsequently passes the traffic to the bufferless core node 1210X. The buffer controllers 1306 also connect to the core interface 1308X for sending, to the bufferless core node 1210X, burst transfer requests in a manner similar to the manner in which the port controllers 206 send burst transfer requests to the master controller 210 in FIG. 2. The core interface 1308X maintains a connection to a slave time counter 1314 for time locking with a master time counter in a master controller.
At the bufferless core node 1210X, illustrated in detail in FIG. 14, a space switch 1412 connects N input ports 1402A, 1402B, 1402C, . . . , 1402N (referred to individually or collectively as 1402) to M output ports 1408A, 1408B, 1408C, . . . , 1408M (referred to individually or collectively as 1408) under control of a slave space switch controller 1414. Each of the N input ports 1402 is arranged to send burst transfer requests received from the edge nodes 108 to a master controller 1410 and to send burst traffic to the space switch 1412. If, for instance, a particular input port 1402 is arranged to receive a Wavelength Division Multiplexed (WDM) signal having 16 channels, one channel (i.e., one wavelength) may be devoted to the transfer of burst transfer requests from the subtending edge node 108 to the master controller 1410. As in the core node 102 of FIG. 2, the master controller 1410 passes scheduling information to the slave space switch controller 1414.
The master controller 1410 may consult the edge nodes 108, via the output ports 1408, to perform conventional operational and maintenance functions. However, to avoid consulting the edge nodes 108, edge-to-edge rate allocations may be introduced and updated as the need arises. The interval between successive updates may vary between 100 milliseconds and several hours, which is significantly larger than a mean burst duration.
In overview, a traffic interface 1302A at a source edge node 108A receives a burst from a subtending traffic source 104A. The burst is stored in the buffer 1304A. Parameters indicating the size and destination (e.g., a destination edge node 108E) of the burst are communicated from the buffer controller 1306A, via the core interface 1308X, to the bufferless core node 1210X in a burst transfer request. At the bufferless core node 1210X, the burst transfer request is received at one of the input ports 1402 and sent to the master controller 1410. The master controller 1410 executes a burst scheduling algorithm to generate scheduling information and communicates relevant parts of the generated scheduling information to the edge nodes 108. The master controller 1410 also communicates relevant parts of the generated scheduling information to the slave space switch controller 1414. At the edge node 108A, the buffer 1304A sends the burst to the bufferless core node 1210X, via the core interface 1308X, according to the scheduling information received at the buffer controller 1306A. At the space switch 1412 of the bufferless core node 1210X, a connection is established between the input port 1402A and the output port 1408B such that the burst is successfully transferred from the source edge node 108A to the destination edge node 108E.
As will be apparent to a person skilled in the art, the duty of routing burst transfer requests to the master controller 1410 and bursts to the space switch 1412 may present a problem to the design of the input ports 1402 if the space switch 1412 is optical. One solution to this problem is to relieve the input ports 1402 of this duty. In a version of the data network 1200 of FIG. 12, which is altered to suit an optical space switch and illustrated in FIG. 15, a bufferless core node 1210Z is collocated with an edge node 108J at a location 112. Additionally, a stand-alone master controller 1610Z exists separate from the bufferless core node 1210Z. The collocated edge node 108J maintains a connection to the stand-alone master controller 1610Z for transferring burst transfer requests, received from other edge nodes 108 (via the bufferless core node 1210Z) and the subtending traffic sources 104, to the space switch controller in the bufferless core node 1210Z. In this solution, it is necessary that the edge nodes 108 be aware that burst transfer requests are to be sent to the collocated edge node 108J. This solution avoids dedication of an entire wavelength to signaling, which typically has a low bit rate.
In FIG. 16, the collocated edge node 108J is illustrated in detail. Like the typical edge node 108 of FIG. 13, the collocated edge node 108J includes traffic interfaces 1602A, 1602B, 1602C, buffers 1604A, 1604B, 1604C, buffer controllers 1606A, 1606B, 1606C (referred to individually or collectively as 1606) and a core interface 1608X. The core interface 1608X also maintains a connection to a slave time counter 1614Z for time locking with a master time counter in the master controller 1610Z. However, in addition to the elements of the typical edge node 108 in FIG. 13, the collocated edge node 108J also includes a controller interface 1612 for sending burst transfer requests to the stand-alone master controller 1610Z. The buffer controllers 1606 communicate burst transfer requests to the controller interface 1612 rather than to the core interface 1608X, as is the case in the typical edge node 108 in FIG. 13. The core interface 1608X also communicates other burst transfer requests to the controller interface 1612, in particular, burst transfer requests received from other edge nodes 108. The stand-alone master controller 1610Z generates scheduling information based on the burst transfer requests and sends the scheduling information to the slave space switch controller in the bufferless core node 1210Z.
As illustrated in detail in FIG. 17, the stand-alone master controller 1610Z includes a processor 1702. The processor 1702 maintains connections to a memory 1704, an edge node interface 1706, a core node interface 1712 and a master time counter 1714. At the edge node interface 1706, the master controller 1610Z receives burst transfer requests from the collocated edge node 108J. The processor 1702 is also connected to a burst-scheduling kernel 1710. Based on the burst transfer requests received from the processor 1702, the burst-scheduling kernel 1710 determines appropriate timing for switching at the space switch at the bufferless core node 1210Z. According to the determined timing received from the burst-scheduling kernel 1710, the processor 1702 passes scheduling information to the bufferless core node 1210Z via the core node interface 1712. The processor 1702 also controls the timing of transmission of bursts, from the edge nodes 108 to the bufferless core node 1210Z, by transmitting scheduling information to the edge nodes 108 via the edge node interface 1706 and the collocated edge node 108J.
At the bufferless core node 1210Z, illustrated in detail in FIG. 18, a space switch 1812 connects N input ports 1802A, 1802B, 1802C, . . . , 1802N (referred to individually or collectively as 1802) to M output ports 1808A, 1808B, 1808C, . . . , 1808M (referred to individually or collectively as 1808) under control of a slave space switch controller 1814. Instead of requiring that the N input ports 1802 be arranged to send burst transfer requests from the edge nodes 108 to a master controller and send bursts to the space switch 1812, burst transfer requests pass through the bufferless core node 1210Z and are sent to the collocated edge node 108J. The collocated edge node 108J then forwards the burst transfer requests to the stand-alone master controller 1610Z, where scheduling information is generated. The scheduling information is received from the stand-alone master controller 1610Z by a master controller interface 1816. The slave space switch controller 1814 then receives the scheduling information from the master controller interface 1816.
The bufferless core node 1210Z need not be limited to a single space switch 1812. Especially where each input port 1802 and output port 1808 supports multiple channels over respective links to or from respective edge nodes 108, as is the case in WDM, the bufferless core node 1210Z may include an assembly of multiple parallel space switches (not shown). Each of the multiple space switches may require an associated burst-scheduling kernel, such as the burst-scheduling kernel 1710 in FIG. 17, to be located at the master controller 1610Z of the bufferless core node 1210Z. Alternatively, each of the multiple space switches may be associated with a unique burst-scheduling unit (see 408 in FIG. 4).
The space switches in the assembly of multiple parallel space switches operate totally independently. The traffic to a specific edge node 108 may, however, be carried by any of the channels of a multi-channel link (WDM fiber link) from a source edge node 108 to the bufferless core node 1210. Preferably, a load-balancing algorithm (not described herein) is used to balance the traffic and thus increase throughput and/or decrease scheduling delay.
Successive bursts to the same sink edge node 108 may be transferred using different channels (different wavelengths) and, hence, may be switched in different space switches in the bufferless core node 1210. However, the transfer of successive bursts to the same sink edge node 108 using different channels should not be expanded to include the transfer of successive bursts to the same sink edge node 108 using different links, where the delay differential between links (possibly several milliseconds) may complicate assembly of the bursts at the sink edge node 108.
Note that conventional WDM demultiplexers and WDM multiplexers are required at the input ports 1802 and output ports 1808 of a bufferless core node 1210 employing multiple parallel space switches. They are not illustrated in the figures, however, as their use is well known in the art.
An advantage of burst switching is the freedom to select a space switch on a per-burst basis, as long as a predetermined time separation (a microsecond or so) is provided between successive bursts of a single data stream. The time separation is required to offset the effect of propagation delay differentials present in different wavelengths of the same WDM signal.
Returning to FIG. 12, propagation delay may be considered in view of the data network 1200. If the edge node 108A is one kilometer away from the bufferless core node 1210X, scheduling information may take five microseconds to pass from the bufferless core node 1210X to the edge node 108A in an optical-fiber link. Similarly, a burst sent from the edge node 108A would take five microseconds to travel to the bufferless core node 1210X. A time period lasting five microseconds is represented in the calendar 600 by 50 time slots 604 of 100 nanoseconds each. It may be that, as a consequence of propagation delay, a burst may arrive at the bufferless core node 1210X after the time at which the burst was scheduled to be passing through the space switch 1412. Consequently, given knowledge, at the bufferless core node 1210X, of an estimate of a maximum round trip propagation delay associated with the edge nodes 108, scheduling can be arranged to take the propagation delay into account. For instance, the burst-scheduling kernel 1710 may schedule such that the earliest a burst may be scheduled, relative to a current time in the master time counter 1714, is at least the estimated maximum round trip propagation delay time into the future.
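A minimal sketch of this look-ahead rule follows, with a hypothetical function name and an assumed slot duration of 100 nanoseconds.

    # Illustrative look-ahead rule: the earliest schedulable slot lies at
    # least the maximum round-trip delay in the future.
    SLOT_NS = 100  # assumed duration of a time slot 604

    def earliest_schedulable_slot(current_slot, max_round_trip_ns, K):
        offset = -(-max_round_trip_ns // SLOT_NS)  # round up to whole slots
        return (current_slot + offset) % K

    # Example: an edge node 1 km away, about 10 us round trip over fiber
    print(earliest_schedulable_slot(0, 10_000, 16_000))  # -> slot 100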
Notably, propagation delay differential was not a problem in the core node 102 of FIG. 2, which had input buffers. The collocation of the collocated edge node 108J with the bufferless core node 1210Z in FIG. 15 removes concern of propagation delay differentials for traffic originating at the traffic sources 104A, 104B connected to the collocated edge node 108J. However, for the other edge nodes 108, a time locking scheme is required so that bursts may be correctly scheduled.
The propagation delay between the time at which a burst leaves one of the other edge nodes 108 (i.e., the edge nodes 108 that are not collocated with the bufferless core node 1210Z) and the time at which the burst arrives at the bufferless core node 1210Z may be different for each of the other edge nodes 108. To switch these bursts, without contention or a requirement for burst storage at the bufferless core node 1210Z, the other edge nodes 108 must be time locked to the bufferless core node 1210Z. A time locking technique, also called time coordination, is described in the applicant's U.S. patent application Ser. No. 09/286,431, filed on Apr. 6, 1999, and entitled “Self-Configuring Distributed Switch,” the contents of which are incorporated herein by reference. With time locking, the scheduling method in accordance with the present invention guarantees that bursts arrive to available resources at the bufferless core node 1210Z.
Given the collocation of the collocated edge node 108J with the bufferless core node 1210Z and the corresponding fact that all burst transfer requests of the bufferless core node 1210Z pass through the collocated edge node 108J, each other edge node 108 may “time lock” with the collocated edge node 108J.
The time locking may be performed using any one of a number of time locking schemes. In one such scheme, each edge node 108 includes at least one local time counter (e.g., the slave time counter 1314 of FIG. 13) of equal width W. In one embodiment of the present invention, a time locking request may be sent from a particular edge node 108E (FIG. 15), while noting the sending time (i.e., the value of the slave time counter at the particular edge node 108E when the time locking request is sent), to the master controller 1610Z. When the time locking request is received at the master controller 1610Z, the arrival time (i.e., the value of the master time counter 1714 at the arrival time) is noted. A time locking response is generated, including an indication of the arrival time, and sent to the particular edge node 108E. A time difference between the sending time and the arrival time is determined at the particular edge node 108E and used to adjust the slave time counter at the particular edge node 108E. In future, scheduling information is received at the particular edge node 108E from the stand-alone master controller 1610Z, for instance, “start sending burst number 73 at a time counter state 3564.” If the particular edge node 108E starts sending burst number 73 at slave time counter state 3564, the beginning of the burst will arrive at the bufferless core node 1210Z at master time counter state 3564. Preferably, the duration of each time counter cycle is equal and substantially larger than a maximum round-trip propagation delay from any edge node 108 to any core node 1210 in the data network 1200. Furthermore, the maximum round-trip propagation delay should be taken into account when performing scheduling at the stand-alone master controller 1610Z. Preferably, the counters related to the time locking scheme are included in the controller interface 1612 of the collocated edge node 108J of FIG. 16 and in the core interface of the generic edge node 108 of FIG. 13.
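A minimal sketch of the counter adjustment in this exchange follows; the function name is hypothetical, and the modulo-2^W arithmetic reflects the W-bit counters described above.

    # Illustrative time-lock adjustment: after the exchange, a burst sent at
    # slave counter state s arrives at the core at master counter state s.
    W = 32                    # assumed counter width in bits
    MASK = (1 << W) - 1       # counters count modulo 2^W

    def adjust_slave_counter(slave_now, sending_time, master_arrival_time):
        # the slave advances by the difference between the master's reading
        # at arrival and its own reading at sending
        offset = (master_arrival_time - sending_time) & MASK
        return (slave_now + offset) & MASK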
FIG. 19 illustrates the data network 1200 supplemented with an additional bufferless core node 1210Y. With the additional bufferless core node 1210Y, a flow control process, which operates at a higher level than the switch operations, may assign one of the bufferless core nodes 1210Z, 1210Y (referred to individually or collectively as 1210) to each traffic stream originating at a particular edge node 108, where a traffic stream is an aggregation of traffic with identical source and destination edge nodes 108. When a burst arrives at a given edge node 108, the given edge node 108 may send a burst transfer request to the core node (say bufferless core node 1210Z) assigned to the traffic stream of which the burst is part. Scheduling information is returned to the given edge node 108. The given edge node 108 may then send the burst to the assigned bufferless core node 1210Z according to the timing represented in the scheduling information. The additional bufferless core node 1210Y is illustrated as collocated with an edge node 108K at an additional location 114. An additional master controller 1610Y, corresponding to the additional bufferless core node 1210Y, is also present at the additional location 114.
An edge node 108 communicates with all core nodes 1210 in the sending and receiving modes. As such, the edge nodes 108 should be adapted to communicate with more than one bufferless core node 1210. This adaptation is shown for the collocated edge node 108J in FIG. 20. Notably different from the collocated edge node 108J as illustrated in FIG. 16 is the addition of a core interface 1608Y corresponding to the bufferless core node 1210Y. The core interface 1608Y corresponding to the bufferless core node 1210Y requires a connection to a slave time counter 1614Y. As will be apparent to a person skilled in the art, there may be many more than two bufferless core nodes 1210 in a data network and many more than eight edge nodes 108.
As stated above, there is a requirement that a slave time counter at a given edge node 108 be time locked to the master time counter of the master controller 1610 of the bufferless core node 1210. The scheduling information transmitted by the master controller 1610 to the edge nodes 108 is based on the time indication of the master time counter 1714 as it corresponds to the scheduled time slot in the calendar 600. The time slots 604 in the calendar 600 must, therefore, also be time locked to the master time counter 1714. The selection of the time counter cycle in use at the master time counter 1714 and the calendar cycle are important design choices. Where a master time counter 1714 counts using W bits, the duration of the master time counter cycle is 2^W multiplied by the duration of a period of a clock used to drive the master time counter. With W=32 and a clock period of 16 nanoseconds, for example, the number of counter states is about 4.29×10^9 and the duration of the master time counter cycle is more than 68 seconds. This is orders of magnitude higher than the round-trip propagation delay between any two points on Earth (assuming optical transmission).
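The arithmetic above can be checked directly, using the example values from the text.

    # Quick check of the master-time-counter cycle arithmetic.
    W = 32
    clock_ns = 16
    states = 1 << W                     # 4,294,967,296 states (~4.29e9)
    cycle_s = states * clock_ns / 1e9   # ~68.7 seconds per counter cycle
    print(states, cycle_s)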
Increasing the duration of the master time counter 1714 involves adding a few bits, resulting in a very small increase in hardware cost and in the transport of time locking signals across the network. By contrast, increasing the duration of the calendar 600 requires increasing the depth of a memory used to maintain the calendar 600 and/or increasing the duration of each time slot 604 in the calendar. The latter results in decreasing the accuracy of time representation, and hence in wasted time, as will be explained below.
If, for example, each time slot 604 has a duration of eight microseconds and the number of calendar time slots 604 is 65,536, the duration of the calendar 600 is more than 500 milliseconds. A time slot 604 of eight microseconds is, however, comparable with the duration of a typical burst. At 10 Gb/s, an eight-microsecond burst is about ten kilobytes long. It is desirable that the duration of each time slot 604 be a small fraction of the mean burst duration. A reasonable duration for a time slot 604 is 64 nanoseconds. However, if the duration of the calendar 600 is to be maintained at 500 milliseconds, the calendar 600 requires eight million slots. A compromise is to select a duration of the calendar 600 that is just sufficient to handle the largest possible burst and use an associated adder or cycle counter to remain cognizant of the calendar time relationship to the master time counter time. The largest burst duration would be imposed by a standardization process. In a channel of 10 Gb/s, a burst of one megabyte has a duration of less than one millisecond. A standardized upper bound of the burst length is likely to be even less than one megabyte in order to avoid delay jitter. Thus, the duration of the calendar 600 can be selected to be less than 16 milliseconds. With the duration of each time slot 604 set to 64 nanoseconds, the number of required time slots 604 would be about 262,144. This can be placed in four memory devices of 65,536 words each, a word corresponding to a time slot 604.
Relating a time slot 604 in the calendar 600 to the state of the master time counter 1714 is greatly simplified if the ratio of the number of master time counter states to the number of time slots 604 is a power of two, and the ratio of the duration of a time slot 604 to the period of the clock used to drive the master time counter is also a power of two. Notably, the number of master time counter states exceeds or equals the number of calendar slots, and the duration of a calendar slot exceeds or equals the clock period.
If the width of the master time counter is 32 bits, the width of a calendar address is 18 bits (2^18, i.e., 262,144 time slots 604), and the duration of a time slot 604 is four times the period of the clock used to drive the master time counter, then the duration of the master time counter cycle is 4,096 times the duration of the calendar 600. Reducing the width of the master time counter to 24 bits, with 262,144 calendar slots, a clock period of 16 nanoseconds and a duration of each time slot 604 of 64 nanoseconds, the duration of the master time counter 1714 becomes about 268.44 milliseconds, which is 16 times the calendar period of about 16.77 milliseconds. The master clock period is selected to be reasonably short to ensure accurate time representation for time locking purposes.
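For illustration only, the following Python sketch relates a master time counter state to a calendar slot and a relative calendar cycle under the power-of-two ratios just described; the function and variable names are hypothetical:

    # Relating a master time counter state to a calendar slot and a relative
    # calendar cycle (illustration only; assumes the 24-bit example above).
    COUNTER_BITS = 24            # master time counter width
    CALENDAR_BITS = 18           # calendar address width (262,144 slots)
    CLOCKS_PER_SLOT_BITS = 2     # a 64 ns slot spans four 16 ns clock periods

    def locate(counter_state):
        global_slot = counter_state >> CLOCKS_PER_SLOT_BITS
        slot = global_slot & ((1 << CALENDAR_BITS) - 1)   # index within the calendar
        cycle = global_slot >> CALENDAR_BITS              # relative calendar cycle, 0 to 15
        return cycle, slot

With these widths, the cycle field spans four bits, consistent with the master time counter cycle being 16 times the calendar cycle.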
FIG. 21 depicts a master time counter cycle 2102 and a calendar cycle 2104 for an exemplary case wherein a duration 2112 of the master time counter cycle 2102 is exactly four times a duration 2114 of the calendar cycle 2104. Time locking of the calendar to the master time counter is essential, as indicated in FIG. 21.
The scheduling of future burst transfers based on burst transfer requests received from a specific edge node 108, associated with a specific input port of a bufferless core node 1210, is illustrated in FIG. 22. Changes in the state of a calendar 2200 are illustrated as they correspond to the specific input port. In particular, the calendar 2200 has 32 time slots and is shown as four calendar cycles 2200S, 2200T, 2200U, 2200V. In this example, the duration of the master time counter is four times the duration of the calendar 2200. A time slot 2202A contains an identifier of the input port, typically an input port number. Each other time slot in the calendar 2200 contains either an input port identifier or a null identifier although, for simplicity, these identifiers are not shown. As the calendar 2200 is scanned, the time slot 2202A is encountered and an input port identifier 2206 is recognized. The burst scheduling method of FIG. 8 is then performed, along with the map maintenance method of FIG. 9. These methods result in the input port identifier 2206 being replaced with a null identifier in time slot 2202A and the input port identifier 2206 being written in time slot 2202B. The methods of FIGS. 8 and 9 are repeated when the input port identifier 2206 is encountered in time slot 2202B, resulting in a null identifier in time slot 2202B and the input port identifier 2206 being written in time slot 2202C. When the input port identifier 2206 is encountered in time slot 2202C, the input port identifier 2206 is written in time slot 2202D, which is in the second calendar cycle 2200T and has a numerically smaller index in the calendar 2200. The index of time slot 2202D is smaller than the index of time slot 2202C because the adder determining the index of the time slot in which to write the input port identifier 2206 (step 902) has a word length that exactly corresponds to the number of time slots in the calendar 2200 (note that the calendar length is a power of two). When the input port identifier 2206 is encountered in time slot 2202I in the fourth calendar cycle 2200V, the input port identifier 2206 is written to time slot 2202X in the first calendar cycle 2200S. Scheduling availability of the input port in the first calendar cycle 2200S means that the input port will not be available until the master time counter cycle subsequent to the master time counter cycle in which time slot 2202I was encountered.
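For illustration only, the wrap-around behaviour of the adder of step 902 may be sketched as follows in Python; the names are hypothetical:

    # The adder of step 902 has a word length equal to log2 of the calendar
    # length, so a sum past the end of the calendar wraps to a numerically
    # smaller index, i.e., into the next calendar cycle (illustration only).
    CALENDAR_SLOTS = 32              # as in the example of FIG. 22
    MASK = CALENDAR_SLOTS - 1        # five-bit adder word

    def next_write_slot(current_slot, offset_slots):
        # truncation to the adder word length realizes the wrap-around
        return (current_slot + offset_slots) & MASK

    print(next_write_slot(28, 9))    # 37 wraps to slot 5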
It is emphasized that the scheduling procedure described above enables scheduling bursts for a look-ahead period as large as the duration of the master time counter. Where the duration of the master time counter is 268 milliseconds (2^24 master time counter states, 16 nanosecond clock period), for example, bursts of cumulative length as high as 200 megabytes can be scheduled at 10 Gb/s.
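A quick check of this bound, for illustration only and assuming full channel utilization:

    # Upper bound on the schedulable cumulative burst length (illustration only).
    lookahead_s = (2 ** 24) * 16e-9        # master time counter duration, about 0.268 s
    rate_bps = 10e9                        # 10 Gb/s channel
    bound_mb = lookahead_s * rate_bps / 8 / 1e6
    print(bound_mb)                        # about 335 megabytes, so 200 megabytes fits comfortably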
To compute a scheduled time indication, i.e., the master time counter state corresponding to the scheduled time slot, to be reported to a respective edge node, an indication of the relative calendar cycle number with respect to the master time counter cycle must be provided along with the scheduled time slot. In the example of FIG. 22, this indication is 0, 1, 2 or 3. The scheduled time indication is then the cycle indication, left-shifted by five bits (log₂ 32), added to the scheduled time slot from the calendar. For example, if time slot 2202G, which is at time index 20 in the third calendar cycle (relative calendar indication 2), is the scheduled time slot, the scheduled time indication is 2×32+20=84. The scheduled time indication that is communicated to the requesting edge node is 84.
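For illustration only, this computation may be expressed in Python as follows; the function name is hypothetical:

    # Computing the scheduled time indication from the relative calendar cycle
    # and the scheduled time slot (illustration only).
    SLOTS_PER_CALENDAR = 32                  # as in the example of FIG. 22

    def scheduled_time_indication(cycle, slot):
        shift = SLOTS_PER_CALENDAR.bit_length() - 1   # log2(32) = 5 bits
        return (cycle << shift) + slot

    print(scheduled_time_indication(2, 20))  # 2 x 32 + 20 = 84, as in the text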
A portion of the network capacity in the data network 1200 may be dedicated to relatively well-behaved, that is, non-bursty, traffic. To this end, a master controller may include a second scheduler dedicated to more traditional circuit switching. Like the master controller 1610Z illustrated in FIG. 17, the master controller 1610Y illustrated in FIG. 23 includes a processor 2302. The processor 2302 maintains connections to a memory 2304, an edge node interface 2306, a core node interface 2312 and a master time counter 2314. The master controller 1610Y illustrated in FIG. 23 also includes a circuit-scheduling kernel 2316 for scheduling transfers between edge nodes 108 on a longer term basis.
In one embodiment of the present invention, the edge nodes 108 (or the port controllers 206) may perform some processing of bursts. This processing may include expanding bursts to a length that is an integer number of segments, or aggregating small bursts.
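For illustration only, the expansion of a burst to an integer number of segments may be sketched as follows; the segment size used here is a hypothetical value, not taken from the description above:

    # Expanding a burst to a whole number of segments (illustration only;
    # the segment size is a hypothetical value).
    SEGMENT_BYTES = 1024

    def expanded_length(burst_bytes):
        segments = -(-burst_bytes // SEGMENT_BYTES)   # ceiling division
        return segments * SEGMENT_BYTES

    print(expanded_length(2500))   # 2500 bytes expands to 3072 (three segments)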
Notably, the present invention is applicable without dependence on whether switching in the data network 1200 is electrical or optical and without dependence on whether transmission in the data network 1200 is wireline or wireless. The optical switching example is particularly instructive, however, in that, given recent developments in Dense Wavelength Division Multiplexing, a link between an edge node 108 and a bufferless core node 1210 may include multiple (e.g., 32) channels. If the data network 1200 is to work as described in conjunction with FIG. 12, one of the multiple channels may be completely dedicated to the transfer of burst transfer requests. However, the transfer of burst transfer requests represents a very small percentage of the available capacity of such a channel, and the unused capacity of the dedicated channel is wasted. This is why co-location of an edge node 108 with a bufferless core node 1210 is used. The optical example is also well suited to the consideration herein that the core node 1210X (FIG. 14) is bufferless, as an efficient method for buffering optically received data has not yet been devised.
Advantageously, the present invention allows bursts that switch through the core nodes to occupy the core nodes, and the associated space switches, nearly continuously and with virtually no data loss. As such, the network resources are used more efficiently.
Other modifications will be apparent to those skilled in the art and, therefore, the invention is defined in the claims.