CROSS REFERENCE TO RELATED PATENT APPLICATIONS

The present invention is a continuation-in-part application of U.S. patent application Ser. No. 09/563,561 filed May 3, 2000, which is a continuation-in-part application of U.S. patent application Ser. No. 09/481,902 filed Jan. 13, 2000, which is a continuation of U.S. patent application Ser. No. 08/992,763 filed Dec. 17, 1997 for: “Multiprocessor Computer Architecture Incorporating a Plurality of Memory Algorithm Processors in the Memory Subsystem”, assigned to SRC Computers, Inc., Colorado Springs, Colo., assignee of the present invention, the disclosures of which are herein specifically incorporated by this reference.[0001]
BACKGROUND OF THE INVENTION

The present invention relates, in general, to the field of computer architectures incorporating multiple processing elements such as multi-adaptive processors (“MAP™” is a trademark of SRC Computers, Inc., Colorado Springs, Colo.). More particularly, the present invention relates to systems and methods for accelerating web site access and processing utilizing a computer system incorporating reconfigurable processors operating under a single operating system image.[0002]
Presently, many different forms of electronic business and commerce are transacted by means of individual computers coupled to the Internet. Because such commerce is inherently computer-based, many electronic commerce (“e-commerce”) web sites employ various methods to allow their content to be varied based on the demographics of the particular user.[0003]
This demographic information may be obtained in a variety of ways, with some sites simply requesting that the site visitor respond to one or more questions while others may employ more sophisticated techniques such as “click stream” processing. In this latter instance, the prospective interests of the site visitor are inferred by determination and analysis of, for example, the sites he has previously visited. In either instance, however, this data must be processed by the site such that the web page content may be altered in an effort to maximize its appeal to that particular site visitor with a view toward ultimately maximizing site revenue.[0004]
Since studies have shown that the average Internet user will wait but a maximum of twenty seconds or so for a web page to be updated, it is vitally important that the updating of the page contents be completed as rapidly as possible. Consequently, a great deal of effort is placed into maximizing the software performance of algorithms that process the user demographic data. Currently, all known web servers that accomplish this processing employ industry standard microprocessor based servers and, as a result, their maximum performance is thereby limited by the limitations inherent in the standard microprocessor “load/store” architecture.[0005]
SUMMARY OF THE INVENTION

SRC Computers, Inc., assignee of the present invention, is an industry leader in the design and development of multiprocessor computer systems, including those employing industry standard processors together with multi-adaptive processors (“MAP™”) utilizing, for example, field programmable gate arrays functioning as the programmable MAP elements.[0006]
Particularly disclosed herein is a system and method for accelerating web site access and processing utilizing a multiprocessor computer system incorporating one or more microprocessors and a number of reconfigurable processors operating under a single operating system image. In an exemplary embodiment, a web site may be serviced with a hybrid multiprocessor computer system that contains both industry standard microprocessors and one or more reconfigurable processors that share all the system's resources and operate under a single operating system image (although, in an alternative embodiment, cluster management software may be used to make a cluster of microprocessors appear to the user as a single copy of the operating system). In such a system, demographic data processing algorithms may be loaded into the reconfigurable processors, which may be provided in the form of specially adapted field programmable gate arrays (“FPGAs”). In this manner, the appropriate algorithm may be implemented in hardware gates (as opposed to software) which can process the data up to 1000 times faster than a standard microprocessor based server.[0007]
As an exemplary implementation, one particularly efficacious hybrid computing system is the SRC Computers, Inc. SRC-6 incorporating multi-adaptive processors (MAP). In such a system, the algorithms loaded into the MAP elements to process the data can be completely changed in under 100 msec. This allows for the possibility of quickly altering even the processing algorithm without significantly delaying the site visitor. The ability to change the algorithm, coupled with highly accelerated processing times allows for more complex algorithms to be employed leading to even more refined web page content adjustment.[0008]
Through the use of such a hybrid system operating under a single operating system image, a standard operating system, such as Solaris™ (trademark of Sun Microsystems, Inc., Palo Alto, Calif.) may be employed and can be easily administered, a feature which is important in such e-commerce based applications. Since the MAP elements are inherently tightly-coupled into the system and are not an attached processor located, for example, on an input/output (“I/O”) port, their effectiveness and ease of use is maximized.[0009]
Demographic data processing is merely an example of how the unique capabilities of such reconfigurable processing systems can be utilized to accelerate e-commerce, and “secure socket” operation is yet another possible application. In this instance, such operations can often consume as much as 80% of the typical, traditional site server microprocessor cycles. SRC Computers, Inc. has demonstrated that reconfigurable processor based systems, such as the SRC-6, can perform decryption algorithms up to 1000 times faster than a conventional microprocessor thereby also allowing for faster web site access while concomitantly allowing more robust data encryption techniques to be employed. Similarly significant speed advantages could be realized in, for example, implementing database searches wherein the search algorithms can be directly implemented in the hardware of the reconfigurable system providing two to three orders of magnitude execution time improvements over conventional microprocessor based solutions.[0010]
In general, the use of hybrid computer systems with a single system image of the operating system for web site hosting allows the site to employ user selected hardware accelerated versions of software algorithms currently implemented in a wide array of e-commerce related functions. This results in an easy to use system with significantly faster processing capability which translates into shorter site visitor waiting periods.[0011]
BRIEF DESCRIPTION OF THE DRAWINGS

The aforementioned and other features and objects of the present invention and the manner of attaining them will become more apparent and the invention itself will be best understood by reference to the following description of a preferred embodiment taken in conjunction with the accompanying drawings, wherein:[0012]
FIG. 1 is a simplified, high level, functional block diagram of a multiprocessor computer architecture employing multi-adaptive processors (“MAP™”) in accordance with the disclosure of the aforementioned patent applications in an alternative embodiment wherein direct memory access (“DMA”) techniques may be utilized to send commands to the MAP elements in addition to data;[0013]
FIG. 2 is a simplified logical block diagram of a possible computer application program decomposition sequence for use in conjunction with a multiprocessor computer architecture utilizing a number of MAP elements located, for example, in the computer system memory space, in accordance with a particular embodiment of the present invention;[0014]
FIG. 3 is a more detailed functional block diagram of an exemplary individual one of the MAP elements of the preceding figures and illustrating the bank control logic, memory array and MAP assembly thereof;[0015]
FIG. 4 is a more detailed functional block diagram of the control block of the MAP assembly of the preceding illustration illustrating its interconnection to the user FPGA thereof in a particular embodiment;[0016]
FIG. 5 is a functional block diagram of an alternative embodiment of the present invention wherein individual MAP elements are closely associated with individual processor boards and each of the MAP elements comprises independent chain ports for coupling the MAP elements directly to each other;[0017]
FIG. 6 is a functional block diagram of an individual MAP element wherein each comprises on board memory and a control block providing common memory DMA capabilities;[0018]
FIG. 7 is an additional functional block diagram of an individual MAP element illustrating the on board memory function as an input buffer and output FIFO portions thereof;[0019]
FIG. 8 is a more detailed functional block diagram of an individual MAP element as illustrated in FIGS. 6 and 7;[0020]
FIG. 9 is a user array interconnect diagram illustrating, for example, four user FPGAs interconnected through horizontal, vertical and diagonal buses to allow for expansion in designs that exceed the capacity of a single FPGA;[0021]
FIG. 10 is a functional block diagram of another alternative embodiment of the present invention wherein individual MAP elements are closely associated with individual memory arrays and each of the MAP elements comprises independent chain ports for coupling the MAP elements directly to each other;[0022]
FIGS. 11A and 11B are timing diagrams respectively illustrating input and output timing in relationship to the system clock (“Sysclk”) signal;[0023]
FIG. 12 is a simplified illustration of a representative operating environment for the system and method of the present invention including a typical web site server as would be replaced by an SRC-6 reconfigurable server;[0024]
FIG. 13 is a flowchart illustrating a conventional data processing sequence in a conventional application of the typical web site server depicted in the preceding figure; and[0025]
FIG. 14 is a corresponding flowchart illustrating the processing of demographic or other data utilizing a reconfigurable server for implementing the system and method of the present invention and which results in significantly improved access and data processing times.[0026]
DESCRIPTION OF A PREFERRED EMBODIMENT

With reference now to FIG. 1, a multiprocessor computer 10 architecture in accordance with one embodiment of the present invention is shown. The multiprocessor computer 10 incorporates N processors 12_0 through 12_N which are bi-directionally coupled to a memory interconnect fabric 14. The memory interconnect fabric 14 is then also coupled to M memory banks comprising memory bank subsystems 16_0 (Bank 0) through 16_M (Bank M). N number of multi-adaptive processors (“MAP™”) 112_0 through 112_N are also coupled to the memory interconnect fabric 14 as will be more fully described hereinafter.[0027]
With reference now to FIG. 2, a representative application program decomposition for a multiprocessor computer architecture 100 incorporating a plurality of multi-adaptive processors in accordance with the present invention is shown. The computer architecture 100 is operative in response to user instructions and data which, in a coarse grained portion of the decomposition, are selectively directed to one of (for purposes of example only) four parallel regions 102_1 through 102_4 inclusive. The instructions and data output from each of the parallel regions 102_1 through 102_4 are respectively input to parallel regions segregated into data areas 104_1 through 104_4 and instruction areas 106_1 through 106_4. Data maintained in the data areas 104_1 through 104_4 and instructions maintained in the instruction areas 106_1 through 106_4 are then supplied to, for example, corresponding pairs of processors 108_1, 108_2 (P1 and P2); 108_3, 108_4 (P3 and P4); 108_5, 108_6 (P5 and P6); and 108_7, 108_8 (P7 and P8) as shown. At this point, the medium grained decomposition of the instructions and data has been accomplished.[0028]
A fine grained decomposition, or parallelism, is effectuated by a further algorithmic decomposition wherein the output of each of the processors 108_1 through 108_8 is broken up, for example, into a number of fundamental algorithms 110_1A, 110_1B, 110_2A, 110_2B through 110_8B as shown. Each of the algorithms is then supplied to a corresponding one of the MAP elements 112_1A, 112_1B, 112_2A, 112_2B through 112_8B which may be located in the memory space of the computer architecture 100 for execution therein as will be more fully described hereinafter.[0029]
With reference additionally now to FIG. 3, an exemplary implementation of a memory bank 120 in a MAP system computer architecture 100 of the present invention is shown for a representative one of the MAP elements 112 illustrated in the preceding figure. Each memory bank 120 includes a bank control logic block 122 bi-directionally coupled to the computer system trunk lines, for example, a 72 line bus 124. The bank control logic block 122 is coupled to a bi-directional data bus 126 (for example 256 lines) and supplies addresses on an address bus 128 (for example 17 lines) for accessing data at specified locations within a memory array 130.[0030]
The data bus 126 and address bus 128 are also coupled to a MAP element 112. The MAP element 112 comprises a control block 132 coupled to the address bus 128. The control block 132 is also bi-directionally coupled to a user field programmable gate array (“FPGA”) 134 by means of a number of signal lines 136. The user FPGA 134 is coupled directly to the data bus 126. In a particular embodiment, the FPGA 134 may be provided as a Lucent Technologies OR3T80 device.[0031]
The computer architecture 100 comprises a multiprocessor system employing uniform memory access across common shared memory with one or more MAP elements 112 which may be located in the memory subsystem, or memory space. As previously described, each MAP element 112 contains at least one relatively large FPGA 134 that is used as a reconfigurable functional unit. In addition, a control block 132 and a preprogrammed or dynamically programmable configuration ROM (as will be more fully described hereinafter) contain the information needed by the reconfigurable MAP element 112 to enable it to perform a specific algorithm. It is also possible for the user to directly download a new configuration into the FPGA 134 under program control, although in some instances this may consume a number of memory accesses and might result in an overall decrease in system performance if the algorithm is short-lived.[0032]
FPGAs have particular advantages in the application shown for several reasons. First, commercially available FPGAs now contain sufficient internal logic cells to perform meaningful computational functions. Secondly, they can operate at speeds comparable to microprocessors, which eliminates the need for speed matching buffers. Still further, the internal programmable routing resources of FPGAs are now extensive enough that meaningful algorithms can be programmed without the need to reassign the locations of the input/output (“I/O”) pins.[0033]
By, for example, placing the MAP element 112 in the memory subsystem or memory space, it can be readily accessed through the use of memory read and write commands, which allows the use of a variety of standard operating systems. In contrast, other conventional implementations may propose placement of any reconfigurable logic in or near the processor; however, these conventional implementations are generally much less effective in a multiprocessor environment because, unlike the system and method of the present invention, only one processor has rapid access to it. Consequently, reconfigurable logic must be replicated at every processor in a multiprocessor system, which increases the overall system cost. In addition, the MAP element 112 can access the memory array 130 itself, referred to as Direct Memory Access (“DMA”), allowing it to execute tasks independently and asynchronously of the processor. In comparison, were it placed near the processor, it would have to compete with the processors for system routing resources in order to access memory, which deleteriously impacts processor performance. Because the MAP element 112 has DMA capability (allowing it to write to memory), and because it receives its operands via writes to memory, it is possible to allow a MAP element 112 to feed results to another MAP element 112. This is a very powerful feature that allows for very extensive pipelining and parallelizing of large tasks, which permits them to complete faster.[0034]
Many of the algorithms that may be implemented will receive an operand and require many clock cycles to produce a result. One such example may be a multiplication that takes 64 clock cycles. This same multiplication may also need to be performed on thousands of operands. In this situation, the incoming operands would be presented sequentially so that while the first operand requires 64 clock cycles to produce results at the output, the second operand, arriving one clock cycle later at the input, will show results one clock cycle later at the output. Thus, after an initial delay of 64 clock cycles, new output data will appear on every consecutive clock cycle until the results of the last operand appears. This is called “pipelining”.[0035]
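The pipelining arithmetic described above may be illustrated with a brief sketch in C. The model and function names are illustrative only and do not represent the actual MAP hardware:

```c
#define DEPTH 64  /* clock cycles from operand input to result output */

/* Cycles to process n operands when a new operand can enter the
 * pipeline on every clock: one fill latency, then one result per
 * clock cycle thereafter. */
static long pipelined_cycles(long n)
{
    return DEPTH + (n - 1);
}

/* Cycles if each operand instead had to wait for the previous
 * result to emerge before entering. */
static long unpipelined_cycles(long n)
{
    return DEPTH * n;
}
```

For 1000 operands and a 64 stage pipeline, the pipelined case completes in 64 + 999 = 1063 clock cycles, versus 64,000 cycles for the sequential case.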
In a multiprocessor system, it is quite common for the operating system to stop a processor in the middle of a task, reassign it to a higher priority task, and then return it, or another, to complete the initial task. When this is combined with a pipelined algorithm, a problem arises (if the processor stops issuing operands in the middle of a list and stops accepting results) with respect to operands already issued but not yet through the pipeline. To handle this issue, a solution involving the combination of software and hardware is disclosed herein.[0036]
To make use of any type of conventional reconfigurable hardware, the programmer could embed the necessary commands in his application program code. The drawback to this approach is that a program would then have to be tailored to be specific to the MAP hardware. The system of the present invention eliminates this problem. Multiprocessor computers often use software called parallelizers, the purpose of which is to analyze the user's application code and determine how best to split it up among the processors. The present invention extends a conventional parallelizer, enabling it to recognize portions of the user code that represent algorithms that exist in MAP elements 112 for that system and to then treat the MAP element 112 as another computing element. The parallelizer then automatically generates the necessary code to utilize the MAP element 112. This allows the user to write the algorithm directly in his code, making it more portable and reducing the knowledge of the system hardware that he must have to utilize the MAP element 112.[0037]
With reference additionally now to FIG. 4, a block diagram of the MAP control block 132 is shown in greater detail. The control block 132 is coupled to receive a number of command bits (for example, 17) from the address bus 128 at a command decoder 150. The command decoder 150 then supplies a number of register control bits to a group of status registers 152 on an eight bit bus 154. The command decoder 150 also supplies a single bit last operand flag on line 156 to a pipeline counter 158. The pipeline counter 158 supplies an eight bit output to an equality comparator 160 on bus 162. The equality comparator 160 also receives an eight bit signal from the FPGA 134 on bus 136 indicative of the pipeline depth. When the equality comparator 160 determines that the pipeline is empty, it provides a single bit pipeline empty flag on line 164 for input to the status registers 152. The status registers 152 are also coupled to receive an eight bit status signal from the FPGA 134 on bus 136 and produce a sixty-four bit status word output on bus 166 in response to the signals on bus 136, bus 154 and line 164.[0038]
The command decoder 150 also supplies a five bit control signal on line 168 to a configuration multiplexer (“MUX”) 170 as shown. The configuration MUX 170 receives the single bit output of a 256 bit parallel-to-serial converter 172 on line 176. The inputs of the 256 bit parallel-to-serial converter 172 are coupled to a 256 bit user configuration pattern bus 174. The configuration MUX 170 also receives sixteen single bit inputs from the configuration ROMs (illustrated as ROM 182) on bus 178 and provides a single bit configuration file signal on line 180 to the user FPGA 134 as selected by the control signals from the command decoder 150 on the bus 168.[0039]
In operation, when a processor 108 is halted by the operating system, the operating system will issue a last operand command to the MAP element 112 through the use of command bits embedded in the address field on bus 128. This command is recognized by the command decoder 150 of the control block 132 and it initiates a hardware pipeline counter 158. When the algorithm was initially loaded into the FPGA 134, several output bits connected to the control block 132 were configured to display a binary representation of the number of clock cycles required to get through its pipeline (i.e., pipeline “depth”) on bus 136 input to the equality comparator 160. After receiving the last operand command, the pipeline counter 158 in the control block 132 counts clock cycles until its count equals the pipeline depth for that particular algorithm. At that point, the equality comparator 160 in the control block 132 de-asserts a busy bit on line 164 in an internal group of status registers 152. After issuing the last operand signal, the processor 108 will repeatedly read the status registers 152 and accept any output data on bus 166. When the busy flag is de-asserted, the task can be stopped and the MAP element 112 utilized for a different task. It should be noted that it is also possible to leave the MAP element 112 configured, transfer the program to a different processor 108 and restart the task where it left off.[0040]
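The last operand and busy bit sequence just described may be modeled in a few lines of C. The structure and function names below are purely illustrative assumptions and not the actual register interface:

```c
#include <stdbool.h>

/* Hypothetical software model of the pipeline-drain handshake. */
typedef struct {
    unsigned depth;    /* pipeline depth reported by the user FPGA */
    unsigned count;    /* pipeline counter in the control block    */
    bool draining;     /* a last operand command has been received */
    bool busy;         /* busy bit visible in the status registers */
} map_ctrl;

/* Command decoder action on the "last operand" command. */
static void map_last_operand(map_ctrl *m)
{
    m->count = 0;
    m->draining = true;
    m->busy = true;
}

/* One system clock: the counter runs until it equals the pipeline
 * depth, at which point the equality comparator clears the busy bit. */
static void map_clock(map_ctrl *m)
{
    if (m->draining && ++m->count == m->depth) {
        m->busy = false;
        m->draining = false;
    }
}
```

The processor's role is then simply to poll the busy field after issuing the last operand command, reading out results until the comparator clears it.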
In order to evaluate the effectiveness of the use of the MAP element 112 in a given application, some form of feedback to the user is required. Therefore, the MAP element 112 may be equipped with internal registers in the control block 132 that allow it to monitor efficiency related factors such as the number of input operands versus output data, the number of idle cycles over time and the number of system monitor interrupts received over time. One of the advantages of the MAP element 112 is that, because of its reconfigurable nature, the actual function and type of function that are monitored can also change as the algorithm changes. This provides the user with an almost infinite number of possible monitored factors without having to monitor all factors all of the time.[0041]
With reference additionally now to FIG. 5, a functional block diagram of a portion of an alternative embodiment of a computer system 20 in accordance with the present invention is shown. In the computer system 20 illustrated, individual MAP elements 112A, 112B, etc. are each closely associated with individual processor boards 22A, 22B respectively. As depicted, each of the MAP elements 112 comprises independent chain ports 24 for coupling the MAP elements 112 directly to each other.[0042]
Individual ones of the MAP elements 112 are coupled between the write trunk 26 and read trunk 28 of each processor board 22 in addition to their coupling to each other by means of the chain ports 24. A switch couples the write trunk 26 and read trunk 28 of any given processor board to any other memory subsystem bank 16A, 16B, etc. As generally illustrated, each of the memory subsystem banks 16 includes a control block 122 and one or more memory arrays 130.[0043]
With reference additionally now to FIG. 6, a functional block diagram of an individual MAP element 112 is shown wherein each MAP element 112 comprises an on board memory 40 and a control block 46 providing common memory DMA capabilities. Briefly, the write trunk 26 and read trunk 28 are coupled to the control block 46 from the common memory switch, which provides addresses to the memory 40 and receives addresses from the user array 42 on address lines 48. Data supplied on the write trunk 26 is provided by the control block 46 to the memory 40 on data lines 44, and data read out of the memory 40 is provided on these same lines both to the user array 42 as well as the control block 46 for subsequent presentation on the read trunk 28. As indicated, the chain port 24 is coupled to the user array 42 for communication of read and write data directly with other MAP elements 112.[0044]
With reference additionally now to FIG. 7, an additional functional block diagram of an individual MAP element 112 is shown, particularly illustrating the memory 40 of the preceding figure functioning as the input buffer 40 and output FIFO 74 portions thereof. In this figure, an alternative view of the MAP element 112 of FIG. 6 is shown in which memory input data on line 50 (or the write trunk 26) is supplied to an input buffer (memory 40) as well as to a reconfigurable user array 42 coupled to the chain port 24. The output of the reconfigurable array 42 is supplied to an output FIFO 74 to provide memory output data on line 94 (or the read trunk 28) as well as to the chain port 24. The input buffer 40, reconfigurable array 42 and output FIFO 74 operate under the control of the control block 46.[0045]
With respect to the foregoing figures, each MAP element 112 may consist of a printed circuit board containing input operand storage (i.e., the memory/input buffer 40), user array 42, intelligent address generator control block 46, output result storage FIFO 74 and I/O ports to allow connections to other MAP elements 112 through the chain port 24 as well as the host system memory array.[0046]
Input Operand Storage[0047]
The input storage consists of memory chips that are initially loaded by memory writes from one of the microprocessors 12 in the host system or by MAP DMA. The buffer 40 may be, in a particular embodiment, 72 bits wide and 2M entries deep. This allows for storage of 64 bit operands and 8 error correction code (“ECC”) bits for data correction if needed. Operands or reference data can be read from this buffer 40 by the user array 42. Data is not corrupted after use, allowing for operand reuse by the MAP elements 112. By reading operands only after the buffer 40 is loaded, operands do not need to arrive at the MAP elements 112 in time order. MAP elements 112 only require that store order be maintained, thus allowing for out-of-order arrival of operands prior to storage in the input buffer 40. This means cache line transfers, which typically cannot be performed in a timed order but have four times the bandwidth of un-cached transfers, can be used to load the input buffers 40.[0048]
Intelligent Address Generator[0049]
The input buffer 40 contents are accessed by providing address and read enable signals to it from the control block 46. These addresses may be generated in one of two ways. First, the address bits can be provided by the programmable user array 42 to the address generator control block 46, where they are combined with other control signals and issued to the input buffer 40. This allows for very random access into the buffer 40 such as would be needed to access reference data. The other address mode requires the user to issue a start command which contains a start address, stop address, and stride. The address generator control block 46 will then start accessing the input buffer 40 at the start address and continue accessing it by adding the stride value to the last address sent until the stop address is reached. This is a potentially very useful technique when performing vector processing where like elements are extracted out of an array. Since the stride can be any number less than the delta between the start and stop addresses, it is very easy for the MAP element 112 to perform a data gather function, which is highly valuable in the high performance computing market.[0050]
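The start/stop/stride mode lends itself to a compact software sketch in C. The function below is a hypothetical model of the address sequence the generator produces, not its actual hardware implementation:

```c
#include <stddef.h>

/* Emit every address from start to stop inclusive, stepping by
 * stride, into out[] (capacity max). Returns the number of
 * addresses generated. Names are illustrative only. */
static size_t stride_addresses(size_t start, size_t stop, size_t stride,
                               size_t *out, size_t max)
{
    size_t n = 0;
    for (size_t a = start; a <= stop && n < max; a += stride)
        out[n++] = a;
    return n;
}
```

For example, gathering every fourth word of an array (start 0, stop 16, stride 4) yields the address sequence 0, 4, 8, 12, 16, which is precisely the data gather pattern described above.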
User Array[0051]
The array 42 performs the actual computational functions of the MAP element 112. It may comprise one or more high performance field programmable gate arrays (“FPGAs”) interconnected to the other elements of the MAP element 112. A particular implementation of the present invention, disclosed in more detail hereinafter, may use four such devices yielding in excess of 500,000 usable gates. These components are configured by user commands that load the contents of selected configuration ROMs into the FPGAs. After configuration, the user array 42 can perform whatever function it was programmed to do. In order to maximize its performance for vector processing, the array 42 should be able to access two streams of operands simultaneously. This is accomplished by connecting one 72 bit wide input port to the input operand storage and a second 72 bit wide port to the chain input connector port 24. This connector allows the MAP element 112 to use data provided to it by a previous MAP element 112. The chain port 24 allows functions to be implemented that would far exceed the capability of a single MAP element 112 assembly. In addition, since in the particular implementation shown only operands are transferred over the chain port 24, the bandwidth may exceed the main memory bandwidth, resulting in performance superior to that of the fixed instruction microprocessor-based processors 12.[0052]
The FPGAs may also contain on board phase locked loops (“PLLs”) that allow the user to specify at what multiple or sub-multiple of the system clock frequency the circuit will run. This is important because certain complex functions may require clocks that are slower than the system clock frequency. It may also be that the user desires to synthesize a function resulting in lower performance but faster time to market. By using PLLs, both of these constraints can be accommodated. Another benefit in the potential utilization of a PLL is that future generation FPGAs that can operate faster than current system clock speeds can be retrofitted into slower systems and use the PLL frequency multiplication feature to allow the MAP element 112 to run faster than the rest of the system. This in turn results in a higher performance MAP element 112.[0053]
Output Result Storage[0054]
When the user array 42 produces a result, it may be sent over a 72 bit wide path to an output result storage element (for example, output FIFO 74) which can then pass the data to either a 72 bit wide read port or a 72 bit wide chain port 24 to the next MAP element 112. This storage device can be made from a number of different memory types. The use of a FIFO 74 storage device will temporarily hold results that cannot be immediately read by a host microprocessor or passed over the output chain port 24 to the next stage. This feature allows for MAP elements 112 in a chain to run at different frequencies; in this case the output FIFO 74 functions like a speed matching buffer. In non-chained operation, the microprocessor that is reading the results may be delayed; in this case the FIFO 74 prevents the MAP element 112 from “stalling” while waiting for results to be read. In a particular embodiment of the present invention, a FIFO 74 that is 72 bits wide and 512K entries deep may be utilized. As disclosed in the aforementioned patent applications, the output storage may also be a true memory device such as those found in common memory. In this case, write addresses must be provided by the user array 42 or address generator and read addresses provided by the entity reading the results from the memory. While this may be somewhat more electrically complicated, it has the advantage that results may be accessed in any order.[0055]
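The speed matching behavior of the output FIFO 74 can be modeled with a small ring buffer in C. The capacity and names below are illustrative assumptions (the embodiment described is 512K entries deep):

```c
#include <stdbool.h>

#define FIFO_CAP 8  /* kept small here for clarity */

/* Minimal ring-buffer model of an output FIFO acting as a speed
 * matching buffer between a producer (the user array) and a consumer
 * (the host read port or chain port) running at different rates. */
typedef struct {
    unsigned long long slot[FIFO_CAP];  /* 72 bit results modeled as 64 bit */
    unsigned head, tail, count;
} out_fifo;

static bool fifo_push(out_fifo *f, unsigned long long v)
{
    if (f->count == FIFO_CAP)
        return false;                 /* full: producer must stall */
    f->slot[f->tail] = v;
    f->tail = (f->tail + 1) % FIFO_CAP;
    f->count++;
    return true;
}

static bool fifo_pop(out_fifo *f, unsigned long long *v)
{
    if (f->count == 0)
        return false;                 /* empty: consumer simply waits */
    *v = f->slot[f->head];
    f->head = (f->head + 1) % FIFO_CAP;
    f->count--;
    return true;
}
```

A producer clocked faster than its consumer simply accumulates entries until the FIFO fills and must then stall, exactly the behavior described for chained MAP elements running at different frequencies.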
DMA Enhancements[0056]
In the aforementioned patent applications, the ability of MAP elements 112 to perform DMA to common memory was disclosed. While this capability was discussed primarily with respect to the movement of operands and results, it is also possible to apply the same concept to commands. The microprocessor that would normally write a series of commands directly to the MAP element 112 may write the same commands into common memory as well. After writing a series of commands, the microprocessor could then send an interrupt to the MAP element 112. The MAP element 112 would then read the commands from common memory and execute them as contemplated. Since this command list could contain DMA instructions as specified in the previously mentioned patent applications, the MAP element 112 could retrieve all of its input operands and store all of its results without any further processor 12 intervention. At the completion of MAP element 112 processing, the MAP element 112 could then interrupt the microprocessor to signal that results are available in common memory. Operation in this manner reduces the interaction required between the MAP element 112 and the microprocessor.[0057]
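A command list of the kind described might be laid out and consumed as in the following C sketch. The opcodes, structure and layout are assumptions for illustration only and do not reflect the actual MAP command format:

```c
/* Hypothetical in-memory command list for a MAP element. */
enum map_op { MAP_DMA_IN, MAP_COMPUTE, MAP_DMA_OUT, MAP_DONE };

typedef struct {
    enum map_op op;
    unsigned long addr;  /* common memory address for DMA operations */
    unsigned long len;   /* transfer length in words                 */
} map_cmd;

/* After the processor's single interrupt, the MAP element walks the
 * list on its own, stopping at MAP_DONE and then interrupting the
 * processor. Returns the number of commands executed. */
static int map_run_list(const map_cmd *list)
{
    int executed = 0;
    while (list[executed].op != MAP_DONE) {
        /* perform the DMA transfer or computation here */
        executed++;
    }
    return executed;
}
```

The microprocessor writes such a list into common memory, interrupts the MAP element once, and is next involved only when the completion interrupt arrives.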
On Board Library[0058]
[0059] As originally disclosed, electrically erasable programmable ROMs ("EEPROMs") or similar devices may be utilized to hold a library of functions for the user array 42. By placing these algorithms in ROMs on the MAP element 112 itself, the user array 42 function can be changed very rapidly. In this manner, the user program can download a new function into one of the on-board ROMs, thus updating its contents and allowing the MAP element 112 to perform new functions. In a particular implementation, this may be accomplished by reserving one of the library functions to perform the function of an EEPROM programmer. When a command to update a ROM is received, the user array 42 may be configured with this special function and data read from the MAP element 112 input storage (e.g. input buffer 40) and then loaded into the ROMs to complete the update process.
[0060] With reference additionally now to FIG. 8, a more detailed functional block diagram of an individual MAP element 112 is shown as previously illustrated in FIGS. 6 and 7. In this depiction, the MAP element 112 includes an enhanced synchronous dynamic random access memory (ESDRAM™, a trademark of Enhanced Memory Systems, Inc., Colorado Springs, Colo.) functioning as the memory, or input buffer 40. ESDRAM memory is a very high speed memory device incorporating a dynamic random access memory ("DRAM") array augmented with an on-chip static random access memory ("SRAM") row register to speed device read operations.
[0061] In this figure, like structure to that previously described is like numbered and the foregoing description thereof shall suffice herefor. Memory input data on lines 50 is supplied through transmission gates 52 to the data lines 44 for provision to the memory 40 and user array 42. In like manner, address input is received on lines 54 for provision through transmission gates 56 to the address lines 48 coupled to the memory 40 and control block 46. The control block 46 operatively controls the transmission gates 52, 56 and receives an FS11 signal on line 60 and provides a LOCKOUT signal on line 62.
[0062] The user array 42 may be coupled, as shown, to the chain port 24 and it provides a user address signal on lines 64 and a next address signal on lines 66 to the control block 46. The control block 46 provides an indication of whether or not an input is valid to the user array 42 on lines 68. Output of the user array 42 is provided on lines 70 together with a write clock ("WRTCLK") signal on line 72 to the FIFO 74 or other output storage device. The FIFO 74 receives a read clock ("RDCLK") signal on line 78 from the control block 46. Output from the FIFO 74 or control block 46 may be selectively supplied on lines 80 through transmission gates 76 to the chain port 24 and/or through transmission gates 82 to provide memory data on lines 94. The control block 46 also receives a chain read signal on lines 90 and returns a chain valid output on lines 92. The control block 46 operatively controls the transmission gates 76 and 82 in addition to transmission gates 86 which serve to provide error correction code ("ECC") output signals on lines 88.
[0063] As mentioned previously, the MAP elements 112 may comprise one or more circuit boards, utilizing, for example, one Lucent Orca™ OR3T80 FPGA to function as the control block 46 and four OR3T125 FPGAs forming the user array 42. The user can implement algorithms in these FPGAs that alter data written to them and provide this altered data when the MAP element 112 is then read. In addition, each MAP element 112 may also comprise eight sets of four configuration ROMs on board. These ROMs are preprogrammed by the user and configure the four user FPGAs of the user array 42 under program control. These ROMs may be reprogrammed either externally or while on a MAP element 112 located in a system.
[0064] The MAP elements 112 are accessed through the use of normal memory READ and WRITE commands. In the representative embodiment illustrated and described, the user can provide operands to the MAP elements 112 either by directly writing 128-bit packets (i.e. in the form of two 64-bit words) into the user array 42 chips or by writing 256-bit packets (in the form of four 64-bit words) into a dedicated 16-MB ESDRAM memory input data buffer 40. A read from a MAP element 112 always returns a 2-word packet and part of this returned packet contains status information as will be more fully described hereinafter. In addition, the incoming addresses are decoded into commands as will also be defined later.
[0065] MAP elements 112 also have the ability to be chained via hardware. This allows the output data from one MAP element 112 to move directly to the user array 42 chips of the next MAP element 112 without processor 12 intervention. Chain length is limited by the quantity of MAP elements 112 in the overall system. The total number of MAP elements 112 may also be broken down into several smaller independent chains. In a chained mode of operation, a MAP element 112 can still read from its input buffer 40 to access reference information such as reciprocal approximation tables.
Logic Conventions[0066]
[0067] In the representative implementation of the computer system of the present invention disclosed herein, the processors 12 may comprise Pentium™ (a trademark of Intel Corporation, Santa Clara, Calif.) processors and these devices utilize an active "low" logic convention which applies to all address bits and data words transmitted to or from the MAP elements 112, including the returned status word.
[0068] With reference additionally now to FIG. 9, a user array interconnect 200 diagram is shown, for example, utilizing four user FPGAs interconnected through horizontal, vertical and diagonal buses to allow for expansion in designs that might exceed the capacity of a single FPGA. In this regard, the interconnect diagram 200 corresponds to the user array 42 of the preceding figures with input data bus 210 corresponding to the data lines 44, the chain input bus 212 corresponding to the chain port 24 and the output bus 214 corresponding to the lines 70 of FIG. 8. The four FPGAs 202, 204, 206 and 208 comprising the user array 42 are each coupled to the input data bus 210, chain input bus 212 and output bus 214 as well as to each other by means of top bus 216, right bus 218, bottom bus 220, left bus 222 and diagonal buses 224 and 226.
User Array Interconnect[0069]
[0070] As previously described, the four user FPGAs (202, 204, 206 and 208) are interconnected through a series of horizontal, vertical and diagonal buses which allow the easiest expansion of the existing symmetric internal chip routing for designs that exceed the capacity of a single FPGA for the user array 42. In the exemplary illustration shown, bus sizes were chosen to utilize as many pins as possible while maintaining a bus width of at least 64 bits.
Address Structure[0071]
[0072] Because the MAP elements 112 may be located in the memory array of the system and decode a portion of the address field, the address generated by the processor 12 must be correctly assembled. The following Table 1 shows the address bit allocation as seen by the processor 12 and the MAP element 112 board. The processor board bridge elements will reallocate the bit positions that are actually transmitted to the MAP element 112 based on system size.
Field Select Bits[0073]
[0074] The Field Select bits are the two most significant address bits leaving the bridge elements and are used to select which of the four possible mezzanine cards in the memory stack is being accessed. The Field Select bits for all mezzanine cards are determined by the state of P6 bus bits A[21:20]. If bit A21 is set, a MAP element 112 operation is underway and the Field Select bits are set to 11. The MAP element 112 is always located just above the semaphore registers with the first MAP element 112 in segment 0, bank 0, the second in segment 1, bank 0 and so on until one MAP element 112 is in each segment's bank 0. They are then placed in segment 0, bank 1 and the same pattern is followed until all are placed. This keeps them in a continuous address block.
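The placement rule just described (fill bank 0 of each segment in turn, then bank 1, and so on) can be expressed as a short behavioral sketch. This is purely illustrative; the function and argument names are mine and do not appear in the disclosure:

```python
def map_location(index, num_segments):
    """Return the (segment, bank) where the Nth MAP element lands under
    the placement rule described above: MAP elements fill bank 0 of each
    segment in turn, then bank 1, keeping them in one contiguous block."""
    return index % num_segments, index // num_segments
```

For example, in a four-segment system the fifth MAP element (index 4) lands in segment 0, bank 1.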
Chip Select Bits[0075]
[0076] The next 3 most significant bits are Chip Select bits. These normally select which one of the eight rows of memory chips on a mezzanine board is activated. For MAP elements 112, Chip Selects 0 and 1 are used. Chip Select 0 is used to write to the ESDRAM memory input buffer 40 and Chip Select 1 is used to access the control block 46 and user chips of the user array 42.
Memory Address Bits[0077]
[0078] The next 19 most significant bits on the P6 bus are Memory Address bits that normally select the actual location within the memory chip of the cache line in use. Five of these bits are decoded by the MAP element 112 into various commands that are discussed in greater detail hereinafter.
Bank Select Bits[0079]
[0080] The next 4 most significant bits are the Bank Select bits. These bits are used to select the specific bank within a segment in which the desired memory or MAP element 112 is located.
Trunk Select Bits[0081]
The next 4 most significant bits are the Trunk Select bits. The number of these bits ranges from 0 to 4 depending upon the number of segments in the system. These bits are used to select the segment that contains the desired memory or MAP element 112. Unused bits are set to 0.
[0082]
| TABLE 1 |
| P6 to Packet Bit Translation |
| Address | P6 Bus | Packet Bit | Bridge Output |
| 0 | 0 | | |
| 1 | 0 | | |
| 2 | 0 | | |
| 3 | Cmd 0 | 13 | Cmd 0 |
| 4 | Cmd 1 | 14 | Cmd 1 |
| 5 | 0 | 15 | Map Sel 4 |
| 6 | 0 | 19 | Map Sel 0 |
| 7 | 0 | 20 | Map Sel 1 |
| 8 | 0 | 21 | Map Sel 2 |
| 9 | 0 | 22 | Map Sel 3 |
| 10 | Cmd 2 | 23 | Cmd 2 |
| 11 | Cmd 3 | 24 | Cmd 3 |
| 12 | Sel 0 | 25 | Sel 0 |
| 13 | Sel 1 | 26 | Sel 1 |
| 14 | Sel 2 | 27 | Sel 2 |
| 15 | 0 | 28 | 0 |
| 16 | Map Sel 0 | 29 | 0 |
| 17 | Map Sel 1 | 30 | 0 |
| 18 | Map Sel 2 | 31 | 0 |
| 19 | Map Sel 3 | 32 | 0 |
| 20 | Map Sel 4 | 33 | 0 |
| 21 | 1 | 34 | 0 |
| 22 | 0 | 35 | 0 |
| 23 | 0 | 36 | 0 |
| 24 | 0 | 37 | 0 |
| 25 | 0 | 38 | 0 |
| 26 | 0 | 39 | 0 |
| 27 | 0 | 40 | 0 |
| 28 | 0 | 41 | 0 |
| 29 | 0 | 42 | Chip Sel 0 |
| 30 | 0 | 43 | Chip Sel 1 |
| 31 | 0 | 44 | Chip Sel 2 |
| 32 | 0 | 45 | 1 |
| 33 | 0 | 46 | 1 |
| 34 | 0 | | |
| 35 | 0 | | |
Word Select Bits[0083]
[0084] The next 2 most significant bits are the Word Select bits. These bits determine the order in which each word of a 4-word cache line is being used. With CS[1:0] set to 01, these bits are part of the decoded command.
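Taken together, the field descriptions above give an MSB-to-LSB layout of Field Select (2 bits), Chip Select (3), Memory Address (19), Bank Select (4), Trunk Select (4) and Word Select (2). The following sketch decodes such an address into its fields. It is a behavioral illustration only, using field names of my choosing, and it ignores the P6-to-packet bit reallocation performed by the bridge elements:

```python
def decode_address(addr):
    """Split a 34-bit address into the fields described in the text,
    from LSB upward (illustrative sketch, not the bridge hardware)."""
    word_sel  = addr & 0x3;      addr >>= 2   # word within 4-word cache line
    trunk_sel = addr & 0xF;      addr >>= 4   # segment select
    bank_sel  = addr & 0xF;      addr >>= 4   # bank within segment
    mem_addr  = addr & 0x7FFFF;  addr >>= 19  # location within memory chip
    chip_sel  = addr & 0x7;      addr >>= 3   # row of memory chips
    field_sel = addr & 0x3                    # mezzanine card; 0b11 = MAP
    return {"field_sel": field_sel, "chip_sel": chip_sel,
            "mem_addr": mem_addr, "bank_sel": bank_sel,
            "trunk_sel": trunk_sel, "word_sel": word_sel}
```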
MAP Command Decode[0085]
[0086] CMD[3:0] are decoded into the following commands by the MAP control block 46 chip when CS[1:0] are 01, as shown in the following Table 2. This decode is also dependent upon the transaction being either a READ or a WRITE. In addition, SEL[2:0] are used in conjunction with the RECON and LDROM commands described hereinafter to select which one of the eight ROMs is to be used.
| TABLE 2 |
| Address Bit Command Decode |
| 3 | 2 | 1 | 0 | Read/Write | Command | Basic Function |
| 1 | 1 | 1 | 1 | Write | Null | MAP operation continues as before this was received. |
| 1 | 1 | 1 | 0 | Write | RMB | Resets MAP Board user chips and reconfigures control chips. |
| 1 | 1 | 0 | 1 | Write | RUC | Resets User and control chip latches. |
| 1 | 1 | 0 | 0 | Write | RECON | RECONfigures user circuits. Used with SEL[2:0]. |
| 1 | 0 | 1 | 1 | Write | LASTOP | LAST OPerand is being written. |
| 1 | 0 | 1 | 0 | Write | WRTOP | WRiTe OPerand to user circuit. |
| 1 | 0 | 0 | 1 | Write | DONE | Processor is DONE with MAP; clears busy flag. |
| 1 | 0 | 0 | 0 | Write | LDROM | Loads a new algorithm from input buffer into the ROM selected by SEL[2:0]. |
| 0 | 1 | 1 | 1 | Write | START | Sends start address, stop address, auto/user, and stride to input control chip, starting MAP operation. |
| 0 | 1 | 1 | 0 | Write | Future | Reserved. |
| 0 | 1 | 0 | 1 | Write | Future | Reserved. |
| 0 | 1 | 0 | 0 | Write | Future | Reserved. |
| 0 | 0 | 1 | 1 | Write | Future | Reserved. |
| 0 | 0 | 1 | 0 | Write | Future | Reserved. |
| 0 | 0 | 0 | 1 | Write | Future | Reserved. |
| 0 | 0 | 0 | 0 | Write | Future | Reserved. |
| 1 | 1 | 1 | 1 | Read | Null | MAP operation continues as before this was received. |
| 1 | 1 | 1 | 0 | Read | RDSTAT | Reads status word. |
| 1 | 1 | 0 | 1 | Read | RDDAT | Reads 2 data words. |
| 1 | 1 | 0 | 0 | Read | RDDAST | Reads status word and 1 data word. |
| 1 | 0 | 1 | 1 | Read | Future | Reserved. |
| 1 | 0 | 1 | 0 | Read | Future | Reserved. |
| 1 | 0 | 0 | 1 | Read | Future | Reserved. |
| 1 | 0 | 0 | 0 | Read | Future | Reserved. |
| 0 | 1 | 1 | 1 | Read | Future | Reserved. |
| 0 | 1 | 1 | 0 | Read | Future | Reserved. |
| 0 | 1 | 0 | 1 | Read | Future | Reserved. |
| 0 | 1 | 0 | 0 | Read | Future | Reserved. |
| 0 | 0 | 1 | 1 | Read | Future | Reserved. |
| 0 | 0 | 1 | 0 | Read | Future | Reserved. |
| 0 | 0 | 0 | 1 | Read | Future | Reserved. |
| 0 | 0 | 0 | 0 | Read | Future | Reserved. |
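The assigned (non-reserved) entries of Table 2 amount to two small lookup tables keyed by CMD[3:0] and the transaction type. A direct transcription as a sketch (the helper function name is mine):

```python
# Table 2 transcribed: CMD[3:0] -> command name, per transaction type.
WRITE_CMDS = {
    0b1111: "Null", 0b1110: "RMB", 0b1101: "RUC", 0b1100: "RECON",
    0b1011: "LASTOP", 0b1010: "WRTOP", 0b1001: "DONE", 0b1000: "LDROM",
    0b0111: "START",
}
READ_CMDS = {
    0b1111: "Null", 0b1110: "RDSTAT", 0b1101: "RDDAT", 0b1100: "RDDAST",
}

def decode_command(cmd, is_read):
    """Decode CMD[3:0] as the control block would when CS[1:0] is 01;
    unassigned encodings are reserved for future use."""
    table = READ_CMDS if is_read else WRITE_CMDS
    return table.get(cmd, "Reserved")
```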
Null Command Description[0087]
[0088] When a MAP element 112 is not actively receiving a command, all inputs are set to 1 and all internal circuits are held static. Therefore, an incoming command of "1 1 1 1" cannot be decoded as anything and is not used.
RMB[0089]
[0090] This command, issued during a write transaction, causes the control block 46 chips to generate a global set/reset ("GSR") signal to the user chips of the user array 42 and reprograms the control chips. All internal latches are reset but the configuration of the user chips is not changed. Any data that was waiting to be read will be lost.
RUC[0091]
[0092] This command, issued during a write transaction, causes the control chips to generate a GSR signal to all four user FPGAs of the user array 42. All internal latches are reset, but the configuration is not changed. Any operands will be lost, but data waiting to be read in the control block 46 chips will not.
RECON[0093]
[0094] This command, issued during a write transaction, causes the control chips to reconfigure the four user FPGAs of the user array 42 with the ROM selected by SEL[2:0]. Any operands still in process will be lost, but data waiting to be read in the control chip will not.
LASTOP[0095]
[0096] This command is issued during a write transaction to inform the MAP element 112 control block 46 chip that no more operands will be sent and the pipeline should be flushed. The control chips start the pipeline counter and continue to provide read data until the pipeline depth is reached.
WRTOP[0097]
[0098] This command is issued during a write transaction to inform the MAP element 112 control block 46 chip that it is receiving a valid operand to be forwarded directly to the user circuits.
DONE[0099]
[0100] This command is issued during a write transaction to inform the MAP element 112 control block 46 chip that the processor 12 is done using the MAP element 112. The control chips reset the busy bit in the status word and wait for a new user. The configuration currently loaded into the user circuits is not altered.
LDROM[0101]
[0102] This command is issued during a write transaction to inform the MAP element 112 control block 46 chip that the ROM specified by SEL[2:0] is to be reloaded with the contents of the input buffer 40 starting at address 0. This will cause a nonvolatile change to be made to one of the eight on-board algorithms.
START[0103]
[0104] This command is issued during a write transaction and sends the start address, stop address, auto/user selection and stride to the input controller. The input controller then takes control of the input buffer 40 and starts transferring operands to the user chips of the user array 42 using these parameters until the stop address is hit. The data word 0 that accompanies this instruction contains the start address in bits 0 through 20, the stop address in bits 23 through 43, the stride in bits 46 through 51 and the user/auto bit in bit position 54. In all cases the least significant bit ("LSB") of each bit group contains the LSB of the value.
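The layout of data word 0 described above can be sketched as a bit-packing helper. This assumes only the stated bit positions; the function name and the argument validation are mine:

```python
def pack_start_word(start, stop, stride, auto):
    """Assemble data word 0 for the START command per the text:
    start address in bits 0-20, stop address in bits 23-43,
    stride in bits 46-51, auto/user flag in bit 54.  The LSB of
    each bit group holds the LSB of the value.  Illustrative only."""
    assert start < (1 << 21) and stop < (1 << 21) and stride < (1 << 6)
    return (start
            | (stop << 23)
            | (stride << 46)
            | (int(auto) << 54))
```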
RDSTAT[0105]
[0106] This command is issued during a read transaction to cause a status word to be returned to the processor 12. This transaction will not increment the pipeline counter if it follows a LASTOP command. Details of the status word are shown in the following Table 4.
RDDAT[0107]
[0108] This command is issued during a read transaction to cause 2 data words to be returned to the processor 12. This transaction will increment the pipeline counter if it follows a LASTOP command. Details of the status word are also shown in Table 4.
RDDAST[0109]
[0110] This command is issued during a read transaction to cause a status word and a data word to be returned to the processor 12.
SEL[2:0] Decode[0111]
[0112] The SEL[2:0] bits are used for two purposes. When used in conjunction with the RECON or LDROM commands, they determine which of the eight on-board ROM sets is to be used for that instruction. This is defined in the following Table 3.
| TABLE 3 |
| 2 | 1 | 0 | ROM Select Function |
| 0 | 0 | 0 | ROM set 0 |
| 0 | 0 | 1 | ROM set 1 |
| 0 | 1 | 0 | ROM set 2 |
| 0 | 1 | 1 | ROM set 3 |
| 1 | 0 | 0 | ROM set 4 |
| 1 | 0 | 1 | ROM set 5 |
| 1 | 1 | 0 | ROM set 6 |
| 1 | 1 | 1 | ROM set 7 |
Status Word Structure[0113]
[0114] Whenever a read transaction occurs, a status word is returned to the processor 12 issuing the read. The structure of this 64-bit word is as follows:
| TABLE 4 |
| Status Word Structure |
| Bits | Function |
| 0-7 | Contains the pipeline depth of the current user algorithm. |
| 8 | A 1 indicates that the pipeline is empty following a LASTOP command. |
| 9-31 | These lines are tied low and are not used at this time. |
| 32-35 | Contains the current configuration selection loaded into the user FPGAs. |
| 36-58 | These lines are tied low and are not used at this time. |
| 59 | A 1 indicates that data was written and has overflowed the input buffers. |
| 60 | A 1 indicates that a reconfiguration of the user FPGAs is complete. |
| 61 | A 1 indicates that the data word is valid. |
| 62 | A 1 indicates that at least 128 words are available. |
| 63 | A 1 indicates that the MAP is busy and cannot be used by another processor. |
| Note: Bit 63 is always the most significant bit ("MSB"). |
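Table 4 maps directly onto a small unpacking routine. The field names below are mine (the patent gives only bit positions and descriptions), and this is a sketch of the format rather than software that ships with the system:

```python
def parse_status(word):
    """Unpack the 64-bit status word per Table 4 (bit 63 = MSB = busy)."""
    return {
        "pipeline_depth": word & 0xFF,         # bits 0-7
        "pipeline_empty": (word >> 8) & 1,     # bit 8, after LASTOP
        "config_sel":     (word >> 32) & 0xF,  # bits 32-35
        "input_overflow": (word >> 59) & 1,    # input buffer overflowed
        "reconfig_done":  (word >> 60) & 1,    # user FPGA reconfig complete
        "data_valid":     (word >> 61) & 1,    # accompanying data word valid
        "words_128":      (word >> 62) & 1,    # at least 128 results ready
        "busy":           (word >> 63) & 1,    # MAP in use by a processor
    }
```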
Single MAP Element Operation[0115]
[0116] Normal operation of the MAP elements 112 is as follows. After power up, the MAP element 112 control block 46 chip automatically configures and resets itself. No configuration exists in the four user chips of the user array 42. A processor 12 that wants to use a MAP element 112 first sends an RDSTAT command to the MAP element 112.
[0117] If the MAP element 112 is not currently in use, the status word is returned with bit 63 at "0" (not busy) and the busy bit is then set to 1 on the MAP element 112. Any further RDSTAT or RDDAST commands show the MAP element 112 to be busy.
[0118] After evaluating the busy bit and observing it to be "low", the processor 12 issues a RECON command along with the appropriate configuration ROM selection bits set. This causes the MAP element 112 to configure the user chips of the user array 42. While this is happening, status bit 60 is "low". The processor 12 issues an RDSTAT and evaluates bit 60 until it returns "high". At this point, configuration is complete and the user chips of the user array 42 have reset themselves, clearing all internal registers. The user then issues an RUC command to ensure that any previous data left in the user array 42 or control block 46 circuits has been cleared.
[0119] The user now has two methods available to present data to the MAP element 112. Data can either be written directly, two quad words at a time, into the user chips of the user array 42, or the input buffer 40 can be loaded.
[0120] Writing quad words is useful for providing a small number of reference values to the user array 42, but does have lower bandwidth than using the input buffer 40 due to the 128-bit per transfer limit on un-cached writes. To use this mode, a WRTOP command is sent that delivers two 64-bit words to the user circuits. Based on previous knowledge of the algorithm, the program should know how many operands can be issued before an RDDAST could be performed. Evaluating status bits 0 through 7 after configuration also indicates the pipeline depth for this calculation.
[0121] If a large data set is to be operated on, or if a large quantity of the operands are to be reused, the input data buffer 40 should be used. In a particular embodiment of the present invention, this buffer may comprise 2M quad words of ESDRAM memory storage. This memory is located on the MAP element 112 and is accessed by performing cache line writes. This allows the loading of four 64-bit words per transaction. Once the data set is loaded, a START command is issued.
[0122] The control block 46 chip will assert the lockout bit, signaling the memory controller not to access the input buffer 40. It will also evaluate data word "0" of this transaction in accordance with the previously defined fields.
[0123] If the Auto/User bit is a "1", the addresses will automatically be generated by the control block 46 chip. The first address will be the start address that was transferred. The address is then incremented by the stride value until the stop address is hit. This address is the last address accessed.
[0124] At this point the lockout bit is released and the memory controller can access the input buffer 40. It should be noted that the input control chip must interleave accesses to the input buffer 40 with refresh signals provided by the memory controller in order to maintain the ESDRAM memory while the lockout bit is set.
[0125] If the Auto/User bit was at "0", the operation is the same except the addresses are provided to the input control block 46 chip by the user algorithm.
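The automatic address generation described above (start address first, then increments of the stride until the stop address, which is the last address accessed) can be sketched as a generator. This is a behavioral illustration assuming the stride evenly divides the start-to-stop range; the name is mine:

```python
def auto_addresses(start, stop, stride):
    """Yield the address sequence the control block generates in auto
    mode: begin at start, step by stride, with stop as the last address
    accessed (sketch of the described behavior, not the hardware)."""
    addr = start
    while addr <= stop:
        yield addr
        addr += stride
```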
[0126] Once the START command is issued, the processor 12 can start to read the output data. The user must first issue a RDDAST, which will return a status word and a data word. If bit 61 of the status word is a 1, the data word is valid. The user will continue this process until status word bit 62 is a 1. At this point the user knows that the output FIFO 74 on the MAP element 112 contains at least 128 valid data words and the RDDAT command can now be used for the next 64 reads. This command will return two valid data words without any status. After the 64 RDDAT commands the user must again issue a RDDAST command and check bits 61 and 62. If neither is set, the FIFO 74 has no further data. If only bit 61 is set, the program should continue to issue RDDAST commands to empty the FIFO 74. If bits 61 and 62 are set, the program can resume with another set of 64 RDDAT commands and repeat the process until all results are received.
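The read protocol just described reduces to a polling loop over RDDAST, with a fast RDDAT path when at least 128 words are queued. The sketch below assumes a hypothetical interface object exposing `rddast()` (returning a status word and a data word) and `rddat()` (returning two data words); none of these names come from the disclosure:

```python
def read_results(map_elem):
    """Drain a MAP element's result FIFO per the described protocol:
    status bit 61 = data word valid, bit 62 = at least 128 words ready,
    in which case 64 RDDAT reads (two words each) may follow."""
    results = []
    while True:
        status, word = map_elem.rddast()
        valid = (status >> 61) & 1
        batch = (status >> 62) & 1
        if valid:
            results.append(word)
        if batch:                       # >=128 words queued: fast path
            for _ in range(64):
                results.extend(map_elem.rddat())  # two words per read
        elif not valid:                 # neither bit set: FIFO is empty
            return results
```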
[0127] After all data is read and the user has completed his need for a MAP element 112, a DONE command is issued. This will clear the busy flag and allow other processors 12 to use it. It should be noted that data in the input buffer 40 is not corrupted when used and can therefore be reused until a DONE is issued.
Chained MAP Operation[0128]
[0129] MAP elements 112 have the ability to run in a vectored or VMAP™ mode (VMAP is a trademark of SRC Computers, Inc., assignee of the present invention). This mode allows the output data from one MAP element 112 to be sent directly to the user chips in the user array 42 of the next MAP element 112 with no processor 12 intervention. In a representative embodiment, this link, or chain port 24, operates at up to 800 MB/sec and connects all MAP elements 112 in a system in a chain. A chain must consist of a sequential group of at least two MAP elements 112 and up to as many as the system contains. Multiple non-overlapping chains may coexist.
[0130] To use this mode, the user simply designs the algorithm to accept input data from the chainin[00:63] pins. Output data paths are unchanged and always go to both the memory data bus and the chainout[00:63] pins.
[0131] VMAP mode operation is identical to single MAP element 112 operation except that the data buffer 40 on the first MAP element 112 in the chain is loaded with data and all results are read from the last MAP element 112. Chained MAP elements 112 simultaneously read from their input buffers 40 while accepting operands from the chainin port. This allows the buffers 40 to be used to supply reference data during chained operation. To do this, the input buffers 40 must first be loaded and then START commands must be sent to all MAP elements 112 in the chain. The first MAP element 112 in the chain must be the last one to receive a START command. All MAP elements 112 other than the first in the chain must receive a START command with the user address mode selected.
LDROM Operation[0132]
[0133] MAP elements 112 have the capability to allow the contents of an on-board ROM to be externally reloaded while the system is operating, thus changing the algorithm. It should be noted that the same ROM for all four user chips in the user array 42 will simultaneously be updated.
[0134] To accomplish this, the configuration files of the four ROMs of a given set are converted from a serial stream to 16-bit words. The first words of each ROM file are then combined to form a 64-bit word. The user chip 0 files of the user array 42 fill bits 0 through 15, chip 1 bits 16 through 31, chip 2 bits 32 through 47, and chip 3 bits 48 through 63. This process is repeated until all four of the individual files are consumed. This results in a file that is 64 bits wide and 51,935 entries deep.
[0135] If the contents of a particular ROM in the set are to be unaltered, its entries must be all 0. At the top of this file, a header word is added that contains all 1's in all bit positions for all ROMs in the set that are to be updated. ROMs that are to be unaltered will contain zeros in this word. This file is then loaded into the MAP element 112 input buffer 40 with the header loaded into address 0.
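The file construction described in the two paragraphs above can be sketched as follows. The function name and the `update_mask` argument are mine, and the sketch assumes the four per-chip files have already been converted to equal-length lists of 16-bit words:

```python
def build_ldrom_image(chip_files, update_mask):
    """Build the 64-bit-wide LDROM image described above: chip N's
    16-bit words fill bits 16N..16N+15 of each entry, and a header word
    (at address 0) holds all ones in the 16-bit lane of every ROM being
    updated; unaltered ROMs contribute all-zero entries.  Sketch only."""
    header = 0
    for n in range(4):
        if update_mask & (1 << n):          # bit n set: update chip n's ROM
            header |= 0xFFFF << (16 * n)
    image = [header]
    for words in zip(*chip_files):          # one 16-bit word per chip
        entry = 0
        for n, w in enumerate(words):
            entry |= (w & 0xFFFF) << (16 * n)
        image.append(entry)
    return image
```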
[0136] Upon receiving an LDROM command, the input controller will load the user chips of the user array 42 with a special algorithm that turns them into ROM programmers. These chips will then start accessing the data in the input buffer 40 and will evaluate word 0.
[0137] If this is a 0, no further action will be taken by that chip. If it is a 1, the chip will continue to extract data, serialize it, and load it into the ROM that was selected by the state of the SEL lines during the LDROM command. While this is happening, bit 60 of the status word is 0. When complete, bit 60 will return to a 1.
[0138] The user must always issue a RECON command following an LDROM command in order to load a valid user algorithm back into the user array 42 and overwrite the ROM programmer algorithm.
[0139] With reference additionally now to FIG. 10, a functional block diagram of another alternative embodiment 230 of the present invention is shown wherein individual MAP elements 112 are closely associated with individual memory arrays and each of the MAP elements 112 comprises independent chain ports 24 for coupling the MAP elements 112 directly to each other. The system illustrated comprises a processor assembly 232 comprising one or more processors 12 bi-directionally coupled through a processor switch 234 (which may comprise an FPGA) to write trunks 26 and read trunks 28.
[0140] In the example illustrated, a number of MAP elements 112 are associated with a particular memory array 246 under control of a memory controller 238 (which may also comprise an FPGA). As illustrated, each of the memory controllers 238A and 238B is coupled to the processor assembly 232 through the processor switch 234 by means of the write and read trunks 26, 28. Each of the memory controllers may be coupled to a plurality of MAP elements 112 and associated memory arrays 246 and to additional MAP elements 112 by means of a chain port 24 as previously described. In the embodiment illustrated, memory controller 238A is in operative association with a pair of MAP elements, the first comprising buffer 240A1, user array 242A1 and FIFO 244A1 associated with memory array 246A1 and the second comprising buffer 240A2, user array 242A2 and FIFO 244A2 associated with memory array 246A2. In like manner, memory controller 238B is in operative association with a pair of MAP elements, the first comprising buffer 240B1, user array 242B1 and FIFO 244B1 associated with memory array 246B1 and the second comprising buffer 240B2, user array 242B2 and FIFO 244B2 associated with memory array 246B2.
With reference additionally now to FIGS. 11A and 11B separate timing diagrams are illustrated respectively depicting input and output timing in relationship to the system clock (“Sysclk”) signal.[0141]
Interface Timing[0142]
[0143] The MAP element 112 user array 42 can accept data from the input memory bus, input buffer 40 or the chain port 24. In the embodiment of the present invention previously described and illustrated, all sixty-four bits from any of these sources are sent to all four of the user chips (202, 204, 206 and 208; FIG. 9) along with a VALID IN signal on lines 68 (FIG. 8) sent from the control block 46 that enables the input clock in the user chips of the user array 42.
[0144] This signal stays high for ten, twenty or forty nanoseconds depending on whether one, two or four words are being transferred. This VALID IN signal on lines 68 connects to the clock enable pins of input latches in the user chips of the user array 42. These latches then feed the user circuit in the MAP element 112. The timing for the various write operations is shown with particularity in FIG. 11A.
Input Timing[0145]
[0146] After the algorithm operation has completed, output data is formed into 64-bit words in the user chips of the user array 42 on pins connected to the DOUT[00:63] nets. These nets, in turn, connect to the output FIFO 74 (FIG. 8) that ultimately provides the read data to the memory controller or the next MAP element 112 in the chain. After forming the 64-bit result, the user circuitry must ensure that a "FULL" signal is "low". When the signal is "low", the transfer is started by providing a "low" from the user array 42 to the control block 46 and the FIFO#WE input on the FIFO 74.
At the same time, valid data must appear on the data out ("DOUT") nets. This data must remain valid for 10 nanoseconds and FIFO#WE must remain "low" until the end of this 10-nanosecond period. If multiple words are to be transferred simultaneously, the FIFO#WE input must remain "low" until the end of the last such 10-nanosecond period, as shown with particularity in FIG. 11B.[0147]
Output Timing[0148]
[0149] Three result words can be transferred out of the user array 42 before a "read" should occur to maximize the "read" bandwidth. The output FIFO 74 (FIG. 8) is capable of holding 512K words in the embodiment illustrated. When three words are held in the control block 46, the word counter in the status word will indicate binary "11".
Pipeline Depth[0150]
[0151] To aid in system level operation, the user array 42 must also provide the pipeline depth of the algorithm to the control block 46. In a particular embodiment of the present invention, this will be equal to the number of 100-MHz clock cycles required to accept a data input word, process that data, and start the transfer of the results to the FIFO 74.
[0152] If an algorithm is such that initialization parameters or reference numbers are sent prior to actual operands, the pipeline depth is equal only to the number of clock cycles required to process the operands. This depth is provided as a static 8-bit number on nets DOUT[64:71] from FPGAs 202 and/or 204 (FIG. 9). Each of the eight bits is generally output from only one of the FPGAs of the user array 42, but the eight bits may be spread across both chips.
[0153] In a particular embodiment of the present invention, the ROMs that are used on the MAP elements 112 may be conveniently provided as Atmel™ AT17LV010 devices in a 20-pin PLCC package. Each ROM contains the configuration information for one of the four user FPGAs of the user array 42. There may be eight or more ROM sockets allocated to each of the user chips of the user array 42 to allow selection of up to eight or more unique algorithms. In an embodiment utilizing eight ROMs, the first ROM listed for each of the four user chips may be selected by choosing configuration 0h and the last ROM selected by choosing configuration 7h.
If all four user chips of the user array 42 are not needed for an algorithm, the unused chips do not require that their ROM sockets be populated. However, at least one of the user chips must always contain a correctly programmed ROM, even if it is not used in the algorithm, because signals related to the configuration timing cycle are monitored by the control block. The user FPGA that directly connects to both the DIN and DOUT signals should always be used first when locating the algorithm circuit.[0154]
With reference additionally now to FIG. 12, a simplified illustration of a representative operating environment 300 for the system and method of the present invention is shown, including a typical web site server 306 as would be replaced by, for example, an SRC-6 reconfigurable server 308 (comprising, for example, the multiprocessor computer 10 or computer system 20 of the preceding figures) or other computer system incorporating one or more industry standard processors together with one or more reconfigurable processors, with all of the processors controlled by a single system image of the operating system. In this simplified illustration, a number of personal computers 302 or other computing devices are coupled to either the typical web site server 306 (in a prior art implementation) or the reconfigurable server 308 (in accordance with the system and method of the present invention) through the Internet 304.[0155]
With reference additionally now to FIG. 13, a flowchart is shown illustrating a conventional data processing sequence 310 in a conventional application of a typical web site server 306 as depicted in the preceding figure. The sequence 310 begins with the input of a number “N” of demographic data elements for processing by the typical web site server 306. These N data elements are then serially processed at step 314 until the last of the data elements is determined and processed at decision step 316. Therefore, N iterations by the microprocessor of the typical web site server 306 are required to complete processing of the input data elements.[0156]
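The serial flow of FIG. 13 can be sketched as a conventional loop: one microprocessor handles the N data elements one at a time, so N iterations must complete before any content can be selected. The functions `process_element` and `select_content` are hypothetical stand-ins for the demographic analysis and page-selection steps.

```python
# Illustrative sketch of the conventional serial sequence of FIG. 13.
# process_element and select_content are invented placeholders for the
# per-element demographic analysis and the page-content selection.
def process_element(element):
    return element * 2  # placeholder for demographic analysis

def select_content(results):
    return sum(results)  # placeholder for page-content selection

def serve_page_serial(elements):
    results = []
    for element in elements:           # N sequential iterations
        results.append(process_element(element))
    return select_content(results)     # content chosen only after all N
```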
Following this protracted data processing period, the typical web site server 306 can then undertake to select new web page content specifically adapted to the particular web site visitor at step 318, and the updated site content is displayed at step 320.[0157]
With reference additionally now to FIG. 14, a corresponding flowchart is shown illustrating the processing of demographic or other data utilizing the reconfigurable server 308 of FIG. 12 in a significantly faster data processing sequence 330. The processing sequence 330 again begins with the input of N demographic data elements, or other secure socket, database or other data, for processing by the site server at input step 332. Importantly, the reconfigurable server 308 is now able to process the individual data elements in parallel through the use of a single reconfigurable processor (such as a MAP element), due to its ability to instantiate more than one processing unit tailored to the job, as opposed to reusing the one or two processing units located within a microprocessor. In the exemplary embodiment shown, all of the reconfigurable processors may share all of the system's resources and be controlled by a single system image of the operating system although, in alternative embodiments, cluster management software may be utilized to effectively make a cluster of microprocessors appear to a user to be but a single copy of the operating system. In any event, the completion of steps 334-1 through 334-N requires only one iteration to prepare the site to select the new content at step 336 and then display it at step 338.[0158]
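The parallel flow of FIG. 14 can be contrasted with the serial case by dispatching all N elements at once. In this sketch, thread-based concurrency stands in for the reconfigurable processor's ability to instantiate one processing unit per element; the function names are hypothetical placeholders, as the patent does not define a software API.

```python
# Illustrative sketch of the parallel sequence of FIG. 14. Threads
# stand in for the spatial parallelism of a reconfigurable processor
# that instantiates one processing unit per data element; the function
# names are invented placeholders.
from concurrent.futures import ThreadPoolExecutor

def process_element(element):
    return element * 2  # placeholder for per-element demographic analysis

def select_content(results):
    return sum(results)  # placeholder for page-content selection

def serve_page_parallel(elements):
    with ThreadPoolExecutor(max_workers=len(elements)) as pool:
        # All N elements are processed in a single parallel pass.
        results = list(pool.map(process_element, elements))
    return select_content(results)
```

Whereas the serial sequence needs N iterations before content selection, here steps 334-1 through 334-N complete together, so selection at step 336 can begin after effectively one iteration.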
While there have been described above the principles of the present invention in conjunction with one or more specific embodiments of the present invention and MAP elements, it is to be clearly understood that the foregoing description is made only by way of example and not as a limitation to the scope of the invention. Particularly, it is recognized that the teachings of the foregoing disclosure will suggest other modifications to those persons skilled in the relevant art for use in processing differing types of data at a web site. Such modifications may involve other features which are already known per se and which may be used instead of or in addition to features already described herein. Although claims have been formulated in this application to particular combinations of features, it should be understood that the scope of the disclosure herein also includes any novel feature or any novel combination of features disclosed either explicitly or implicitly or any generalization or modification thereof which would be apparent to persons skilled in the relevant art, whether or not such relates to the same invention as presently claimed in any claim and whether or not it mitigates any or all of the same technical problems as confronted by the present invention. The applicants hereby reserve the right to formulate new claims to such features and/or combinations of such features during the prosecution of the present application or of any further application derived therefrom.[0159]