History of supercomputing

From Wikipedia, the free encyclopedia

A Cray-1 supercomputer preserved at the Deutsches Museum

The history of supercomputing goes back to the 1960s, when a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[1] The CDC 6600, released in 1964, is generally considered the first supercomputer.[2][3] However, some earlier computers were considered supercomputers for their day, such as the IBM NORC (1954) and AN/FSQ-7 (1955) vacuum tube computers, [4][5] and, in the early 1960s, the UNIVAC LARC (1960),[6] the IBM 7030 Stretch (1962),[7] and the Manchester Atlas (1962), machines of comparable power.

While the supercomputers of the 1980s used only a few processors, in the 1990s, machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records.

By the end of the 20th century, massively parallel supercomputers with thousands of "off-the-shelf" processors similar to those found in personal computers were constructed and broke through theteraFLOPS computational barrier.

Progress in the first decade of the 21st century was dramatic and supercomputers with over 60,000 processors appeared, reaching petaFLOPS performance levels.

Beginnings: 1950s and 1960s

See also: Vector processor § History

The term "Super Computing" was first used in the New York World in 1929[8] to refer to large custom-built tabulators that IBM had made for Columbia University.[9]

There were several lines of second-generation computers that were substantially faster than most contemporary mainframes.

The second generation saw the introduction of features intended to support multiprogramming and multiprocessor configurations, including master/slave (supervisor/problem) mode, storage protection keys, limit registers, protection associated with address translation, and atomic instructions.

In 1957, a group of engineers left Sperry Corporation to form Control Data Corporation (CDC) in Minneapolis, Minnesota. Seymour Cray left Sperry a year later to join his colleagues at CDC.[1] In 1960, Cray completed the CDC 1604, one of the first generation of commercially successful transistorized computers and, at the time of its release, the fastest computer in the world.[10] However, the sole fully transistorized Harwell CADET had been operational since 1955, and IBM delivered its commercially successful transistorized IBM 7090 in 1959.

The CDC 6600 with the system console

Around 1960, Cray decided to design a computer that would be the fastest in the world by a large margin. After four years of experimentation with Jim Thornton, Dean Roush, and about 30 other engineers, Cray completed the CDC 6600 in 1964. Cray switched from germanium to silicon transistors, built by Fairchild Semiconductor using the planar process, which did not have the drawbacks of the mesa silicon transistors. He ran them very fast, and the speed-of-light restriction forced a very compact design with severe overheating problems, which were solved by the refrigeration designed by Dean Roush.[11] The 6600 outperformed the industry's prior recordholder, the IBM 7030 Stretch, by a factor of three.[12][13] With performance of up to three megaFLOPS,[14][15] it was dubbed a supercomputer and defined the supercomputing market when two hundred computers were sold at $9 million each.[10][16]

The 6600 gained speed by "farming out" work to peripheral computing elements, freeing the CPU (central processing unit) to process actual data. The Minnesota FORTRAN compiler for the machine was developed by Liddiard and Mundstock at the University of Minnesota, and with it the 6600 could sustain 500 kiloflops on standard mathematical operations.[17] In 1968, Cray completed the CDC 7600, again the fastest computer in the world.[10] At 36 MHz, the 7600 had 3.6 times the clock speed of the 6600, but ran significantly faster due to other technical innovations. Only about 50 of the 7600s were sold, not quite a failure. Cray left CDC in 1972 to form his own company.[10] Two years after his departure, CDC delivered the STAR-100, which at 100 megaflops was three times the speed of the 7600. Along with the Texas Instruments ASC, the STAR-100 was one of the first machines to use vector processing, the idea having been inspired around 1964 by the APL programming language.[18][19]

The University of Manchester Atlas in January 1963.

In 1956, a team at Manchester University in the United Kingdom began development of MUSE, a name derived from "microsecond engine", with the aim of eventually building a computer that could operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.[20] Mu (the name of the Greek letter μ) is a prefix in the SI and other systems of units denoting a factor of 10⁻⁶ (one millionth).

At the end of 1958, Ferranti agreed to collaborate with Manchester University on the project, and the computer was shortly afterwards renamed Atlas, with the joint venture under the control of Tom Kilburn. The first Atlas was officially commissioned on 7 December 1962, nearly two years before the CDC 6600 supercomputer was introduced, as one of the world's first supercomputers. It was considered at the time of its commissioning to be the most powerful computer in the world, equivalent to four IBM 7094s. It was said that whenever Atlas went offline half of the United Kingdom's computer capacity was lost.[21] The Atlas pioneered virtual memory and paging as a way to extend its working memory by combining its 16,384 words of primary core memory with an additional 96K words of secondary drum memory.[22] Atlas also pioneered the Atlas Supervisor, "considered by many to be the first recognizable modern operating system".[21]

The Cray era: mid-1970s and 1980s

A Fluorinert-cooled Cray-2 supercomputer

Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became the most successful supercomputer in history.[19][23] The Cray-1, which used integrated circuits with two gates per chip, was a vector processor. It introduced a number of innovations, such as chaining, in which scalar and vector registers generate interim results that can be used immediately, without additional memory references which would otherwise reduce computational speed.[11][24] The Cray X-MP (designed by Steve Chen) was released in 1982 as a 105 MHz shared-memory parallel vector processor with better chaining support and multiple memory pipelines. All three floating-point pipelines on the X-MP could operate simultaneously.[24] By 1983 Cray and Control Data were supercomputer leaders; despite its lead in the overall computer market, IBM was unable to produce a profitable competitor.[25]

The Cray-2, released in 1985, was a four-processor liquid-cooled computer totally immersed in a tank of Fluorinert, which bubbled as it operated.[11] It reached 1.9 gigaflops, making it the world's fastest supercomputer and the first to break the gigaflop barrier.[26] The Cray-2 was a totally new design. It did not use chaining and had a high memory latency, but used heavy pipelining and was ideal for problems that required large amounts of memory.[24] The software costs of developing a supercomputer should not be underestimated: in the 1980s the cost of software development at Cray came to equal what was spent on hardware.[27] That trend was partly responsible for the move away from the in-house Cray Operating System to UNICOS, based on Unix.[27]

The Cray Y-MP, also designed by Steve Chen, was released in 1988 as an improvement of the X-MP and could have eight vector processors at 167 MHz with a peak performance of 333 megaflops per processor.[24] In the late 1980s, Cray's experiment with gallium arsenide semiconductors in the Cray-3 did not succeed. Seymour Cray began to work on a massively parallel computer in the early 1990s, but died in a car accident in 1996 before it could be completed. Cray Research did, however, produce such computers.[23][11]

Massive processing: the 1990s


The Cray-2, which set the frontiers of supercomputing in the mid-to-late 1980s, had only 8 processors. In the 1990s, supercomputers with thousands of processors began to appear. Another development at the end of the 1980s was the arrival of Japanese supercomputers, some of which were modeled after the Cray-1.

During the first half of the Strategic Computing Initiative, some massively parallel architectures were proven to work, such as the WARP systolic array, message-passing MIMD machines like the Cosmic Cube hypercube, and SIMD machines like the Connection Machine. In 1987, a TeraOPS Computing Technology Program was proposed, with a goal of achieving 1 teraOPS (a trillion operations per second) by 1992, which was considered achievable by scaling up any of the previously proven architectures.[28]

Rear of the Paragon cabinet showing the bus bars and mesh routers

The SX-3/44R was announced by NEC Corporation in 1989, and a year later it earned the fastest-in-the-world title with a four-processor model.[29] However, Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994. It had a peak speed of 1.7 gigaflops per processor.[30][31] The Hitachi SR2201 obtained a peak performance of 600 gigaflops in 1996 by using 2,048 processors connected via a fast three-dimensional crossbar network.[32][33][34]

In the same timeframe, the Intel Paragon could have 1,000 to 4,000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface.[35] By 1995, Cray was also shipping massively parallel systems, e.g. the Cray T3E with over 2,000 processors, using a three-dimensional torus interconnect.[36][37]

The Paragon architecture soon led to the Intel ASCI Red supercomputer in the United States, which held the top supercomputing spot through the end of the 20th century as part of the Advanced Simulation and Computing Initiative. This was also a mesh-based MIMD massively parallel system with over 9,000 compute nodes and well over 12 terabytes of disk storage, but it used off-the-shelf Pentium Pro processors that could be found in everyday personal computers. ASCI Red was the first system ever to break through the 1 teraflop barrier on the MP-Linpack benchmark, in 1996, eventually reaching 2 teraflops.[38]

Petascale computing in the 21st century

Main article: Petascale computing
A Blue Gene/P supercomputer at Argonne National Laboratory

Significant progress was made in the first decade of the 21st century. The efficiency of supercomputers continued to increase, but not dramatically so. The Cray C90 used 500 kilowatts of power in 1991, while by 2003 the ASCI Q used 3,000 kW while being 2,000 times faster, increasing the performance per watt 300-fold.[39]
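The 300-fold figure follows directly from the power and speed ratios quoted above; a quick sketch of the arithmetic (the variable names are illustrative, the numbers come from the text):

```python
# Performance-per-watt gain from the Cray C90 (1991) to ASCI Q (2003),
# using the figures quoted above.
c90_power_kw = 500      # Cray C90 power draw
asci_q_power_kw = 3000  # ASCI Q power draw
speedup = 2000          # ASCI Q was ~2,000 times faster

power_ratio = asci_q_power_kw / c90_power_kw  # 6x more power drawn
perf_per_watt_gain = speedup / power_ratio    # 2000 / 6 ≈ 333

print(f"performance per watt improved ~{perf_per_watt_gain:.0f}-fold")
# prints: performance per watt improved ~333-fold
```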

In 2004, the Earth Simulator supercomputer built by NEC at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) reached 35.9 teraflops, using 640 nodes, each with eight proprietary vector processors.[40]

The IBM Blue Gene supercomputer architecture found widespread use in the early part of the 21st century, and 27 of the computers on the TOP500 list used that architecture. The Blue Gene approach is somewhat different in that it trades processor speed for low power consumption, so that a larger number of processors can be used at air-cooled temperatures. It can use over 60,000 processors, with 2,048 processors per rack, and connects them via a three-dimensional torus interconnect.[41][42]

Progress in China has been rapid: China placed 51st on the TOP500 list in June 2003, followed by 14th in November 2003, 10th in June 2004, then 5th during 2005, before gaining the top spot in 2010 with the 2.5 petaflop Tianhe-I supercomputer.[43][44]

In July 2011, the 8.1 petaflop Japanese K computer became the fastest in the world, using over 60,000 SPARC64 VIIIfx processors housed in over 600 cabinets. The fact that the K computer was over 60 times faster than the Earth Simulator, and that the Earth Simulator ranked as the 68th system in the world seven years after holding the top spot, demonstrates both the rapid increase in top performance and the widespread growth of supercomputing technology worldwide.[45][46][47] By 2014, the Earth Simulator had dropped off the list, and by 2018 the K computer had dropped out of the top 10. By 2018, Summit had become the world's most powerful supercomputer, at 200 petaFLOPS. In 2020, the Japanese once again took the top spot with the Fugaku supercomputer, capable of 442 PFLOPS. Since 2022 (as of December 2023), the world's fastest supercomputer has been the Hewlett Packard Enterprise Frontier, also known as OLCF-5 and hosted at the Oak Ridge Leadership Computing Facility (OLCF) in Tennessee, United States. Frontier is based on the Cray EX, is the world's first exascale supercomputer, and uses only AMD CPUs and GPUs; it achieved an Rmax of 1.102 exaFLOPS, which is 1.102 quintillion floating-point operations per second.[48][49][50][51][52]

Historical TOP500 table

For a more comprehensive list, see List of fastest computers.

This is a list of the computers which appeared at the top of the TOP500 list since 1993.[53] The "Peak speed" is given as the "Rmax" rating.

Rapid growth of supercomputer performance, based on data from the top500.org site. The logarithmic y-axis shows performance in GFLOPS; the plotted lines are the combined performance of the 500 largest supercomputers, the fastest supercomputer, and the supercomputer in 500th place.
Year | Supercomputer | Peak speed (Rmax) | Power efficiency (GFLOPS per watt) | Location
1993 | Fujitsu Numerical Wind Tunnel | 124.50 GFLOPS | — | National Aerospace Laboratory, Tokyo, Japan
1993 | Intel Paragon XP/S 140 | 143.40 GFLOPS | — | DoE-Sandia National Laboratories, New Mexico, USA
1994 | Fujitsu Numerical Wind Tunnel | 170.40 GFLOPS | — | National Aerospace Laboratory, Tokyo, Japan
1996 | Hitachi SR2201/1024 | 220.40 GFLOPS | — | University of Tokyo, Japan
1996 | Hitachi CP-PACS/2048 | 368.20 GFLOPS | — | University of Tsukuba, Tsukuba, Japan
1997 | Intel ASCI Red/9152 | 1.338 TFLOPS | — | DoE-Sandia National Laboratories, New Mexico, USA
1999 | Intel ASCI Red/9632 | 2.3796 TFLOPS | — | DoE-Sandia National Laboratories, New Mexico, USA
2000 | IBM ASCI White | 7.226 TFLOPS | — | DoE-Lawrence Livermore National Laboratory, California, USA
2002 | NEC Earth Simulator | 35.860 TFLOPS | — | Earth Simulator Center, Yokohama, Japan
2004 | IBM Blue Gene/L | 70.720 TFLOPS | — | DoE/IBM Rochester, Minnesota, USA
2005 | IBM Blue Gene/L | 136.800 TFLOPS | — | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2005 | IBM Blue Gene/L | 280.600 TFLOPS | — | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2007 | IBM Blue Gene/L | 478.200 TFLOPS | — | DoE/U.S. National Nuclear Security Administration, Lawrence Livermore National Laboratory, California, USA
2008 | IBM Roadrunner | 1.026 PFLOPS | — | DoE-Los Alamos National Laboratory, New Mexico, USA
2008 | IBM Roadrunner | 1.105 PFLOPS | 0.445 | DoE-Los Alamos National Laboratory, New Mexico, USA
2009 | Cray Jaguar | 1.759 PFLOPS | — | DoE-Oak Ridge National Laboratory, Tennessee, USA
2010 | Tianhe-IA | 2.566 PFLOPS | 0.635 | National Supercomputing Center, Tianjin, China
2011 | Fujitsu K computer | 10.510 PFLOPS | 0.825 | Riken, Kobe, Japan
2012 | IBM Sequoia | 16.320 PFLOPS | — | Lawrence Livermore National Laboratory, California, USA
2012 | Cray Titan | 17.590 PFLOPS | — | Oak Ridge National Laboratory, Tennessee, USA
2013 | NUDT Tianhe-2 | 33.860 PFLOPS | 2.215 | Guangzhou, China
2016 | Sunway TaihuLight | 93.010 PFLOPS | 6.051 | Wuxi, China
2018 | IBM Summit | 122.300 PFLOPS | 14.668 | DoE-Oak Ridge National Laboratory, Tennessee, USA
2020 | Fugaku | 415.530 PFLOPS | 15.418 | Riken, Kobe, Japan
2022 | Frontier | 1.353 EFLOPS | — | Oak Ridge Leadership Computing Facility, Tennessee, USA
2024 | El Capitan | 1.742 EFLOPS | — | Lawrence Livermore National Laboratory, California, USA
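The table's endpoints imply a remarkably steady exponential growth in peak performance. As an illustration (the growth-rate calculation is mine, not a figure from the table), the implied compound annual growth rate can be sketched as:

```python
# Compound annual growth rate (CAGR) of the #1 system's Rmax, using the
# table's endpoints: 124.50 GFLOPS (1993) to 1.742 EFLOPS (2024).
start_gflops = 124.50
end_gflops = 1.742e9   # 1.742 EFLOPS in GFLOPS (1 EFLOPS = 1e9 GFLOPS)
years = 2024 - 1993    # 31 years

cagr = (end_gflops / start_gflops) ** (1 / years) - 1
print(f"~{cagr:.0%} per year")  # roughly 70% per year
```

That is, the fastest system's benchmarked performance has grown by roughly 70% per year over three decades, a seven-order-of-magnitude increase.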

Export controls


The CoCom and its later replacement, the Wassenaar Arrangement, legally regulated the export of high-performance computers (HPCs) to certain countries, requiring licensing, approval, and record-keeping, or banned such exports entirely. Such controls have become harder to justify, leading to the loosening of these regulations. Some have argued these regulations were never justified.[54][55][56][57][58][59]


References

  1. Chen, Sao-Jie; Lin, Guang-Huei; Hsiung, Pao-Ann; Hu, Yu-Hen (2009). Hardware software co-design of a multimedia SOC platform. Springer Science+Business Media. pp. 70–72. ISBN 9781402096235. Retrieved 20 February 2018.
  2. Impagliazzo, John; Lee, John A. N. (2004). History of computing in education. Springer. p. 172. ISBN 1-4020-8135-9. Retrieved 20 February 2018.
  3. Sisson, Richard; Zacher, Christian K. (2006). The American Midwest: an interpretive encyclopedia. Indiana University Press. p. 1489. ISBN 0-253-34886-2.
  4. Frank da Cruz (25 October 2013) [2004]. "IBM NORC". Retrieved 20 February 2018.
  5. SAGE - Computer of the Cold War: The AN/FSQ-7: Whirlwind II. i-programmer.info.
  6. Lundstrom, David E. (1984). A Few Good Men from UNIVAC. MIT Press. ISBN 9780735100107. Retrieved 20 February 2018.
  7. David Lundstrom, A Few Good Men from UNIVAC, page 90, lists LARC and STRETCH as supercomputers.
  8. Eames, Charles; Eames, Ray (1973). A Computer Perspective. Cambridge, Mass: Harvard University Press. p. 95. Page 95 identifies the article as "Super Computing Machines Shown", New York World, March 1, 1920. However, the article shown on page 95 references the Statistical Bureau in Hamilton Hall, and an article at the Columbia Computing History web site states that such did not exist until 1929. See The Columbia Difference Tabulator - 1931.
  9. "Super Computing Machines Shown" (in New York World). 1920. Retrieved 26 February 2024.
  10. Hannan, Caryn (2008). Wisconsin Biographical Dictionary. State History Publications. pp. 83–84. ISBN 978-1-878592-63-7. Retrieved 20 February 2018.
  11. Murray, Charles J. (1997). The Supermen. Wiley & Sons. ISBN 9780471048855.
  12. "Designed by Seymour Cray, the CDC 6600 was almost three times faster than the next fastest machine of its day, the IBM 7030 Stretch." Making a World of Difference: Engineering Ideas into Reality. National Academy of Engineering. 2014. ISBN 978-0309312653.
  13. "In 1964 Cray's CDC 6600 replaced Stretch as the fastest computer on Earth." Sofroniou, Andreas (2013). Expert Systems, Knowledge Engineering for Human Replication. Lulu.com. ISBN 978-1291595093.
  14. Anthony, Sebastian (April 10, 2012). "The History of Supercomputers". ExtremeTech. Retrieved 2015-02-02.
  15. "CDC 6600". Encyclopædia Britannica. Retrieved 2015-02-02.
  16. Ceruzzi, Paul E. (2003). A history of modern computing. MIT Press. p. 161. ISBN 978-0-262-53203-7. Retrieved 20 February 2018.
  17. Frisch, Michael J. (December 1972). "Remarks on algorithm 352 [S22], algorithm 385 [S13], algorithm 392 [D3]". Communications of the ACM. 15 (12): 1074. doi:10.1145/361598.361914. S2CID 6571977.
  18. Fosdick, Lloyd Dudley (1996). An Introduction to high-performance scientific computing. MIT Press. p. 418. ISBN 0-262-06181-3.
  19. Hill, Mark Donald; Jouppi, Norman Paul; Sohi, Gurindar (1999). Readings in computer architecture. Gulf Professional. pp. 41–48. ISBN 978-1-55860-539-8.
  20. "The Atlas". University of Manchester. Archived from the original on 28 July 2012. Retrieved 21 September 2010.
  21. Lavington, Simon Hugh (1998). A History of Manchester Computers (2nd ed.). Swindon: The British Computer Society. pp. 41–52. ISBN 978-1-902505-01-5.
  22. Creasy, R. J. (September 1981). "The Origin of the VM/370 Time-Sharing System" (PDF). IBM Journal of Research & Development. 25 (5): 486.
  23. Reilly, Edwin D. (2003). Milestones in computer science and information technology. Bloomsbury Academic. p. 65. ISBN 1-57356-521-0.
  24. Tokhi, M. O.; Hossain, Mohammad Alamgir (2003). Parallel computing for real-time signal processing and control. Springer. pp. 201–202. ISBN 978-1-85233-599-1.
  25. Greenwald, John (1983-07-11). "The Colossus That Works". Time. Archived from the original on 2008-05-14. Retrieved 2019-05-18.
  26. Due to Soviet propaganda, it can sometimes be read that the Soviet supercomputer M13 was the first to reach the gigaflops barrier. In fact, construction of the M13 began in 1984, but it was not operational before 1986. Rogachev Yury Vasilievich, Russian Virtual Computer Museum.
  27. MacKenzie, Donald (1998). Knowing machines: essays on technical change. MIT Press. pp. 149–151. ISBN 0-262-63188-1.
  28. Roland, Alex; Shiman, Philip (2002). Strategic computing: DARPA and the quest for machine intelligence, 1983–1993. History of computing. Cambridge, Mass.: MIT Press. p. 296. ISBN 978-0-262-18226-3.
  29. Glowinski, R.; Lichnewsky, A. (January 1990). Computing methods in applied sciences and engineering. pp. 353–360. ISBN 0-89871-264-5.
  30. "TOP500 Annual Report 1994". 1 October 1996.
  31. Hirose, N.; Fukuda, M. (1997). Numerical Wind Tunnel (NWT) and CFD Research at National Aerospace Laboratory. Proceedings of HPC-Asia '97. IEEE Computer Society. doi:10.1109/HPC.1997.592130.
  32. Fujii, H.; Yasuda, Y.; Akashi, H.; Inagami, Y.; Koga, M.; Ishihara, O.; Kashiyama, M.; Wada, H.; Sumimoto, T. (April 1997). Architecture and performance of the Hitachi SR2201 massively parallel processor system. Proceedings of the 11th International Parallel Processing Symposium. pp. 233–241. doi:10.1109/IPPS.1997.580901. ISBN 0-8186-7793-7.
  33. Iwasaki, Y. (January 1998). "The CP-PACS project". Nuclear Physics B - Proceedings Supplements. 60 (1–2): 246–254. arXiv:hep-lat/9709055. Bibcode:1998NuPhS..60..246I. doi:10.1016/S0920-5632(97)00487-8.
  34. A. J. van der Steen, Overview of recent supercomputers, Publication of the NCF, Stichting Nationale Computer Faciliteiten, the Netherlands, January 1997.
  35. Reed, Daniel A. (2003). Scalable input/output: achieving system balance. MIT Press. p. 182. ISBN 978-0-262-68142-1.
  36. "Cray Sells First T3E-1350 Supercomputer to Phillips Petroleum" (Press release). Seattle: Gale Group. Business Wire. 7 August 2000.
  37. Agida, N. R.; et al. (March–May 2005). "Blue Gene/L Torus Interconnection Network" (PDF). IBM Journal of Research and Development. 45 (2–3): 265. Archived from the original (PDF) on 15 August 2011. Retrieved 9 February 2012.
  38. Greenberg, David S. (1998). Heath, Michael T. (ed.). "Enabling Department-Scale Supercomputing". Algorithms for Parallel Processing. 105: 323. ISBN 0-387-98680-4. Retrieved 20 February 2018.
  39. Feng, Wu-chun (1 October 2003). "Making a Case for Efficient Supercomputing". ACM Queue. 1 (7): 54–64. doi:10.1145/957717.957772. S2CID 11283177.
  40. Sato, Tetsuya (2004). "The Earth Simulator: Roles and Impacts". Nuclear Physics B: Proceedings Supplements. 129: 102. Bibcode:2004NuPhS.129..102S. doi:10.1016/S0920-5632(03)02511-8.
  41. Almasi, George; et al. (2005). Cunha, José Cardoso; Medeiros, Pedro D. (eds.). Early Experience with Scientific Applications on the Blue Gene/L Supercomputer. Euro-Par 2005 parallel processing: 11th International Euro-Par Conference. pp. 560–567. ISBN 9783540319252.
  42. Morgan, Timothy Prickett (22 November 2010). "IBM uncloaks 20 petaflops BlueGene/Q super". The Register.
  43. Graham, Susan L.; Snir, Marc; Patterson, Cynthia A. (2005). Getting up to speed: the future of supercomputing. National Academies Press. p. 188. ISBN 0-309-09502-6.
  44. Vance, Ashlee (28 October 2010). "China Wrests Supercomputer Title From U.S." The New York Times. Retrieved 20 February 2018.
  45. "Japanese supercomputer 'K' is world's fastest". The Telegraph. 20 June 2011. Retrieved 20 June 2011.
  46. "Japanese 'K' Computer Is Ranked Most Powerful". The New York Times. 20 June 2011. Retrieved 20 June 2011.
  47. "Supercomputer 'K computer' Takes First Place in World". Fujitsu. Retrieved 20 June 2011.
  48. Wells, Jack (March 19, 2018). "Powering the Road to National HPC Leadership". OpenPOWER Summit 2018. Archived from the original on August 4, 2020. Retrieved March 25, 2018.
  49. Bethea, Katie (February 13, 2018). "Frontier: OLCF'S Exascale Future". Oak Ridge National Laboratory - Leadership Computing Facility. Archived from the original on March 10, 2018.
  50. "DOE Under Secretary for Science Dabbar's Exascale Update". insideHPC. October 9, 2020. Archived from the original on October 28, 2020.
  51. Don Clark (May 30, 2022). "U.S. Retakes Top Spot in Supercomputer Race". The New York Times. Archived from the original on June 1, 2022. Retrieved June 1, 2022.
  52. Larabel, Michael (May 30, 2022). "AMD-Powered Frontier Supercomputer Tops Top500 At 1.1 Exaflops, Tops Green500 Too". Phoronix. Archived from the original on June 6, 2022. Retrieved June 1, 2022.
  53. "Sublist Generator". Top500. 2017. Retrieved 20 February 2018.
  54. "Complexities of Setting Export Control Thresholds: Computers". Export controls and nonproliferation policy (PDF). DIANE Publishing. May 1994. ISBN 9781428920521.
  55. Wolcott, Peter; Goodman, Seymour; Homer, Patrick (November 1998). "High Performance Computing Export Controls: Navigating Choppy Waters". Communications of the ACM. 41 (11). New York, USA: 27–30. doi:10.1145/287831.287836. S2CID 18519822.
  56. McLoughlin, Glenn J.; Fergusson, Ian F. (10 February 2003). High Performance Computers and Export Control Policy (PDF) (Report).
  57. Brugger, Seth (1 September 2000). "U.S. Revises Computer Export Control Regulations". Arms Control Association.
  58. "Export Controls for High Performance Computers". 24 June 2011.
  59. Blagdon, Jeff (30 May 2013). "US removes sanctions on computer exports to Iran".