TeraGrid

From Wikipedia, the free encyclopedia
Type of site: Scientific support
Available in: English
URL: www.teragrid.org
Commercial: No
Launched: 2004

TeraGrid was an e-Science grid computing infrastructure combining resources at eleven partner sites. The project started in 2001 and operated from 2004 through 2011.

The TeraGrid integrated high-performance computers, data resources and tools, and experimental facilities. Resources included more than a petaflop of computing capability and more than 30 petabytes of online and archival data storage, with rapid access and retrieval over high-performance computer network connections. Researchers could also access more than 100 discipline-specific databases.

TeraGrid was coordinated through the Grid Infrastructure Group (GIG) at the University of Chicago, working in partnership with the resource provider sites in the United States.

History


The US National Science Foundation (NSF) issued a solicitation for a "distributed terascale facility" through program director Richard L. Hilderbrandt.[1] The TeraGrid project was launched in August 2001 with $53 million in funding to four sites: the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, the University of Chicago's Argonne National Laboratory, and the Center for Advanced Computing Research (CACR) at the California Institute of Technology in Pasadena, California.

The design was meant to be an extensible, distributed open system from the start.[2] In October 2002, the Pittsburgh Supercomputing Center (PSC) at Carnegie Mellon University and the University of Pittsburgh joined the TeraGrid as major new partners when NSF announced $35 million in supplementary funding. Through the Extensible Terascale Facility (ETF) project, the TeraGrid network was transformed from a four-site mesh to a dual-hub backbone network with connection points in Los Angeles and at the Starlight facilities in Chicago.

In October 2003, NSF awarded $10 million to add four sites to TeraGrid as well as to establish a third network hub, in Atlanta. These new sites were Oak Ridge National Laboratory (ORNL), Purdue University, Indiana University, and the Texas Advanced Computing Center (TACC) at The University of Texas at Austin.

TeraGrid construction was also made possible through corporate partnerships with Sun Microsystems, IBM, Intel Corporation, Qwest Communications, Juniper Networks, Myricom, Hewlett-Packard Company, and Oracle Corporation.

TeraGrid construction was completed in October 2004, at which time the TeraGrid facility began full production.

Operation


In August 2005, NSF's newly created Office of Cyberinfrastructure extended support for another five years with a $150 million set of awards, including $48 million for coordination and user support to the Grid Infrastructure Group at the University of Chicago, led by Charlie Catlett.[3] Using high-performance network connections, the TeraGrid featured high-performance computers, data resources and tools, and high-end experimental facilities around the United States. The work supported by the project is sometimes called e-Science. In 2006, the University of Michigan's School of Information began a study of TeraGrid.[4]

In May 2007, TeraGrid's integrated resources included more than 250 teraflops of computing capability and more than 30 petabytes (quadrillions of bytes) of online and archival data storage, with rapid access and retrieval over high-performance networks. Researchers could access more than 100 discipline-specific databases. By late 2009, TeraGrid resources had grown to 2 petaflops of computing capability and more than 60 petabytes of storage. In mid-2009, NSF extended the operation of TeraGrid to 2011.

Transition to XSEDE


A follow-on project was approved in May 2011.[5] In July 2011, a partnership of 17 institutions announced the Extreme Science and Engineering Discovery Environment (XSEDE). NSF announced funding for the XSEDE project over five years at $121 million.[6] XSEDE was led by John Towns at the University of Illinois's National Center for Supercomputing Applications[6] until it ended on August 31, 2022, when it was succeeded by a program named ACCESS.[7]

Architecture

TeraGrid equipment at UCSD in 2007

TeraGrid resources were integrated through a service-oriented architecture, in which each resource provided a "service" defined in terms of interface and operation. Computational resources ran a set of software packages called "Coordinated TeraGrid Software and Services" (CTSS). CTSS provided a familiar user environment on all TeraGrid systems, allowing scientists to port code more easily from one system to another, and supplied integrative functions such as single sign-on, remote job submission, workflow support, and data movement tools. CTSS included the Globus Toolkit, Condor, distributed accounting and account management software, verification and validation software, and a set of compilers, programming tools, and environment variables.
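
The following sketch is illustrative rather than drawn from TeraGrid documentation: it assumes the pre-WS GRAM command-line clients shipped with the Globus Toolkit (globus-job-submit, globus-job-status, and globus-job-get-output) and uses a hypothetical gatekeeper contact string; actual contact strings, job managers, and site policies varied across resource providers.

```python
# Minimal sketch of remote job submission through Globus GRAM, as CTSS-style
# single sign-on plus remote submission might have been driven from a script.
# The gatekeeper contact string is hypothetical; the globus-job-* clients are
# assumed to be installed and a valid grid proxy already initialized.
import subprocess
import time

CONTACT = "tg-login.example.teragrid.org/jobmanager-pbs"  # hypothetical site


def submit(executable, *args):
    """Submit a batch job and return the job handle printed by the client."""
    result = subprocess.run(
        ["globus-job-submit", CONTACT, executable, *args],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()


def wait_and_fetch(handle, poll_seconds=30):
    """Poll the job state, then fetch its output once it has finished."""
    while True:
        state = subprocess.run(
            ["globus-job-status", handle],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        if state in ("DONE", "FAILED"):
            break
        time.sleep(poll_seconds)
    return subprocess.run(
        ["globus-job-get-output", handle],
        check=True, capture_output=True, text=True,
    ).stdout


if __name__ == "__main__":
    job = submit("/bin/hostname")
    print(wait_and_fetch(job))
```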

TeraGrid used a dedicated 10-gigabit-per-second fiber-optic backbone network, with hubs in Chicago, Denver, and Los Angeles. All resource provider sites connected to a backbone node at 10 gigabits per second. Users accessed the facility through national research networks such as the Internet2 Abilene backbone and National LambdaRail.
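
As an illustration of moving data over such links, the sketch below wraps the GridFTP client globus-url-copy from the Globus Toolkit mentioned above; the server name, file path, and the choice of eight parallel streams are assumptions made for the example, not recorded TeraGrid settings.

```python
# Illustrative only: stage a file from a hypothetical TeraGrid GridFTP server
# to local disk with globus-url-copy, using several parallel TCP streams as
# was common practice on long, high-bandwidth paths such as a 10 Gbit/s backbone.
import subprocess

SOURCE = "gsiftp://gridftp.example.teragrid.org/scratch/results.dat"  # hypothetical
DEST = "file:///tmp/results.dat"

subprocess.run(["globus-url-copy", "-p", "8", SOURCE, DEST], check=True)
```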

Usage


TeraGrid users primarily came from U.S. universities; there were roughly 4,000 users at over 200 universities. Academic researchers in the United States could obtain exploratory or development allocations, measured roughly in CPU hours (a simple illustration of this accounting follows the table below), on the basis of an abstract describing the work to be done. More extensive allocations required a proposal reviewed in a quarterly peer-review process. All allocation proposals were handled through the TeraGrid website. Proposers selected the scientific discipline that most closely described their work, which enabled reporting on the allocation and use of TeraGrid by scientific discipline. As of July 2006, the scientific profile of TeraGrid allocations and usage was:

Allocated (%)   Used (%)   Scientific Discipline
19              23         Molecular Biosciences
17              23         Physics
14              10         Astronomical Sciences
12              21         Chemistry
10              4          Materials Research
8               6          Chemical, Thermal Systems
7               7          Atmospheric Sciences
3               2          Advanced Scientific Computing
2               0.5        Earth Sciences
2               0.5        Biological and Critical Systems
1               0.5        Ocean Sciences
1               0.5        Cross-Disciplinary Activities
1               0.5        Computer and Computation Research
0.5             0.25       Integrative Biology and Neuroscience
0.5             0.25       Mechanical and Structural Systems
0.5             0.25       Mathematical Sciences
0.5             0.25       Electrical and Communication Systems
0.5             0.25       Design and Manufacturing Systems
0.5             0.25       Environmental Biology

Each of these discipline categories corresponds to a specific program area of the National Science Foundation.
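
As a rough illustration of what an allocation measured in CPU hours means, the sketch below simply counts cores multiplied by wall-clock hours; actual TeraGrid accounting used per-machine service units, so the numbers are only indicative.

```python
# Rough illustration: one simple way to count "CPU hours" is
# cores in use multiplied by wall-clock hours. TeraGrid's real accounting
# normalized service units per machine, so treat this only as a sketch.
def cpu_hours(nodes, cores_per_node, wall_clock_hours):
    return nodes * cores_per_node * wall_clock_hours

# Example: 16 nodes with 8 cores each running for 12 hours
# consume 1,536 CPU hours under this simple counting rule.
print(cpu_hours(nodes=16, cores_per_node=8, wall_clock_hours=12))
```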

Starting in 2006, TeraGrid provided application-specific services to Science Gateway partners, who served (generally via a web portal) discipline-specific scientific and education communities. Through the Science Gateways program, TeraGrid aimed to broaden access by at least an order of magnitude in the number of scientists, students, and educators able to use TeraGrid.

Resource providers


Similar projects


References

  1. ^ Distributed Terascale Facility (DTF). National Science Foundation. January 2001. Retrieved September 23, 2011.
  2. ^ Charlie Catlett (May 21, 2002). The Philosophy of TeraGrid: Building an Open, Extensible, Distributed TeraScale Facility. 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid. p. 8. doi:10.1109/CCGRID.2002.1017101. ISBN 0-7695-1582-7.
  3. ^ "$150 Million TeraGrid Award Heralds New Era for Scientific Computing". News release. National Science Foundation. August 17, 2005. Retrieved September 23, 2011.
  4. ^ Ann Zimmerman; Thomas A. Finholt (August 2008). Report from the TeraGrid Evaluation Study, Part 1: Project Findings (PDF). National Science Foundation. Retrieved September 23, 2011.
  5. ^ National Science Board (May 26, 2011). "Summary Report of the May 10-11, 2011 Meeting" (PDF). Retrieved September 23, 2011.
  6. ^ a b "XSEDE Project Brings Advanced Cyberinfrastructure, Digital Services, and Expertise to Nation's Scientists and Engineers". News release. National Science Foundation. July 25, 2011. Retrieved September 23, 2011.
  7. ^ XSEDE (August 31, 2022). "Thank you for your interest in XSEDE". Retrieved January 22, 2026.
  8. ^ "Big Red at IU". rt.uits.iu.edu. Archived from the original on February 9, 2015. Retrieved February 9, 2015.
  9. ^ "LONI Gets Funding for TeraGrid Research" (PDF). News release. Louisiana State University. September 9, 2009. Archived from the original (PDF) on July 26, 2011. Retrieved September 23, 2011.
  10. ^ S. Matsuoka; et al. (March 2005). "Japanese Computational Grid Research Project: NAREGI". Proceedings of the IEEE. 93 (3): 522–533. doi:10.1109/JPROC.2004.842748. S2CID 22562197.

External links
