Single system image

In distributed computing, a single system image (SSI) cluster is a cluster of machines that appears to be one single system.[1][2][3] The concept is often considered synonymous with that of a distributed operating system,[4][5] but a single image may be presented for more limited purposes, such as job scheduling alone, which may be achieved by means of an additional layer of software over conventional operating system images running on each node.[6] The interest in SSI clusters is based on the perception that they may be simpler to use and administer than more specialized clusters.

Different SSI systems may provide a more or less complete illusion of a single system.

Features of SSI clustering systems

Different SSI systems may, depending on their intended usage, provide some subset of these features.

Process migration

Main article: Process migration

Many SSI systems provide process migration.[7] Processes may start on one node and be moved to another node, possibly for resource balancing or administrative reasons.[note 1] As processes are moved from one node to another, other associated resources (for example IPC resources) may be moved with them.
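
As an illustrative sketch only (not from the cited sources), the following C program models the idea with a pipe standing in for the cluster interconnect: "node A" runs a counter task up to a migration point, ships its serialized state to "node B", which resumes it. The struct, node names, and migration point are invented for the example; a real SSI kernel migrates the full address space, file table, and IPC state transparently.

    /* Toy model of process migration between two "nodes" (processes). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    struct task_state {          /* the migratable state of our toy task */
        long counter;
        long limit;
    };

    static void run(struct task_state *s, const char *node) {
        while (s->counter < s->limit) {
            printf("[%s] counter = %ld\n", node, s->counter);
            s->counter++;
            if (s->counter == 3 && strcmp(node, "nodeA") == 0)
                return;          /* migration point: stop here on node A */
        }
    }

    int main(void) {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        pid_t pid = fork();
        if (pid == 0) {                      /* child plays "node B" */
            close(fd[1]);
            struct task_state s;
            read(fd[0], &s, sizeof s);       /* receive checkpointed state */
            close(fd[0]);
            run(&s, "nodeB");                /* resume where node A stopped */
            return 0;
        }

        close(fd[0]);                        /* parent plays "node A" */
        struct task_state s = { 0, 6 };
        run(&s, "nodeA");                    /* runs until the migration point */
        write(fd[1], &s, sizeof s);          /* ship state to node B */
        close(fd[1]);
        waitpid(pid, NULL, 0);
        return 0;
    }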

Process checkpointing

Some SSI systems allowcheckpointing of running processes, allowing their current state to be saved and reloaded at a later date.[note 2]Checkpointing can be seen as related to migration, as migrating a process from one node to another can be implemented by first checkpointing the process, then restarting it on another node. Alternatively checkpointing can be considered asmigration to disk.
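
A minimal user-level sketch of the checkpoint/restart cycle in C, under the assumption that the application snapshots its own state to a file (the file name task.ckpt is invented here); kernel-level SSI checkpointers capture the entire process image instead:

    /* Save the task state mid-run; on the next invocation, restore it
     * from disk and resume ("migration to disk"). */
    #include <stdio.h>

    struct task_state { long counter; long limit; };

    int main(void) {
        struct task_state s = { 0, 10 };
        FILE *f = fopen("task.ckpt", "rb");
        if (f) {                              /* restart: reload saved state */
            fread(&s, sizeof s, 1, f);
            fclose(f);
            printf("restored at counter=%ld\n", s.counter);
        }
        for (; s.counter < s.limit; s.counter++) {
            printf("counter=%ld\n", s.counter);
            if (s.counter == 4) {             /* checkpoint mid-run */
                f = fopen("task.ckpt", "wb");
                fwrite(&s, sizeof s, 1, f);
                fclose(f);
            }
        }
        return 0;
    }

If the process is killed after the checkpoint, rerunning the program resumes from counter 4 rather than from the start.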

Single process space

Some SSI systems provide the illusion that all processes are running on the same machine: the process management tools (e.g. ps and kill on Unix-like systems) operate on all processes in the cluster.
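
In concrete terms, a single process space means an ordinary PID-based system call can reach any process in the cluster. The probe below is a sketch written against the plain POSIX API; on an SSI kernel such as OpenSSI, the same unchanged call is routed to whichever node hosts the PID.

    /* Probe whether a PID exists and is signalable, cluster-wide under SSI. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <signal.h>
    #include <sys/types.h>

    int main(int argc, char **argv) {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <pid>\n", argv[0]);
            return 2;
        }
        pid_t pid = (pid_t)atol(argv[1]);
        if (kill(pid, 0) == 0)              /* signal 0: existence check only */
            printf("process %ld exists and is signalable\n", (long)pid);
        else
            perror("kill");
        return 0;
    }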

Single root

Most SSI systems provide a single view of the file system. This may be achieved by a simple NFS server, shared disk devices or even file replication.

The advantage of a single root view is that processes may be run on any available node and access needed files with no special precautions. If the cluster implements process migration, a single root view enables direct access to files from the node where the process is currently running.

Some SSI systems provide a way of "breaking the illusion", having some node-specific files even in a single root. HP TruCluster provides a "context dependent symbolic link" (CDSL) which points to different files depending on the node that accesses it. HP VMScluster provides a search list logical name with node-specific files occluding cluster-shared files where necessary. This capability may be necessary to deal with heterogeneous clusters, where not all nodes have the same configuration. In more complex configurations, such as multiple nodes of multiple architectures over multiple sites, several local disks may combine to form the logical single root.
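
A user-space sketch of how a CDSL-style path can resolve differently per node: a "{node}" token in the link target is replaced with the local hostname. The token syntax and the path are invented for illustration; TruCluster actually encodes a member identifier in the symlink target and resolves it in the kernel.

    /* Expand a per-node token in a shared path with this node's hostname,
     * so each cluster member sees its own file behind one path. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void resolve_cdsl(const char *target, char *out, size_t outsz) {
        char host[64];
        gethostname(host, sizeof host);
        host[sizeof host - 1] = '\0';
        const char *tok = strstr(target, "{node}");
        if (!tok) { snprintf(out, outsz, "%s", target); return; }
        snprintf(out, outsz, "%.*s%s%s",
                 (int)(tok - target), target, host, tok + strlen("{node}"));
    }

    int main(void) {
        char path[256];
        resolve_cdsl("/cluster/members/{node}/etc/config", path, sizeof path);
        printf("this node reads: %s\n", path);   /* differs on each node */
        return 0;
    }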

Single I/O space

Some SSI systems allow all nodes to access the I/O devices (e.g. tapes, disks, serial lines and so on) of other nodes. There may be some restrictions on the kinds of access allowed (for example, OpenSSI cannot mount disk devices from one node on another node).

Single IPC space

Some SSI systems allow processes on different nodes to communicate using inter-process communication mechanisms as if they were running on the same machine. On some SSI systems this can even include shared memory (which can be emulated in software with distributed shared memory).

In most cases inter-node IPC will be slower than IPC on the same machine, possibly drastically slower for shared memory. Some SSI clusters include special hardware to reduce this slowdown.
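
The single-machine baseline that SSI systems extend looks like the following POSIX shared-memory exchange in C (the segment name /ssi_demo is illustrative; compile with -lrt on older glibc). Distributed shared memory emulates the same load/store view across the interconnect, which is where the drastic slowdown can arise.

    /* Two processes on one machine share a counter through POSIX shm. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <sys/wait.h>

    int main(void) {
        int fd = shm_open("/ssi_demo", O_CREAT | O_RDWR, 0600);
        if (fd == -1) { perror("shm_open"); return 1; }
        ftruncate(fd, sizeof(long));
        long *shared = mmap(NULL, sizeof(long), PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (shared == MAP_FAILED) { perror("mmap"); return 1; }
        *shared = 0;

        if (fork() == 0) {            /* child: writes through the shared page */
            *shared = 42;
            return 0;
        }
        wait(NULL);                   /* parent: sees the child's write */
        printf("parent read %ld from shared memory\n", *shared);
        munmap(shared, sizeof(long));
        shm_unlink("/ssi_demo");
        return 0;
    }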

Cluster IP address

Some SSI systems provide a "cluster IP address", a single address visible from outside the cluster that can be used to contact the cluster as if it were one machine. This can be used for load balancing inbound calls to the cluster, directing them to lightly loaded nodes, or for redundancy, moving the cluster address from one machine to another as nodes join or leave the cluster.[note 3]
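
A sketch of the load-balancing half in C, with invented backend addresses: the front end holding the cluster IP picks a node for each inbound connection round-robin. Linux Virtual Server performs this kind of dispatch in the kernel; this is only a user-space illustration of the policy.

    /* Round-robin selection of a backend node for each inbound connection. */
    #include <stdio.h>

    static const char *nodes[] = { "10.0.0.1", "10.0.0.2", "10.0.0.3" };

    /* return the node that should service the next inbound connection */
    static const char *next_node(void) {
        static unsigned i = 0;
        return nodes[i++ % (sizeof nodes / sizeof nodes[0])];
    }

    int main(void) {
        for (int conn = 0; conn < 5; conn++)
            printf("connection %d -> %s\n", conn, next_node());
        return 0;
    }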

Examples

The examples here range from commercial platforms with scaling capabilities to packages/frameworks for creating distributed systems, as well as systems that implement a full single system image.

SSI properties of different clustering systems

Name | Process migration | Process checkpoint | Single process space | Single root | Single I/O space | Single IPC space | Cluster IP address[t 1] | Source model | Latest release date[t 2] | Supported OS
Amoeba[t 3] | Yes | Yes | Yes | Yes | Unknown | Yes | Unknown | Open | July 30, 1996 | Native
AIX TCF | Unknown | Unknown | Unknown | Yes | Unknown | Unknown | Unknown | Closed | March 30, 1990[8] | AIX PS/2 1.2
NonStop Guardian[t 4] | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Closed | September 2025 | NonStop OS
Inferno | No | No | No | Yes | Yes | Yes | Unknown | Open | August 13, 2025 | Native, Windows, Irix, Linux, OS X, FreeBSD, Solaris, Plan 9
Kerrighed | Yes | Yes | Yes | Yes | Unknown | Yes | Unknown | Open | June 14, 2010 | Linux 2.6.30
LinuxPMI[t 5] | Yes | Yes | No | Yes | No | No | Unknown | Open | June 18, 2006 | Linux 2.6.17
LOCUS[t 6] | Yes | Unknown | Yes | Yes | Yes | Yes[t 7] | Unknown | Closed | 1988 | Native
MOSIX | Yes | Yes | No | Yes | No | No | Unknown | Closed | October 24, 2017 | Linux
openMosix[t 8] | Yes | Yes | No | Yes | No | No | Unknown | Open | December 10, 2004 | Linux 2.4.26
OpenPMIx[t 9] | Yes | Yes | No | Yes | No | No | Unknown | Open | September 26, 2025 | Linux
Open-Sharedroot[t 10] | No | No | No | Yes | No | No | Yes | Open | September 1, 2011[9] | Linux
OpenSSI | Yes | No | Yes | Yes | Yes | Yes | Yes | Open | February 18, 2010 | Linux 2.6.10 (Debian, Fedora)
Plan 9 | No[10] | No | No | Yes | Yes | Yes | Yes | Open | January 10, 2015 | Native
Sprite | Yes | Unknown | No | Yes | Yes | No | Unknown | Open | 1992 | Native
TidalScale (now HPE) | Yes | No | Yes | Yes | Yes | Yes | Yes | Closed | August 17, 2020 | Linux, FreeBSD
TruCluster | No | Unknown | No | Yes | No | No | Yes | Closed | October 1, 2010 | Tru64
VMScluster | No | No | Yes | Yes | Yes | Yes | Yes | Closed | November 20, 2024 | OpenVMS
z/VM | Yes | No | Yes | No | No | Yes | Unknown | Closed | September 30, 2024 | Native
UnixWare NonStop Clusters[t 11] | Yes | No | Yes | Yes | Yes | Yes | Yes | Closed | June 2000 | UnixWare
  1. ^ Many of the Linux-based SSI clusters can use the Linux Virtual Server to implement a single cluster IP address.
  2. ^ Green (in the original table) means the software is actively developed.
  3. ^ Amoeba development is carried forward by Dr. Stefan Bosse at BSS Lab (archived 2009-02-03 at the Wayback Machine).
  4. ^ Guardian90 TR90.8, based on R&D by Tandem Computers, c/o Andrea Borr at [1].
  5. ^ LinuxPMI is a successor to openMosix.
  6. ^ LOCUS was used to create IBM AIX TCF.
  7. ^ LOCUS used named pipes for IPC.
  8. ^ openMosix was a fork of MOSIX.
  9. ^ The OpenPMIx project is continuing development of the former openMosix code.
  10. ^ Open-Sharedroot is a shared-root cluster from ATIX.
  11. ^ UnixWare NonStop Clusters was a base for OpenSSI.

Notes

  1. ^ For example, it may be necessary to move long-running processes off a node that is to be shut down for maintenance.
  2. ^ Checkpointing is particularly useful in clusters used for high-performance computing, avoiding lost work in case of a cluster or node restart.
  3. ^ "Leaving a cluster" is often a euphemism for crashing.

References

  1. ^ Pfister, Gregory F. (1998), In Search of Clusters, Upper Saddle River, NJ: Prentice Hall PTR, ISBN 978-0-13-899709-0, OCLC 38300954
  2. ^ Buyya, Rajkumar; Cortes, Toni; Jin, Hai (2001), "Single System Image" (PDF), International Journal of High Performance Computing Applications, 15 (2): 124, doi:10.1177/109434200101500205, S2CID 38921084
  3. ^ Healy, Philip; Lynn, Theo; Barrett, Enda; Morrison, John P. (2016), "Single system image: A survey" (PDF), Journal of Parallel and Distributed Computing, 90–91: 35–51, doi:10.1016/j.jpdc.2016.01.004, hdl:10468/4932
  4. ^ Coulouris, George F.; Dollimore, Jean; Kindberg, Tim (2005), Distributed Systems: Concepts and Design, Addison Wesley, p. 223, ISBN 978-0-321-26354-4
  5. ^ Bolosky, William J.; Draves, Richard P.; Fitzgerald, Robert P.; Fraser, Christopher W.; Jones, Michael B.; Knoblock, Todd B.; Rashid, Rick (1997-05-05), "Operating System Directions for the Next Millennium", 6th Workshop on Hot Topics in Operating Systems (HotOS-VI), Cape Cod, MA, pp. 106–110, CiteSeerX 10.1.1.50.9538, doi:10.1109/HOTOS.1997.595191, ISBN 978-0-8186-7834-9, S2CID 15380352
  6. ^ Prabhu, C.S.R. (2009), Grid and Cluster Computing, PHI Learning, p. 256, ISBN 978-81-203-3428-1
  7. ^ Smith, Jonathan M. (1988), "A survey of process migration mechanisms" (PDF), ACM SIGOPS Operating Systems Review, 22 (3): 28–40, CiteSeerX 10.1.1.127.8095, doi:10.1145/47671.47673, S2CID 6611633
  8. ^ "AIX PS/2 OS".
  9. ^ "Open-Sharedroot GitHub repository". GitHub.
  10. ^ Pike, Rob; Presotto, Dave; Thompson, Ken; Trickey, Howard (1990), "Plan 9 from Bell Labs", Proceedings of the Summer 1990 UKUUG Conference, p. 8: "Process migration is also deliberately absent from Plan 9."