OS-level virtualization

From Wikipedia, the free encyclopedia

OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances, including containers (LXC, Solaris Containers, AIX WPARs, HP-UX SRP Containers, Docker, Podman, Guix), zones (Solaris Containers), virtual private servers (OpenVZ), partitions, virtual environments (VEs), virtual kernels (DragonFly BSD), and jails (FreeBSD jail and chroot).[1] Such instances may look like real computers from the point of view of programs running in them. A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer. Programs running inside a container can only see the container's contents and devices assigned to the container.

On Unix-like operating systems, this feature can be seen as an advanced implementation of the standard chroot mechanism, which changes the apparent root folder for the current running process and its children. In addition to isolation mechanisms, the kernel often provides resource-management features to limit the impact of one container's activities on other containers. Linux containers are all based on the virtualization, isolation, and resource-management mechanisms provided by the Linux kernel, notably Linux namespaces and cgroups.[2]
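The effect of chroot on path resolution can be sketched in a few lines of Python. This is a conceptual illustration only, not how the kernel implements it, and not a security boundary (as noted below, a root user can escape a real chroot); `resolve_in_root` is a hypothetical helper that interprets every path, absolute or not, relative to a virtual root directory, which is essentially how the file system appears to a process inside a chroot.

```python
import os.path

def resolve_in_root(virtual_root: str, requested: str) -> str:
    """Map a path the way a chroot'd process would see it: every path,
    including absolute ones, is interpreted relative to virtual_root."""
    root = os.path.normpath(virtual_root)
    # Inside the chroot, an absolute path is just a path under the root.
    combined = os.path.normpath(os.path.join(root, requested.lstrip("/")))
    # ".." components must not climb above the apparent root.
    if combined != root and not combined.startswith(root + os.sep):
        raise PermissionError(f"path escapes the virtual root: {requested}")
    return combined

print(resolve_in_root("/srv/jail", "/etc/passwd"))       # /srv/jail/etc/passwd
print(resolve_in_root("/srv/jail", "var/log/messages"))  # /srv/jail/var/log/messages
```

A request such as `"../../etc/shadow"` normalizes to a path outside the root and is rejected, mirroring the way chroot confines a process's view of the file tree.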

Although the word container most commonly refers to OS-level virtualization, it is sometimes used to refer to fuller virtual machines operating in varying degrees of concert with the host OS,[citation needed] such as Microsoft's Hyper-V containers.[citation needed] For an overview of virtualization since 1960, see Timeline of virtualization technologies.

Operation


On ordinary operating systems for personal computers, a computer program can see (even though it might not be able to access) all the system's resources. They include:

  • Hardware capabilities that can be employed, such as the CPU and the network connection
  • Data that can be read or written, such as files, folders and network shares
  • Connected peripherals it can interact with, such as a webcam, printer, scanner, or fax

The operating system may be able to allow or deny access to such resources based on which program requests them and the user account in whose context it runs. The operating system may also hide those resources, so that when the computer program enumerates them they do not appear in the enumeration results. Nevertheless, from a programming point of view, the program has interacted with those resources and the operating system has mediated that interaction.

With operating-system-level virtualization, or containerization, it is possible to run programs within containers to which only parts of these resources are allocated. A program expecting to see the whole computer, once run inside a container, can see only the allocated resources and believes them to be all that is available. Several containers can be created on each operating system, and a subset of the computer's resources is allocated to each of them. Each container may contain any number of computer programs. These programs may run concurrently or separately, and may even interact with one another.
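The allocation model described above can be sketched as follows. `Host` and `Container` are purely illustrative names, not any real API: a program querying resources from inside the container sees only the allocated subset and has no way to learn what else the host owns.

```python
class Container:
    def __init__(self, allocated):
        self._allocated = set(allocated)

    def enumerate_resources(self):
        # Programs inside the container can only ever see this subset;
        # the host's remaining resources do not appear at all.
        return sorted(self._allocated)

class Host:
    def __init__(self, resources):
        self.resources = set(resources)

    def create_container(self, allocated):
        # A container can only be granted resources the host actually owns.
        allocated = set(allocated)
        missing = allocated - self.resources
        if missing:
            raise ValueError(f"host does not own: {sorted(missing)}")
        return Container(allocated)

host = Host({"cpu0", "cpu1", "eth0", "webcam", "/share/docs"})
ct = host.create_container({"cpu0", "eth0"})
print(ct.enumerate_resources())  # ['cpu0', 'eth0']
```

Creating several containers from the same host, each with a different subset, models the "several containers per operating system" arrangement from the paragraph above.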

Containerization has similarities to application virtualization: in the latter, only one computer program is placed in an isolated container, and the isolation applies to the file system only.

Uses


Operating-system-level virtualization is commonly used in virtual hosting environments, where it is useful for securely allocating finite hardware resources among a large number of mutually distrusting users. System administrators may also use it to consolidate server hardware by moving services on separate hosts into containers on one server.

Other typical scenarios include separating several programs into separate containers for improved security, hardware independence, and added resource-management features.[3] The improved security provided by the chroot mechanism, however, is not perfect.[4] Operating-system-level virtualization implementations capable of live migration can also be used for dynamic load balancing of containers between nodes in a cluster.

Overhead


Operating-system-level virtualization usually imposes less overhead than full virtualization because programs in OS-level virtual partitions use the operating system's normal system call interface and do not need to be subjected to emulation or run in an intermediate virtual machine, as is the case with full virtualization (such as VMware ESXi, QEMU, or Hyper-V) and paravirtualization (such as Xen or User-mode Linux). This form of virtualization also does not require hardware support for efficient performance.

Flexibility


Operating-system-level virtualization is not as flexible as other virtualization approaches, since it cannot host a guest operating system different from the host one, or a different guest kernel. For example, with Linux, different distributions are fine, but other operating systems such as Windows cannot be hosted.

Solaris partially overcomes the limitation described above with its branded zones feature, which provides the ability to run an environment within a container that emulates an older Solaris 8 or 9 version in a Solaris 10 host. Linux branded zones (referred to as "lx" branded zones) are also available on x86-based Solaris systems, providing a complete Linux user space and support for the execution of Linux applications; additionally, Solaris provides utilities needed to install Red Hat Enterprise Linux 3.x or CentOS 3.x Linux distributions inside "lx" zones.[6][7] However, in 2010 Linux branded zones were removed from Solaris; in 2014 they were reintroduced in illumos, the open-source Solaris fork, supporting 32-bit Linux kernels.[8]

Storage


Some implementations provide file-level copy-on-write (CoW) mechanisms. (Most commonly, a standard file system is shared between partitions, and those partitions that change the files automatically create their own copies.) This is easier to back up, more space-efficient and simpler to cache than the block-level copy-on-write schemes common on whole-system virtualizers. Whole-system virtualizers, however, can work with non-native file systems and create and roll back snapshots of the entire system state.
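The file-level copy-on-write idea can be sketched with hypothetical names as follows: partitions share one read-only base tree, and a partition gets its own private copy of a file only at the moment it first writes to it.

```python
class CowPartition:
    """One partition's view of a shared, read-only base file system.

    Reads fall through to the shared base; the first write copies the
    file into a private overlay ("copy-up"), so the base, and therefore
    every other partition, is unaffected.
    """

    def __init__(self, base):
        self.base = base      # shared dict: path -> contents (never mutated)
        self.overlay = {}     # this partition's private copies

    def read(self, path):
        if path in self.overlay:
            return self.overlay[path]
        return self.base[path]

    def write(self, path, data):
        self.overlay[path] = data  # copy-on-write: base stays pristine

base = {"/etc/motd": "welcome"}
a, b = CowPartition(base), CowPartition(base)
a.write("/etc/motd", "hello from A")
print(a.read("/etc/motd"))  # hello from A
print(b.read("/etc/motd"))  # welcome
```

Because only modified files occupy private space, backing up or caching a partition reduces to handling its (usually small) overlay, which is the space-efficiency advantage the paragraph above describes.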

Implementations


Actively maintained or developed implementations

| Mechanism | Operating system | License | Start of development | File system isolation | Copy on write | Disk quotas | I/O rate limiting | Memory limits | CPU quotas | Network isolation | Nested virtualization | Partition checkpointing and live migration | Root privilege isolation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| chroot | Most UNIX-like operating systems | Varies by operating system | 1982 | Partial[a] | No | No | No | No | No | No | Yes | No | No |
| Docker | Linux,[10] Windows x64,[11] macOS[12] | Apache License 2.0 | 2013 | Yes | Yes | Partial[b] | Yes (since 1.10) | Yes | Yes | Yes | Yes | Only in experimental mode with CRIU[1] | Yes (since 1.10) |
| Podman | Linux, Windows, macOS, FreeBSD | Apache License 2.0 | 2018 | Yes | Yes | Yes[14] | Yes | Yes | Yes | Yes | Yes | Yes[15] | Yes |
| LXC | Linux | GNU GPLv2 | 2008 | Yes[16] | Yes | Partial[c] | Partial[d] | Yes | Yes | Yes | Yes | Yes | Yes[16] |
| Apptainer (formerly Singularity[17]) | Linux | BSD Licence | 2015[18] | Yes[19] | Yes | Yes | No | No | No | No | No | No | Yes[20] |
| OpenVZ | Linux | GNU GPLv2 | 2005 | Yes | Yes[21] | Yes | Yes[e] | Yes | Yes | Yes[f] | Partial[g] | Yes | Yes[h] |
| Virtuozzo | Linux, Windows | Trialware | 2000[25] | Yes | Yes | Yes | Yes[i] | Yes | Yes | Yes[f] | Partial[j] | Yes | Yes |
| Solaris Containers (Zones) | illumos (OpenSolaris), Solaris | CDDL, Proprietary | 2004 | Yes | Yes (ZFS) | Yes | Partial[k] | Yes | Yes | Yes[l][28][29] | Partial[m] | Partial[n][o] | Yes[p] |
| FreeBSD jail | FreeBSD, DragonFly BSD | BSD License | 2000[31] | Yes | Yes (ZFS) | Yes[q] | Yes | Yes[32] | Yes | Yes[33] | Yes | Partial[34][35] | Yes[36] |
| vkernel | DragonFly BSD | BSD Licence | 2006[37] | Yes[38] | Yes[38] | ? | ? | Yes[39] | Yes[39] | Yes[40] | ? | ? | Yes |
| WPARs | AIX | Commercial proprietary software | 2007 | Yes | No | Yes | Yes | Yes | Yes | Yes[r] | No | Yes[42] | ? |
| iCore Virtual Accounts | Windows XP | Freeware | 2008 | Yes | No | Yes | No | No | No | No | ? | No | ? |
| Sandboxie | Windows | GNU GPLv3 | 2004 | Yes | Yes | Partial | No | No | No | Partial | No | No | Yes |
| systemd-nspawn | Linux | GNU LGPLv2.1+ | 2010 | Yes | Yes | Yes[43][44] | Yes[43][44] | Yes[43][44] | Yes[43][44] | Yes | ? | ? | Yes |
| Turbo | Windows | Freemium | 2012 | Yes | No | No | No | No | No | Yes | No | No | Yes |

Historical or defunct implementations

| Mechanism | Operating system | License | Actively developed since or between | File system isolation | Copy on write | Disk quotas | I/O rate limiting | Memory limits | CPU quotas | Network isolation | Nested virtualization | Partition checkpointing and live migration | Root privilege isolation |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Linux-VServer (security context) | Linux, Windows Server 2016 | GNU GPLv2 | 2001–2018 | Yes | Yes | Yes | Yes[s] | Yes | Yes | Partial[t] | ? | No | Partial[u] |
| lmctfy | Linux | Apache License 2.0 | 2013–2015 | Yes | Yes | Yes | Yes[s] | Yes | Yes | Partial[t] | ? | No | Partial[u] |
| sysjail | OpenBSD, NetBSD | BSD License | 2006–2009 | Yes | No | No | No | No | No | Yes | No | No | ? |
| rkt (Rocket) | Linux | Apache License 2.0 | 2014[46]–2018 | Yes | Yes | Yes | Yes | Yes | Yes | Yes | ? | ? | Yes |


Notes

  a. Root user can easily escape from chroot. Chroot was never intended to be used as a security mechanism.[9]
  b. For the btrfs, overlay2, windowsfilter, and zfs storage drivers.[13]
  c. Disk quotas per container are possible when using separate partitions for each container with the help of LVM, or when the underlying host filesystem is btrfs, in which case btrfs subvolumes are automatically used.
  d. I/O rate limiting is supported when using Btrfs.
  e. Available since Linux kernel 2.6.18-028stable021. The implementation is based on the CFQ disk I/O scheduler, but it is a two-level schema, so I/O priority is per-container rather than per-process.[22]
  f. Each container can have its own IP addresses, firewall rules, routing tables and so on. Three different networking schemes are possible: route-based, bridge-based, and assigning a real network device (NIC) to a container.
  g. Docker containers can run inside OpenVZ containers.[23]
  h. Each container may have root access without being able to affect other containers.[24]
  i. Available since version 4.0, January 2008.
  j. Docker containers can run inside Virtuozzo containers.[26]
  k. Yes with illumos.[27]
  l. See Solaris network virtualization and resource control for more details.
  m. Only when the top level is a KVM zone (illumos) or a kz zone (Oracle).
  n. Starting in Solaris 11.3 Beta, Solaris Kernel Zones may use live migration.
  o. Cold migration (shutdown-move-restart) is implemented.
  p. Non-global zones are restricted so that they may not affect other zones, via a capability-limiting approach. The global zone may administer the non-global zones.[30]
  q. Check the "allow.quotas" option and the "Jails and file systems" section on the FreeBSD jail man page for details.
  r. Available since TL 02.[41]
  s. Using the CFQ scheduler, there is a separate queue per guest.
  t. Networking is based on isolation, not virtualization.
  u. A total of 14 user capabilities are considered safe within a container. The rest cannot be granted to processes within that container without allowing the process to potentially interfere with things outside that container.[45]

References

  1. Hogg, Scott (2014-05-26). "Software containers: Used more frequently than most realize". Network World. Network World, Inc. Retrieved 2015-07-09. "There are many other OS-level virtualization systems such as: Linux OpenVZ, Linux-VServer, FreeBSD Jails, AIX Workload Partitions (WPARs), HP-UX Containers (SRP), Solaris Containers, among others."
  2. Rosen, Rami. "Namespaces and Cgroups, the basis of Linux Containers" (PDF). Retrieved 18 August 2016.
  3. "Secure Bottlerocket deployments on Amazon EKS with KubeArmor | Containers". aws.amazon.com. 2022-10-20. Retrieved 2023-06-20.
  4. Korff, Yanek; Hope, Paco; Potter, Bruce (2005). Mastering FreeBSD and OpenBSD Security. O'Reilly Series. O'Reilly Media, Inc. p. 59. ISBN 0-596-00626-8.
  5. Huang, D. (2015). "Experiences in using OS-level virtualization for block I/O". Proceedings of the 10th Parallel Data Storage Workshop (PDF). pp. 13–18. doi:10.1145/2834976.2834982. ISBN 978-1-4503-4008-3. S2CID 3867190.
  6. "System administration guide: Oracle Solaris containers-resource management and Oracle Solaris zones, Chapter 16: Introduction to Solaris zones". Oracle Corporation. 2010. Retrieved 2014-09-02.
  7. "System administration guide: Oracle Solaris containers-resource management and Oracle Solaris zones, Chapter 31: About branded zones and the Linux branded zone". Oracle Corporation. 2010. Retrieved 2014-09-02.
  8. Cantrill, Bryan (2014-09-28). "The dream is alive! Running Linux containers on an illumos kernel". slideshare.net. Retrieved 2014-10-10.
  9. "3.5. Limiting your program's environment". freebsd.org.
  10. "Docker drops LXC as default execution environment". InfoQ.
  11. "Install Docker Desktop on Windows | Docker documentation". Docker. 9 February 2023.
  12. "Get started with Docker Desktop for Mac". Docker documentation. December 6, 2019.
  13. "docker container run - Set storage driver options per container (--storage-opt)". docs.docker.com. 22 February 2024.
  14. "podman-volume-create — Podman documentation". docs.podman.io. Retrieved 19 October 2025.
  15. "podman-container-checkpoint — Podman documentation". docs.podman.io. Retrieved 19 October 2025.
  16. Graber, Stéphane (1 January 2014). "LXC 1.0: Security features [6/10]". Retrieved 12 February 2014. "LXC now has support for user namespaces. [...] LXC is no longer running as root so even if an attacker manages to escape the container, he'd find himself having the privileges of a regular user on the host."
  17. "Community Announcement | Apptainer - Portable, Reproducible Containers". apptainer.org. 2021-11-30. Retrieved 19 October 2025.
  18. "Sylabs brings Singularity containers into commercial HPC | Top 500 supercomputer sites". www.top500.org.
  19. "SIF — Containing your containers". www.sylabs.io. 14 March 2018.
  20. Kurtzer, Gregory M.; Sochat, Vanessa; Bauer, Michael W. (May 11, 2017). "Singularity: Scientific containers for mobility of compute". PLOS ONE. 12 (5): e0177459. Bibcode:2017PLoSO..1277459K. doi:10.1371/journal.pone.0177459. PMC 5426675. PMID 28494014.
  21. Bronnikov, Sergey. "Comparison on OpenVZ wiki page". OpenVZ Wiki. OpenVZ. Retrieved 28 December 2018.
  22. "I/O priorities for containers". OpenVZ Virtuozzo Containers Wiki.
  23. "Docker inside CT". OpenVZ Virtuozzo Containers Wiki.
  24. "Container". OpenVZ Virtuozzo Containers Wiki.
  25. "Initial public prerelease of Virtuozzo (named ASPcomplete at that time)".
  26. "Parallels Virtuozzo now provides native support for Docker". Archived from the original on 2016-05-13. Retrieved 2015-06-03.
  27. Pijewski, Bill (March 1, 2011). "Our ZFS I/O Throttle". wdp.dtrace.org.
  28. "Network virtualization and resource control (Crossbow) FAQ". Archived 2008-06-01 at the Wayback Machine.
  29. "Managing network virtualization and network resources in Oracle Solaris 11.4". docs.oracle.com.
  30. Oracle Solaris 11.1 administration, Oracle Solaris zones, Oracle Solaris 10 zones and resource management, E29024.pdf, pp. 356–360. Available within an archive.
  31. "Contain your enthusiasm - Part two: Jails, zones, OpenVZ, and LXC". "Jails were first introduced in FreeBSD 4.0 in 2000."
  32. "Hierarchical resource limits - FreeBSD Wiki". wiki.freebsd.org. 2012-10-27. Retrieved 2014-01-15.
  33. Zec, Marko (2003-06-13). "Implementing a clonable network stack in the FreeBSD kernel" (PDF). usenix.org.
  34. "VPS for FreeBSD". Retrieved 2016-02-20.
  35. "[Announcement] VPS // OS virtualization // alpha release". 31 August 2012. Retrieved 2016-02-20.
  36. "3.5. Limiting your program's environment". freebsd.org. Retrieved 2014-01-15.
  37. Dillon, Matthew (2006). "sys/vkernel.h". BSD cross reference. DragonFly BSD.
  38. "vkd(4) — Virtual kernel disc". DragonFly BSD. "treats the disk image as copy-on-write."
  39. Wildner, Sascha (2007-01-08). "vkernel, vcd, vkd, vke — virtual kernel architecture". DragonFly miscellaneous information manual. DragonFly BSD.
  40. "vkernel, vcd, vkd, vke - virtual kernel architecture". DragonFly On-Line Manual Pages. DragonFly BSD.
  41. "IBM fix pack information for: WPAR network isolation - United States". ibm.com. 21 July 2011.
  42. "Live application mobility in AIX 6.1". www.ibm.com. June 3, 2008.
  43. "systemd-nspawn". www.freedesktop.org.
  44. "2.3. Modifying control groups". Red Hat Enterprise Linux 7. Red Hat Customer Portal.
  45. "Paper - Linux-VServer". linux-vserver.org.
  46. Polvi, Alex. "CoreOS is building a container runtime, rkt". CoreOS Blog. Archived from the original on 2019-04-01. Retrieved 12 March 2019.
