Fragmentation (computing)

From Wikipedia, the free encyclopedia
Inefficient use of storage space

In computer storage, fragmentation is a phenomenon in which storage space, such as computer memory or a hard drive, is used inefficiently, reducing capacity or performance and often both. The exact consequences of fragmentation depend on the specific system of storage allocation in use and the particular form of fragmentation. In many cases, fragmentation leads to storage space being "wasted", and programs tend to run inefficiently due to the shortage of memory.

Basic principle


In main memory fragmentation, when a computer program requests blocks of memory from the computer system, the blocks are allocated in chunks. When the program is finished with a chunk, it can free it back to the system, making it available to be allocated again later, to the same or another program. The size of a chunk and the length of time a program holds it vary. During its lifespan, a program can request and free many chunks of memory.

Fragmentation can occur when a block of memory is requested by a program and allocated to that program, but the program never frees it.[1] Memory that is theoretically available but unused remains marked as allocated, which reduces the amount of globally available memory and makes it harder for programs to request and access memory.

When a program is started, the free memory areas are long and contiguous. Over time and with use, the long contiguous regions become fragmented into smaller and smaller contiguous areas. Eventually, it may become impossible for the program to obtain large contiguous chunks of memory.
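The request/free lifecycle above can be sketched with a toy free-list model (the region layout and helper names here are hypothetical, purely for illustration, not any real allocator's API):

```python
# Toy allocator: memory is a list of (start, size, owner) regions.
# Freeing chunks out of order leaves scattered holes in what began
# as one long contiguous free area.

def allocate(regions, size, owner):
    """Split the first free region large enough; return updated regions or None."""
    for i, (start, length, who) in enumerate(regions):
        if who is None and length >= size:
            new = [(start, size, owner)]
            if length > size:
                new.append((start + size, length - size, None))
            return regions[:i] + new + regions[i + 1:]
    return None  # no single free region can satisfy the request

def free(regions, owner):
    """Mark the owner's regions free (no coalescing, to keep holes visible)."""
    return [(s, l, None if w == owner else w) for (s, l, w) in regions]

regions = [(0, 100, None)]              # one long contiguous free area
for name in "ABCDE":
    regions = allocate(regions, 20, name)
regions = free(regions, "B")
regions = free(regions, "D")
holes = [l for (_, l, w) in regions if w is None]
print(holes)                            # two separate 20-unit holes
print(allocate(regions, 40, "F"))       # None: 40 units free, but not contiguous
```

After only a few allocations and frees, 40 units are free in total, yet a 40-unit request already fails.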

Types


There are three different but related forms of fragmentation: external fragmentation, internal fragmentation, and data fragmentation, which can be present in isolation or in combination. Fragmentation is often accepted in return for improvements in speed or simplicity. Analogous phenomena occur for other resources such as processors; see below.

Internal fragmentation


Memory paging creates internal fragmentation because an entire page frame will be allocated whether or not that much storage is needed.[2] Due to the rules governing memory allocation, more computer memory is sometimes allocated than is needed. For example, memory can often be provided to programs only in chunks (usually a multiple of 4 bytes), so a program that requests 29 bytes will actually get a chunk of 32 bytes, and the excess memory goes to waste. In this scenario, the unusable memory, known as slack space, is contained within an allocated region. A fixed-partition arrangement suffers from especially inefficient memory use: any process, no matter how small, occupies an entire partition. This waste is called internal fragmentation.[3][4]
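The 29-byte example above amounts to rounding each request up to the allocator's granularity; a minimal sketch (the function name is an assumption of this illustration):

```python
# Internal fragmentation from rounding: requests are served in chunks
# that are a multiple of a fixed alignment (4 bytes, as in the text).

def allocated_size(requested, alignment=4):
    """Round a request up to the next multiple of the alignment."""
    return -(-requested // alignment) * alignment  # ceiling division

requests = [29, 1, 17, 32]
waste = sum(allocated_size(r) - r for r in requests)
print(allocated_size(29))   # 32: the 29-byte request gets 3 bytes of slack
print(waste)                # 9 bytes of internal fragmentation in total
```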

Unlike other types of fragmentation, internal fragmentation is difficult to reclaim; usually the best way to remove it is with a design change. For example, in dynamic memory allocation, memory pools drastically cut internal fragmentation by spreading the space overhead over a larger number of objects.

External fragmentation


External fragmentation arises when free memory is separated into small blocks interspersed with allocated memory. It is a weakness of certain storage allocation algorithms that fail to order memory used by programs efficiently. The result is that, although free storage is available, it is effectively unusable because it is divided into pieces that are individually too small to satisfy the demands of the application. The term "external" refers to the fact that the unusable storage is outside the allocated regions.

For example, consider a situation in which a program allocates three contiguous blocks of memory and then frees the middle block. The memory allocator can use this freed block for future allocations, but it cannot use it for an allocation that is larger than the freed block itself.
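The scenario above can be sketched as follows (a minimal illustration; the hole representation and function name are assumptions of this sketch):

```python
# External fragmentation: free memory is tracked as (start, size) holes.
# After freeing the middle of three contiguous 0x1000-byte blocks, a
# request larger than that single hole fails even though the space is free.

def can_allocate(holes, size):
    """A request succeeds only if some single hole is large enough."""
    return any(length >= size for (_, length) in holes)

holes = [(0x1000, 0x1000)]          # the freed middle block B
print(can_allocate(holes, 0x0800))  # True: fits inside the freed block
print(can_allocate(holes, 0x1800))  # False: larger than the single free hole
```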

External fragmentation also occurs in file systems as many files of different sizes are created, change size, and are deleted. The effect is even worse if a file that is divided into many small pieces is deleted, because this leaves similarly small regions of free space.

Time | 0x0000 | 0x1000 | 0x2000 | 0x3000 | 0x4000 | 0x5000 | Comments
-----+--------+--------+--------+--------+--------+--------+---------
  0  |        |        |        |        |        |        | Start with all memory available for storage.
  1  |   A    |   B    |   C    |        |        |        | Allocated three blocks A, B, and C, each of size 0x1000.
  2  |   A    |        |   C    |        |        |        | Freed block B. Notice that the memory B used cannot hold a block larger than B's size.
  3  |   A    |   C    |        |        |        |        | Block C moved into block B's empty slot, allowing the remaining space to be used for a larger block of size 0x4000.

Data fragmentation

Main article: File system fragmentation

Data fragmentation occurs when a collection of data in memory is broken up into many pieces that are not close together. It is typically the result of attempting to insert a large object into storage that has already suffered external fragmentation. For example, files in a file system are usually managed in units called blocks or clusters. When a file system is created, there is free space to store file blocks together contiguously, allowing for rapid sequential file reads and writes. However, as files are added, removed, and changed in size, the free space becomes externally fragmented, leaving only small holes in which to place new data. When a new file is written, or when an existing file is extended, the operating system puts the new data in new non-contiguous data blocks to fit into the available holes. The new data blocks are necessarily scattered, slowing access due to seek time and rotational latency of the read/write head and incurring additional overhead to manage the additional locations. This is called file system fragmentation.

When writing a new file of a known size, if there are any empty holes that are larger than that file, the operating system can avoid data fragmentation by putting the file into any one of those holes. There are a variety of algorithms for selecting which of those potential holes to put the file into; each of them is a heuristic approximate solution to the bin packing problem. The "best fit" algorithm chooses the smallest hole that is big enough. The "worst fit" algorithm chooses the largest hole. The "first fit" algorithm chooses the first hole that is big enough. The "next fit" algorithm keeps track of where each file was written. The "next fit" algorithm is faster than "first fit", which is in turn faster than "best fit", which is the same speed as "worst fit".[5]
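The four heuristics can be sketched as follows (a minimal illustration, not a standard API; holes are assumed to be (start, size) pairs, and each function returns the index of the chosen hole or None):

```python
# Hole-selection heuristics for placing a new allocation of a given size.

def first_fit(holes, size):
    """First hole that is big enough."""
    return next((i for i, (_, l) in enumerate(holes) if l >= size), None)

def best_fit(holes, size):
    """Smallest hole that is big enough."""
    fits = [(l, i) for i, (_, l) in enumerate(holes) if l >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Largest hole."""
    fits = [(l, i) for i, (_, l) in enumerate(holes) if l >= size]
    return max(fits)[1] if fits else None

def next_fit(holes, size, start_index=0):
    """Like first fit, but resume scanning where the last search stopped."""
    n = len(holes)
    for k in range(n):
        i = (start_index + k) % n
        if holes[i][1] >= size:
            return i
    return None

holes = [(0, 50), (100, 20), (200, 120), (400, 30)]
print(first_fit(holes, 25))    # 0: the 50-unit hole comes first
print(best_fit(holes, 25))     # 3: 30 is the smallest adequate hole
print(worst_fit(holes, 25))    # 2: 120 is the largest hole
print(next_fit(holes, 25, 2))  # 2: scan resumes at index 2
```

"Next fit" avoids rescanning early holes on every request, which is why it tends to be the fastest of the four.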

Just as compaction can eliminate external fragmentation, data fragmentation can be eliminated by rearranging data storage so that related pieces are close together. For example, the primary job of a defragmentation tool is to rearrange blocks on disk so that the blocks of each file are contiguous. Most defragmenting utilities also attempt to reduce or eliminate free space fragmentation. Some moving garbage collectors, utilities that perform automatic memory management, will also move related objects close together (this is called compacting) to improve cache performance.

There are four kinds of systems that never experience data fragmentation—they always store every file contiguously. All four kinds have significant disadvantages compared to systems that allow at least some temporary data fragmentation:

  1. Simply write each file contiguously. If there isn't already enough contiguous free space to hold the file, the system immediately fails to store the file—even when there are many little bits of free space from deleted files that add up to more than enough to store the file.
  2. If there isn't already enough contiguous free space to hold the file, use a copying collector to convert many little bits of free space into one contiguous free region big enough to hold the file. This takes a lot more time than breaking the file up into fragments and putting those fragments into the available free space.
  3. Write the file into any free block, using fixed-size block storage. If a programmer picks a fixed block size too small, the system immediately fails to store some files—files larger than the block size—even when there are many free blocks that add up to more than enough to store the file. If a programmer picks a block size too big, a lot of space is wasted on internal fragmentation.
  4. Some systems avoid dynamic allocation entirely, pre-storing (contiguous) space for all possible files they will need—for example, MultiFinder pre-allocated a chunk of RAM to each application as it was started, according to how much RAM that application's programmer claimed it would need.

Comparison


Compared to external fragmentation, overhead and internal fragmentation account for little loss in terms of wasted memory and reduced performance. External memory fragmentation can be quantified as:

External memory fragmentation = 1 − (largest block of free memory) / (total free memory)

Fragmentation of 0% means that all the free memory is in a single large block; fragmentation is 90% (for example) when 100 MB of free memory is present but the largest free block is just 10 MB.
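The metric above can be computed directly on the example figures (the function name is an assumption of this sketch; free blocks are given in MB):

```python
# External fragmentation metric: 1 - largest free block / total free memory.

def external_fragmentation(free_blocks_mb):
    total = sum(free_blocks_mb)
    return 1 - max(free_blocks_mb) / total if total else 0.0

print(external_fragmentation([100]))      # 0.0: all free memory in one block
print(external_fragmentation([10] * 10))  # 0.9: 100 MB free, largest block 10 MB
```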

External fragmentation tends to be less of a problem in file systems than in primary memory (RAM) storage systems, because programs usually require their RAM storage requests to be fulfilled with contiguous blocks, but file systems typically are designed to be able to use any collection of available blocks (fragments) to assemble a file which logically appears contiguous. Therefore, if a highly fragmented file or many small files are deleted from a full volume and then a new file with size equal to the newly freed space is created, the new file will simply reuse the same fragments that were freed by the deletion. If what was deleted was one file, the new file will be just as fragmented as that old file was, but in any case there will be no barrier to using all the (highly fragmented) free space to create the new file. In RAM, on the other hand, the storage systems used often cannot assemble a large block to meet a request from small noncontiguous free blocks, and so the request cannot be fulfilled and the program cannot proceed to do whatever it needed that memory for (unless it can reissue the request as a number of smaller separate requests).

Problems


Storage failure


The most severe problem caused by fragmentation is the failure of a process or the whole system due to premature resource exhaustion: if a contiguous block must be stored and cannot be stored, failure occurs. Fragmentation causes this to occur even if there is enough of the resource in total, just not a contiguous amount. For example, if a computer has 4 GiB of memory and 2 GiB are free, but the memory is fragmented in an alternating sequence of 1 MiB used, 1 MiB free, then a request for 1 contiguous GiB of memory cannot be satisfied even though 2 GiB in total are free.
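The arithmetic of the 4 GiB example can be checked directly (a sketch of the scenario as stated, with the alternating 1 MiB pattern):

```python
# Alternating 1 MiB used / 1 MiB free over 4 GiB: 2048 free runs of 1 MiB each.
MIB = 1024 * 1024
free_runs = [MIB] * 2048

total_free = sum(free_runs)       # 2 GiB free in total
largest_run = max(free_runs)      # but only 1 MiB is contiguous
request = 1024 * MIB              # a request for 1 contiguous GiB

print(total_free >= request)      # True: enough memory in total
print(largest_run >= request)     # False: the request still fails
```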

In order to avoid this, the allocator may, instead of failing, trigger a defragmentation (or memory compaction cycle) or other resource reclamation, such as a major garbage collection cycle, in the hope that it will then be able to satisfy the request. This allows the process to proceed, but can severely impact performance.

Performance degradation


Fragmentation causes performance degradation for a number of reasons. Most basically, fragmentation increases the work required to allocate and access a resource. For example, on a hard drive or tape drive, sequential data reads are very fast, but seeking to a different address is slow, so reading or writing a fragmented file requires numerous seeks and is thus much slower, in addition to causing greater wear on the device. Further, if a resource is not fragmented, allocation requests can simply be satisfied by returning a single block from the start of the free area. However, if it is fragmented, the request requires either searching for a large enough free block, which may take a long time, or fulfilling the request with several smaller blocks (if this is possible), which results in this allocation being fragmented and requires additional overhead to manage the several pieces.

A subtler problem is that fragmentation may prematurely exhaust a cache, causing thrashing, due to caches holding blocks, not individual data. For example, suppose a program has a working set of 256 KiB and is running on a computer with a 256 KiB cache (say, an L2 instruction+data cache), so the entire working set fits in cache and thus executes quickly, at least in terms of cache hits. Suppose further that it has 64 translation lookaside buffer (TLB) entries, each for a 4 KiB page: each memory access requires a virtual-to-physical translation, which is fast if the page is in cache (here, the TLB). If the working set is unfragmented, then it will fit onto exactly 64 pages (the page working set will be 64 pages), and all memory lookups can be served from cache. However, if the working set is fragmented, then it will not fit into 64 pages, and execution will slow due to thrashing: pages will be repeatedly added and removed from the TLB during operation. Thus cache sizing in system design must include margin to account for fragmentation.
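The sizing arithmetic above, using the stated figures (the half-used-page assumption in the second part is an illustration of fragmentation, not from the source):

```python
# TLB working-set arithmetic: 256 KiB working set, 4 KiB pages, 64 TLB entries.
KIB = 1024
working_set = 256 * KIB
page_size = 4 * KIB
tlb_entries = 64

pages_unfragmented = working_set // page_size
print(pages_unfragmented)              # 64: exactly fills the TLB

# If fragmentation scatters the data so that each page is only half used,
# the page working set doubles and no longer fits in the TLB.
pages_fragmented = working_set // (page_size // 2)
print(pages_fragmented > tlb_entries)  # True: the TLB thrashes
```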

Memory fragmentation is one of the most severe problems faced by system managers.[citation needed] Over time, it leads to degradation of system performance. Eventually, memory fragmentation may lead to complete loss of (application-usable) free memory.

Memory fragmentation is a kernel programming level problem. During real-time computing of applications, fragmentation levels can reach as high as 99% and may lead to system crashes or other instabilities.[citation needed] This type of system crash can be difficult to avoid, as it is impossible to anticipate the critical rise in levels of memory fragmentation. However, while it may not be possible for a system to continue running all programs in the case of excessive memory fragmentation, a well-designed system should be able to recover from the critical fragmentation condition by moving some memory blocks used by the system itself in order to enable consolidation of free memory into fewer, larger blocks, or, in the worst case, by terminating some programs to free their memory and then defragmenting the resulting sum total of free memory. This will at least avoid a true crash in the sense of system failure and allow the system to continue running some programs, save program data, and so on.

Fragmentation is a phenomenon of system software design; different software will be susceptible to fragmentation to different degrees, and it is possible to design a system that will never be forced to shut down or kill processes as a result of memory fragmentation.

Analogous phenomena


While fragmentation is best known as a problem in memory allocation, analogous phenomena occur for other resources, notably processors.[6] For example, in a system that uses time-sharing for preemptive multitasking, but that does not check whether a process is blocked, a process that executes for part of its time slice but then blocks and cannot proceed for the remainder of its time slice wastes time because of the resulting internal fragmentation of time slices. More fundamentally, time-sharing itself causes external fragmentation of processes due to running them in fragmented time slices, rather than in a single unbroken run. The resulting cost of process switching and increased cache pressure from multiple processes using the same caches can result in degraded performance.

In concurrent systems, particularly distributed systems, when a group of processes must interact in order to progress, if the processes are scheduled at separate times or on separate machines (fragmented across time or machines), the time spent waiting for each other or communicating with each other can severely degrade performance. Instead, performant systems require coscheduling of the group.[6]

Some flash file systems have several different kinds of internal fragmentation involving "dead space" and "dark space".[7]

References

  1. ^ "CS360 Lecture notes -- Fragmentation". web.eecs.utk.edu. Retrieved 2024-09-29.
  2. ^ Null, Linda; Lobur, Julia (2006). The Essentials of Computer Organization and Architecture. Jones and Bartlett Publishers. p. 315. ISBN 9780763737696. Retrieved Jul 15, 2021.
  3. ^ "Partitioning, Partition Sizes and Drive Lettering". The PC Guide. April 17, 2001. Retrieved 2012-01-20.
  4. ^ "Switches: Sector copy". Symantec. 2001-01-14. Archived from the original on July 19, 2012. Retrieved 2012-01-20.
  5. ^ Samanta, D. (2004). Classic Data Structures. p. 76.
  6. ^ a b Ousterhout, John K. (1982). "Scheduling Techniques for Concurrent Systems" (PDF). Proceedings of Third International Conference on Distributed Computing Systems. pp. 22–30.
  7. ^ Hunter, Adrian (2008). "A Brief Introduction to the Design of UBIFS" (PDF). p. 8.
