Hugetlbfs Reservation

Overview

Huge pages as described at HugeTLB Pages are typically preallocated for application use. These huge pages are instantiated in a task's address space at page fault time if the VMA indicates huge pages are to be used. If no huge page exists at page fault time, the task is sent a SIGBUS and often dies an unhappy death. Shortly after huge page support was added, it was determined that it would be better to detect a shortage of huge pages at mmap() time. The idea is that if there were not enough huge pages to cover the mapping, the mmap() would fail. This was first done with a simple check in the code at mmap() time to determine if there were enough free huge pages to cover the mapping. Like most things in the kernel, the code has evolved over time. However, the basic idea was to 'reserve' huge pages at mmap() time to ensure that huge pages would be available for page faults in that mapping. The description below attempts to describe how huge page reserve processing is done in the v4.10 kernel.
Audience

This description is primarily targeted at kernel developers who are modifying hugetlbfs code.

The Data Structures
- resv_huge_pages

  This is a global (per-hstate) count of reserved huge pages. Reserved huge pages are only available to the task which reserved them. Therefore, the number of huge pages generally available is computed as (free_huge_pages - resv_huge_pages).

- Reserve Map
A reserve map is described by the structure:
    struct resv_map {
            struct kref refs;
            spinlock_t lock;
            struct list_head regions;
            long adds_in_progress;
            struct list_head region_cache;
            long region_cache_count;
    };

There is one reserve map for each huge page mapping in the system. The regions list within the resv_map describes the regions within the mapping. A region is described as:
    struct file_region {
            struct list_head link;
            long from;
            long to;
    };

The 'from' and 'to' fields of the file region structure are huge page indices into the mapping. Depending on the type of mapping, a region in the resv_map may indicate reservations exist for the range, or reservations do not exist.
- Flags for MAP_PRIVATE Reservations

  These are stored in the bottom bits of the reservation map pointer.

    #define HPAGE_RESV_OWNER    (1UL << 0)

  Indicates this task is the owner of the reservations associated with the mapping.

    #define HPAGE_RESV_UNMAPPED (1UL << 1)

  Indicates the task originally mapping this range (and creating reserves) has unmapped a page from this task (the child) due to a failed COW.
- Page Flags

  The PagePrivate page flag is used to indicate that a huge page reservation must be restored when the huge page is freed. More details will be discussed in the "Freeing Huge Pages" section.
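Because a resv_map pointer is word aligned, its low bits are free to carry the HPAGE_RESV_* flags. The encoding can be sketched as follows (illustrative helper names only; the kernel's actual accessors live in mm/hugetlb.c):

```c
#include <assert.h>

#define HPAGE_RESV_OWNER    (1UL << 0)
#define HPAGE_RESV_UNMAPPED (1UL << 1)
#define HPAGE_RESV_MASK     (HPAGE_RESV_OWNER | HPAGE_RESV_UNMAPPED)

/* Sketch: pack a resv_map pointer and its flags into one word.
 * Works because the pointer is at least 4-byte aligned, leaving
 * the two low bits always zero. */
static unsigned long encode_resv(void *map, unsigned long flags)
{
	return (unsigned long)map | (flags & HPAGE_RESV_MASK);
}

/* Recover the pointer by masking off the flag bits. */
static void *decode_resv_map(unsigned long v)
{
	return (void *)(v & ~HPAGE_RESV_MASK);
}

/* Recover just the flag bits. */
static unsigned long decode_resv_flags(unsigned long v)
{
	return v & HPAGE_RESV_MASK;
}
```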
Reservation Map Location (Private or Shared)

A huge page mapping or segment is either private or shared. If private, it is typically only available to a single address space (task). If shared, it can be mapped into multiple address spaces (tasks). The location and semantics of the reservation map are significantly different for the two types of mappings. Location differences are:
- For private mappings, the reservation map hangs off the VMA structure. Specifically, vma->vm_private_data. This reserve map is created at the time the mapping (mmap(MAP_PRIVATE)) is created.
- For shared mappings, the reservation map hangs off the inode. Specifically, inode->i_mapping->private_data. Since shared mappings are always backed by files in the hugetlbfs filesystem, the hugetlbfs code ensures each inode contains a reservation map. As a result, the reservation map is allocated when the inode is created.
Creating Reservations

Reservations are created when a huge page backed shared memory segment is created (shmget(SHM_HUGETLB)) or a mapping is created via mmap(MAP_HUGETLB). These operations result in a call to the routine hugetlb_reserve_pages():

    int hugetlb_reserve_pages(struct inode *inode,
                              long from, long to,
                              struct vm_area_struct *vma,
                              vm_flags_t vm_flags)
The first thing hugetlb_reserve_pages() does is check if the NORESERVE flag was specified in either the shmget() or mmap() call. If NORESERVE was specified, then this routine returns immediately, as no reservations are desired.

The arguments 'from' and 'to' are huge page indices into the mapping or underlying file. For shmget(), 'from' is always 0 and 'to' corresponds to the length of the segment/mapping. For mmap(), the offset argument could be used to specify the offset into the underlying file. In such a case, the 'from' and 'to' arguments have been adjusted by this offset.
One of the big differences between PRIVATE and SHARED mappings is the way in which reservations are represented in the reservation map.

- For shared mappings, an entry in the reservation map indicates a reservation exists or did exist for the corresponding page. As reservations are consumed, the reservation map is not modified.
- For private mappings, the lack of an entry in the reservation map indicates a reservation exists for the corresponding page. As reservations are consumed, entries are added to the reservation map. Therefore, the reservation map can also be used to determine which reservations have been consumed.
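The 'opposite' entry semantics for the two mapping types can be condensed into a one-line predicate (an illustrative sketch, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the entry semantics described above.  Returns true if a
 * reservation currently exists for a page, given whether the
 * reservation map contains an entry for it.  For shared mappings an
 * entry means a reservation exists; for private mappings an entry
 * means the reservation has already been consumed. */
static bool reservation_exists(bool shared, bool entry_present)
{
	return shared ? entry_present : !entry_present;
}
```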
For private mappings, hugetlb_reserve_pages() creates the reservation map and hangs it off the VMA structure. In addition, the HPAGE_RESV_OWNER flag is set to indicate this VMA owns the reservations.

The reservation map is consulted to determine how many huge page reservations are needed for the current mapping/segment. For private mappings, this is always the value (to - from). However, for shared mappings it is possible that some reservations may already exist within the range (to - from). See the section Reservation Map Modifications for details on how this is accomplished.

The mapping may be associated with a subpool. If so, the subpool is consulted to ensure there is sufficient space for the mapping. It is possible that the subpool has set aside reservations that can be used for the mapping. See the section Subpool Reservations for more details.
After consulting the reservation map and subpool, the number of needed new reservations is known. The routine hugetlb_acct_memory() is called to check for and take the requested number of reservations. hugetlb_acct_memory() calls into routines that potentially allocate and adjust surplus page counts. However, within those routines the code is simply checking to ensure there are enough free huge pages to accommodate the reservation. If there are, the global reservation count resv_huge_pages is adjusted something like the following:
    if (resv_needed <= (free_huge_pages - resv_huge_pages))
            resv_huge_pages += resv_needed;
Note that the global lock hugetlb_lock is held when checking and adjusting these counters.
If there were enough free huge pages and the global count resv_huge_pages was adjusted, then the reservation map associated with the mapping is modified to reflect the reservations. In the case of a shared mapping, a file_region will exist that includes the range 'from' - 'to'. For private mappings, no modifications are made to the reservation map as lack of an entry indicates a reservation exists.

If hugetlb_reserve_pages() was successful, the global reservation count and reservation map associated with the mapping will be modified as required to ensure reservations exist for the range 'from' - 'to'.
Consuming Reservations/Allocating a Huge Page

Reservations are consumed when huge pages associated with the reservations are allocated and instantiated in the corresponding mapping. The allocation is performed within the routine alloc_huge_page():

    struct page *alloc_huge_page(struct vm_area_struct *vma,
                                 unsigned long addr,
                                 int avoid_reserve)

alloc_huge_page is passed a VMA pointer and a virtual address, so it can consult the reservation map to determine if a reservation exists. In addition, alloc_huge_page takes the argument avoid_reserve which indicates reserves should not be used even if it appears they have been set aside for the specified address. The avoid_reserve argument is most often used in the case of Copy on Write and Page Migration where additional copies of an existing page are being allocated.
The helper routine vma_needs_reservation() is called to determine if a reservation exists for the address within the mapping (vma). See the section Reservation Map Helper Routines for detailed information on what this routine does. The value returned from vma_needs_reservation() is generally 0 or 1: 0 if a reservation exists for the address, 1 if no reservation exists. If a reservation does not exist, and there is a subpool associated with the mapping, the subpool is consulted to determine if it contains reservations. If the subpool contains reservations, one can be used for this allocation. However, in every case the avoid_reserve argument overrides the use of a reservation for the allocation. After determining whether a reservation exists and can be used for the allocation, the routine dequeue_huge_page_vma() is called. This routine takes two arguments related to reservations:
- avoid_reserve, this is the same value/argument passed to alloc_huge_page()
- chg, even though this argument is of type long only the values 0 or 1 are passed to dequeue_huge_page_vma. If the value is 0, it indicates a reservation exists (see the section Reservations and Memory Policy for possible issues). If the value is 1, it indicates a reservation does not exist and the page must be taken from the global free pool if possible.
The free lists associated with the memory policy of the VMA are searched for a free page. If a page is found, the value free_huge_pages is decremented when the page is removed from the free list. If there was a reservation associated with the page, the following adjustments are made:

    SetPagePrivate(page);   /* Indicates allocating this page consumed
                             * a reservation, and if an error is
                             * encountered such that the page must be
                             * freed, the reservation will be restored. */
    resv_huge_pages--;      /* Decrement the global reservation count */
Note, if no huge page can be found that satisfies the VMA's memory policy an attempt will be made to allocate one using the buddy allocator. This brings up the issue of surplus huge pages and overcommit which is beyond the scope of reservations. Even if a surplus page is allocated, the same reservation based adjustments as above will be made: SetPagePrivate(page) and resv_huge_pages--.
After obtaining a new huge page, (page)->private is set to the value of the subpool associated with the page if it exists. This will be used for subpool accounting when the page is freed.

The routine vma_commit_reservation() is then called to adjust the reserve map based on the consumption of the reservation. In general, this involves ensuring the page is represented within a file_region structure of the region map. For shared mappings where the reservation was present, an entry in the reserve map already existed so no change is made. However, if there was no reservation in a shared mapping or this was a private mapping a new entry must be created.

It is possible that the reserve map could have been changed between the call to vma_needs_reservation() at the beginning of alloc_huge_page() and the call to vma_commit_reservation() after the page was allocated. This would be possible if hugetlb_reserve_pages was called for the same page in a shared mapping. In such cases, the reservation count and subpool free page count will be off by one. This rare condition can be identified by comparing the return value from vma_needs_reservation and vma_commit_reservation. If such a race is detected, the subpool and global reserve counts are adjusted to compensate. See the section Reservation Map Helper Routines for more information on these routines.
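The race check can be sketched with the helper return values described above (an illustrative model only; the real check compares the two return values inside alloc_huge_page()):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the race detection described above.  needs models the
 * return of vma_needs_reservation() before allocation; commit models
 * the return of vma_commit_reservation() afterwards.  If needs said a
 * reservation existed (0) but commit had to add a new entry (1), the
 * reserve map changed underneath us and the global and subpool counts
 * must be compensated by one page. */
static bool reservation_race_detected(long needs, long commit)
{
	return needs == 0 && commit == 1;
}
```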
Instantiate Huge Pages

After huge page allocation, the page is typically added to the page tables of the allocating task. Before this, pages in a shared mapping are added to the page cache and pages in private mappings are added to an anonymous reverse mapping. In both cases, the PagePrivate flag is cleared. Therefore, when a huge page that has been instantiated is freed no adjustment is made to the global reservation count (resv_huge_pages).
Freeing Huge Pages

Huge page freeing is performed by the routine free_huge_page(). This routine is the destructor for hugetlbfs compound pages. As a result, it is only passed a pointer to the page struct. When a huge page is freed, reservation accounting may need to be performed. This would be the case if the page was associated with a subpool that contained reserves, or the page is being freed on an error path where a global reserve count must be restored.

The page->private field points to any subpool associated with the page. If the PagePrivate flag is set, it indicates the global reserve count should be adjusted (see the section Consuming Reservations/Allocating a Huge Page for information on how these are set).
The routine first calls hugepage_subpool_put_pages() for the page. If this routine returns a value of 0 (which does not equal the value passed, 1) it indicates reserves are associated with the subpool, and this newly freed page must be used to keep the number of subpool reserves above the minimum size. Therefore, the global resv_huge_pages counter is incremented in this case.

If the PagePrivate flag was set in the page, the global resv_huge_pages counter will always be incremented.
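A toy model of this accounting follows; note the assumption that the two conditions fold into a single restore decision, so the counter is incremented at most once per freed page (illustrative only, not the kernel's code):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy global reservation counter for the sketch. */
static long resv_huge_pages_sketch;

/* Sketch of the reservation accounting on the free path.
 * subpool_put_ret models the return of hugepage_subpool_put_pages():
 * 0 means the subpool kept the page to stay above its minimum size.
 * page_private models the PagePrivate flag.  Either condition restores
 * one global reservation. */
static void free_huge_page_accounting(long subpool_put_ret, bool page_private)
{
	bool restore_reserve = page_private;

	/* Subpool absorbed the page into its reserves. */
	if (subpool_put_ret == 0)
		restore_reserve = true;

	if (restore_reserve)
		resv_huge_pages_sketch++;
}
```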
Subpool Reservations

There is a struct hstate associated with each huge page size. The hstate tracks all huge pages of the specified size. A subpool represents a subset of pages within a hstate that is associated with a mounted hugetlbfs filesystem.

When a hugetlbfs filesystem is mounted a min_size option can be specified which indicates the minimum number of huge pages required by the filesystem. If this option is specified, the number of huge pages corresponding to min_size are reserved for use by the filesystem. This number is tracked in the min_hpages field of a struct hugepage_subpool. At mount time, hugetlb_acct_memory(min_hpages) is called to reserve the specified number of huge pages. If they can not be reserved, the mount fails.
The routines hugepage_subpool_get/put_pages() are called when pages are obtained from or released back to a subpool. They perform all subpool accounting, and track any reservations associated with the subpool. hugepage_subpool_get/put_pages are passed the number of huge pages by which to adjust the subpool 'used page' count (down for get, up for put). Normally, they return the same value that was passed or an error if not enough pages exist in the subpool.

However, if reserves are associated with the subpool a return value less than the passed value may be returned. This return value indicates the number of additional global pool adjustments which must be made. For example, suppose a subpool contains 3 reserved huge pages and someone asks for 5. The 3 reserved pages associated with the subpool can be used to satisfy part of the request. But, 2 pages must be obtained from the global pools. To relay this information to the caller, the value 2 is returned. The caller is then responsible for attempting to obtain the additional two pages from the global pools.
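The example above (3 subpool reserves, a request for 5 pages) can be modelled with a minimal sketch of the get path (hypothetical struct; the real routine also enforces the subpool's maximum size and handles error cases):

```c
#include <assert.h>

/* Toy subpool: only the reserved-page count matters for this sketch. */
struct subpool_sketch {
	long rsv_hpages;   /* reserved pages still held by the subpool */
};

/* Sketch of the reserve handling in hugepage_subpool_get_pages():
 * consume as many subpool reserves as possible and return how many
 * pages the caller must still obtain from the global pools. */
static long subpool_get_pages_sketch(struct subpool_sketch *sp, long delta)
{
	long from_reserves = delta < sp->rsv_hpages ? delta : sp->rsv_hpages;

	sp->rsv_hpages -= from_reserves;
	return delta - from_reserves;
}
```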
COW and Reservations

Since shared mappings all point to and use the same underlying pages, the biggest reservation concern for COW is private mappings. In this case, two tasks can be pointing at the same previously allocated page. One task attempts to write to the page, so a new page must be allocated so that each task points to its own page.
When the page was originally allocated, the reservation for that page was consumed. When an attempt to allocate a new page is made as a result of COW, it is possible that no huge pages are free and the allocation will fail.
When the private mapping was originally created, the owner of the mapping was noted by setting the HPAGE_RESV_OWNER bit in the pointer to the reservation map of the owner. Since the owner created the mapping, the owner owns all the reservations associated with the mapping. Therefore, when a write fault occurs and there is no page available, different action is taken for the owner and non-owner of the reservation.

In the case where the faulting task is not the owner, the fault will fail and the task will typically receive a SIGBUS.

If the owner is the faulting task, we want it to succeed since it owned the original reservation. To accomplish this, the page is unmapped from the non-owning task. In this way, the only reference is from the owning task. In addition, the HPAGE_RESV_UNMAPPED bit is set in the reservation map pointer of the non-owning task. The non-owning task may receive a SIGBUS if it later faults on a non-present page. But, the original owner of the mapping/reservation will behave as expected.
Reservation Map Modifications

The following low level routines are used to make modifications to a reservation map. Typically, these routines are not called directly. Rather, a reservation map helper routine is called which calls one of these low level routines. These low level routines are fairly well documented in the source code (mm/hugetlb.c). These routines are:

    long region_chg(struct resv_map *resv, long f, long t);
    long region_add(struct resv_map *resv, long f, long t);
    void region_abort(struct resv_map *resv, long f, long t);
    long region_count(struct resv_map *resv, long f, long t);
Operations on the reservation map typically involve two operations:

region_chg() is called to examine the reserve map and determine how many pages in the specified range [f, t) are NOT currently represented.

The calling code performs global checks and allocations to determine if there are enough huge pages for the operation to succeed.

- If the operation can succeed, region_add() is called to actually modify the reservation map for the same range [f, t) previously passed to region_chg().
- If the operation can not succeed, region_abort is called for the same range [f, t) to abort the operation.
Note that this is a two step process where region_add() and region_abort() are guaranteed to succeed after a prior call to region_chg() for the same range. region_chg() is responsible for pre-allocating any data structures necessary to ensure the subsequent operations (specifically region_add()) will succeed.
As mentioned above, region_chg() determines the number of pages in the range which are NOT currently represented in the map. This number is returned to the caller. region_add() returns the number of pages in the range added to the map. In most cases, the return value of region_add() is the same as the return value of region_chg(). However, in the case of shared mappings it is possible for changes to the reservation map to be made between the calls to region_chg() and region_add(). In this case, the return value of region_add() will not match the return value of region_chg(). It is likely that in such cases global counts and subpool accounting will be incorrect and in need of adjustment. It is the responsibility of the caller to check for this condition and make the appropriate adjustments.
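The two-step protocol can be illustrated with a toy map that uses a flat per-page array instead of the kernel's file_region list (illustration only):

```c
#include <assert.h>
#include <stdbool.h>

#define SKETCH_PAGES 64
/* Toy stand-in for the region list: covered[i] true means huge page
 * index i is represented in the map. */
static bool covered[SKETCH_PAGES];

/* Like region_chg(): count pages in [f, t) NOT yet in the map. */
static long region_chg_sketch(long f, long t)
{
	long i, missing = 0;

	for (i = f; i < t; i++)
		if (!covered[i])
			missing++;
	return missing;
}

/* Like region_add(): add [f, t) to the map, return pages added.
 * Matches region_chg_sketch() unless the map changed in between. */
static long region_add_sketch(long f, long t)
{
	long i, added = 0;

	for (i = f; i < t; i++)
		if (!covered[i]) {
			covered[i] = true;
			added++;
		}
	return added;
}
```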
The routine region_del() is called to remove regions from a reservation map. It is typically called in the following situations:

- When a file in the hugetlbfs filesystem is being removed, the inode will be released and the reservation map freed. Before freeing the reservation map, all the individual file_region structures must be freed. In this case region_del is passed the range [0, LONG_MAX).
- When a hugetlbfs file is being truncated. In this case, all allocated pages after the new file size must be freed. In addition, any file_region entries in the reservation map past the new end of file must be deleted. In this case, region_del is passed the range [new_end_of_file, LONG_MAX).
- When a hole is being punched in a hugetlbfs file. In this case, huge pages are removed from the middle of the file one at a time. As the pages are removed, region_del() is called to remove the corresponding entry from the reservation map. In this case, region_del is passed the range [page_idx, page_idx + 1).
In every case, region_del() will return the number of pages removed from the reservation map. In VERY rare cases, region_del() can fail. This can only happen in the hole punch case where it has to split an existing file_region entry and can not allocate a new structure. In this error case, region_del() will return -ENOMEM. The problem here is that the reservation map will indicate that there is a reservation for the page. However, the subpool and global reservation counts will not reflect the reservation. To handle this situation, the routine hugetlb_fix_reserve_counts() is called to adjust the counters so that they correspond with the reservation map entry that could not be deleted.
region_count() is called when unmapping a private huge page mapping. In private mappings, the lack of an entry in the reservation map indicates that a reservation exists. Therefore, by counting the number of entries in the reservation map we know how many reservations were consumed and how many are outstanding (outstanding = (end - start) - region_count(resv, start, end)). Since the mapping is going away, the subpool and global reservation counts are decremented by the number of outstanding reservations.
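The outstanding-reservation arithmetic can be checked against the same kind of toy map, where entries model consumed reservations in a private mapping (illustrative only):

```c
#include <assert.h>
#include <stdbool.h>

#define SKETCH_PAGES 64
/* Toy private-mapping map: an entry means the reservation for that
 * page has been consumed. */
static bool consumed[SKETCH_PAGES];

/* Like region_count(): pages in [start, end) present in the map. */
static long region_count_sketch(long start, long end)
{
	long i, n = 0;

	for (i = start; i < end; i++)
		if (consumed[i])
			n++;
	return n;
}

/* Reservations still unconsumed when a private mapping goes away;
 * this is the amount by which the subpool and global reservation
 * counts are decremented. */
static long outstanding_sketch(long start, long end)
{
	return (end - start) - region_count_sketch(start, end);
}
```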
Reservation Map Helper Routines

Several helper routines exist to query and modify the reservation maps. These routines are only interested in reservations for a specific huge page, so they just pass in an address instead of a range. In addition, they pass in the associated VMA. From the VMA, the type of mapping (private or shared) and the location of the reservation map (inode or VMA) can be determined. These routines simply call the underlying routines described in the section Reservation Map Modifications. However, they do take into account the 'opposite' meaning of reservation map entries for private and shared mappings and hide this detail from the caller:
    long vma_needs_reservation(struct hstate *h,
                               struct vm_area_struct *vma,
                               unsigned long addr)

This routine calls region_chg() for the specified page. If no reservation exists, 1 is returned. If a reservation exists, 0 is returned:
    long vma_commit_reservation(struct hstate *h,
                                struct vm_area_struct *vma,
                                unsigned long addr)

This calls region_add() for the specified page. As in the case of region_chg and region_add, this routine is to be called after a previous call to vma_needs_reservation. It will add a reservation entry for the page. It returns 1 if the reservation was added and 0 if not. The return value should be compared with the return value of the previous call to vma_needs_reservation. An unexpected difference indicates the reservation map was modified between calls:
    void vma_end_reservation(struct hstate *h,
                             struct vm_area_struct *vma,
                             unsigned long addr)

This calls region_abort() for the specified page. As in the case of region_chg and region_abort, this routine is to be called after a previous call to vma_needs_reservation. It will abort/end the in progress reservation add operation:
    long vma_add_reservation(struct hstate *h,
                             struct vm_area_struct *vma,
                             unsigned long addr)

This is a special wrapper routine to help facilitate reservation cleanup on error paths. It is only called from the routine restore_reserve_on_error(). This routine is used in conjunction with vma_needs_reservation in an attempt to add a reservation to the reservation map. It takes into account the different reservation map semantics for private and shared mappings. Hence, region_add is called for shared mappings (as an entry present in the map indicates a reservation), and region_del is called for private mappings (as the absence of an entry in the map indicates a reservation). See the section Reservation Cleanup in Error Paths for more information on what needs to be done on error paths.
Reservation Cleanup in Error Paths

As mentioned in the section Reservation Map Helper Routines, reservation map modifications are performed in two steps. First vma_needs_reservation is called before a page is allocated. If the allocation is successful, then vma_commit_reservation is called. If not, vma_end_reservation is called. Global and subpool reservation counts are adjusted based on success or failure of the operation and all is well.

Additionally, after a huge page is instantiated the PagePrivate flag is cleared so that accounting when the page is ultimately freed is correct.
However, there are several instances where errors are encountered after a huge page is allocated but before it is instantiated. In this case, the page allocation has consumed the reservation and made the appropriate subpool, reservation map and global count adjustments. If the page is freed at this time (before instantiation and clearing of PagePrivate), then free_huge_page will increment the global reservation count. However, the reservation map indicates the reservation was consumed. This resulting inconsistent state will cause the 'leak' of a reserved huge page. The global reserve count will be higher than it should be and prevent allocation of a pre-allocated page.
The routine restore_reserve_on_error() attempts to handle this situation. It is fairly well documented. The intention of this routine is to restore the reservation map to the way it was before the page allocation. In this way, the state of the reservation map will correspond to the global reservation count after the page is freed.

The routine restore_reserve_on_error itself may encounter errors while attempting to restore the reservation map entry. In this case, it will simply clear the PagePrivate flag of the page. In this way, the global reserve count will not be incremented when the page is freed. However, the reservation map will continue to look as though the reservation was consumed. A page can still be allocated for the address, but it will not use a reserved page as originally intended.
There is some code (most notably userfaultfd) which can not call restore_reserve_on_error. In this case, it simply clears the PagePrivate flag so that a reservation will not be leaked when the huge page is freed.
Reservations and Memory Policy

Per-node huge page lists existed in struct hstate when git was first used to manage Linux code. The concept of reservations was added some time later. When reservations were added, no attempt was made to take memory policy into account. While cpusets are not exactly the same as memory policy, this comment in hugetlb_acct_memory sums up the interaction between reservations and cpusets/memory policy:
    /*
     * When cpuset is configured, it breaks the strict hugetlb page
     * reservation as the accounting is done on a global variable. Such
     * reservation is completely rubbish in the presence of cpuset because
     * the reservation is not checked against page availability for the
     * current cpuset. Application can still potentially OOM'ed by kernel
     * with lack of free htlb page in cpuset that the task is in.
     * Attempt to enforce strict accounting with cpuset is almost
     * impossible (or too ugly) because cpuset is too fluid that
     * task or memory node can be dynamically moved between cpusets.
     *
     * The change of semantics for shared hugetlb mapping with cpuset is
     * undesirable. However, in order to preserve some of the semantics,
     * we fall back to check against current free page availability as
     * a best attempt and hopefully to minimize the impact of changing
     * semantics that cpuset has.
     */
Huge page reservations were added to prevent unexpected page allocation failures (OOM) at page fault time. However, if an application makes use of cpusets or memory policy there is no guarantee that huge pages will be available on the required nodes. This is true even if there are a sufficient number of global reservations.
Hugetlbfs regression testing

The most complete set of hugetlb tests are in the libhugetlbfs repository. If you modify any hugetlb related code, use the libhugetlbfs test suite to check for regressions. In addition, if you add any new hugetlb functionality, please add appropriate tests to libhugetlbfs.

-- Mike Kravetz, 7 April 2017