Forked from torvalds/linux

Commit 63489f8

mjkravetz authored and torvalds committed

hugetlbfs: check for pgoff value overflow
A vma with vm_pgoff large enough to overflow a loff_t type when converted to a byte offset can be passed via the remap_file_pages system call.  The hugetlbfs mmap routine uses the byte offset to calculate reservations and file size.

A sequence such as:

  mmap(0x20a00000, 0x600000, 0, 0x66033, -1, 0);
  remap_file_pages(0x20a00000, 0x600000, 0, 0x20000000000000, 0);

will result in the following when task exits/file closed,

  kernel BUG at mm/hugetlb.c:749!
  Call Trace:
    hugetlbfs_evict_inode+0x2f/0x40
    evict+0xcb/0x190
    __dentry_kill+0xcb/0x150
    __fput+0x164/0x1e0
    task_work_run+0x84/0xa0
    exit_to_usermode_loop+0x7d/0x80
    do_syscall_64+0x18b/0x190
    entry_SYSCALL_64_after_hwframe+0x3d/0xa2

The overflowed pgoff value causes hugetlbfs to try to set up a mapping with a negative range (end < start) that leaves invalid state which causes the BUG.

The previous overflow fix to this code was incomplete and did not take the remap_file_pages system call into account.

[mike.kravetz@oracle.com: v3]
  Link: http://lkml.kernel.org/r/20180309002726.7248-1-mike.kravetz@oracle.com
[akpm@linux-foundation.org: include mmdebug.h]
[akpm@linux-foundation.org: fix -ve left shift count on sh]
Link: http://lkml.kernel.org/r/20180308210502.15952-1-mike.kravetz@oracle.com
Fixes: 045c7a3 ("hugetlbfs: fix offset overflow in hugetlbfs mmap")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reported-by: Nic Losby <blurbdust@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Yisheng Xie <xieyisheng1@huawei.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent 2e517d6   commit 63489f8
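For reference, the trigger sequence quoted in the commit message can be wrapped in a small userspace program along these lines. This is only a sketch of the reported trigger: the raw prot/flags values are taken verbatim from the report (0x66033 includes MAP_HUGETLB and MAP_ANONYMOUS among other bits), and on an unpatched kernel the BUG fires later, when the hugetlbfs inode is evicted at process exit.

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	/* Map a 6 MiB hugetlb-backed region; prot and flags are the raw
	 * values from the report. */
	void *addr = mmap((void *)0x20a00000, 0x600000, 0, 0x66033, -1, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* pgoff 0x20000000000000 (2^53 pages) overflows a loff_t once it is
	 * shifted by PAGE_SHIFT into a byte offset. */
	if (remap_file_pages(addr, 0x600000, 0, 0x20000000000000UL, 0))
		perror("remap_file_pages");

	/* On an unpatched kernel, the invalid reservation state set up above
	 * triggers the BUG when the inode is evicted at exit/close time. */
	return 0;
}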

2 files changed: +21 -3 lines changed

fs/hugetlbfs/inode.c

Lines changed: 14 additions & 3 deletions

@@ -108,6 +108,16 @@ static void huge_pagevec_release(struct pagevec *pvec)
 	pagevec_reinit(pvec);
 }
 
+/*
+ * Mask used when checking the page offset value passed in via system
+ * calls. This value will be converted to a loff_t which is signed.
+ * Therefore, we want to check the upper PAGE_SHIFT + 1 bits of the
+ * value. The extra bit (- 1 in the shift value) is to take the sign
+ * bit into account.
+ */
+#define PGOFF_LOFFT_MAX \
+	(((1UL << (PAGE_SHIFT + 1)) - 1) << (BITS_PER_LONG - (PAGE_SHIFT + 1)))
+
 static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	struct inode *inode = file_inode(file);
@@ -127,12 +137,13 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
 	vma->vm_ops = &hugetlb_vm_ops;
 
 	/*
-	 * Offset passed to mmap (before page shift) could have been
-	 * negative when represented as a (l)off_t.
+	 * page based offset in vm_pgoff could be sufficiently large to
+	 * overflow a (l)off_t when converted to byte offset.
 	 */
-	if (((loff_t)vma->vm_pgoff << PAGE_SHIFT) < 0)
+	if (vma->vm_pgoff & PGOFF_LOFFT_MAX)
 		return -EINVAL;
 
+	/* must be huge page aligned */
 	if (vma->vm_pgoff & (~huge_page_mask(h) >> PAGE_SHIFT))
 		return -EINVAL;
 
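As an aside, the standalone sketch below (not kernel code; it assumes a 64-bit system with 4 KiB base pages, i.e. PAGE_SHIFT = 12 and BITS_PER_LONG = 64) shows what PGOFF_LOFFT_MAX evaluates to and why the replaced sign check missed the reproducer's pgoff: shifting 2^53 left by PAGE_SHIFT wraps all the way around 64 bits to 0, which is not negative, while the new mask rejects any value whose upper PAGE_SHIFT + 1 bits are set before the shift ever happens.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT    12	/* assumed: 4 KiB base pages */
#define BITS_PER_LONG 64	/* assumed: 64-bit kernel */
#define PGOFF_LOFFT_MAX \
	(((1UL << (PAGE_SHIFT + 1)) - 1) << (BITS_PER_LONG - (PAGE_SHIFT + 1)))

int main(void)
{
	unsigned long pgoff = 0x20000000000000UL;	/* pgoff from the reproducer */

	/* Old check: shift first, then test the sign.  2^53 << 12 wraps to 0,
	 * so the bad offset is not seen as negative and slips through. */
	printf("old check rejects: %d\n", (int64_t)(pgoff << PAGE_SHIFT) < 0);

	/* New check: test the upper PAGE_SHIFT + 1 bits of pgoff directly, so
	 * any value that would overflow (or merely go negative) as a byte
	 * offset is rejected before the shift. */
	printf("mask             : 0x%lx\n", PGOFF_LOFFT_MAX);	/* 0xfff8000000000000 */
	printf("new check rejects: %d\n", (pgoff & PGOFF_LOFFT_MAX) != 0);
	return 0;
}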

mm/hugetlb.c

Lines changed: 7 additions & 0 deletions

@@ -18,6 +18,7 @@
 #include <linux/bootmem.h>
 #include <linux/sysfs.h>
 #include <linux/slab.h>
+#include <linux/mmdebug.h>
 #include <linux/sched/signal.h>
 #include <linux/rmap.h>
 #include <linux/string_helpers.h>
@@ -4374,6 +4375,12 @@ int hugetlb_reserve_pages(struct inode *inode,
 	struct resv_map *resv_map;
 	long gbl_reserve;
 
+	/* This should never happen */
+	if (from > to) {
+		VM_WARN(1, "%s called with a negative range\n", __func__);
+		return -EINVAL;
+	}
+
 	/*
 	 * Only apply hugepage reservation if asked. At fault time, an
 	 * attempt will be made for VM_NORESERVE to allocate a page
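The from > to check above is a defensive backstop. The sketch below is a rough, simplified illustration (not the actual kernel call chain; it assumes 4 KiB base pages and 2 MiB huge pages) of how the overflowed pgoff could turn into the "negative range" (end < start) mentioned in the commit message before hugetlbfs_file_mmap() learned to reject it.

#include <stdio.h>

#define PAGE_SHIFT  12	/* assumed base page size: 4 KiB */
#define HPAGE_SHIFT 21	/* assumed huge page size: 2 MiB */

int main(void)
{
	unsigned long vm_pgoff = 0x20000000000000UL;	/* pgoff from the reproducer */
	unsigned long vma_len  = 0x600000UL;		/* 6 MiB mapping length */

	/* The byte offset wraps around 64 bits to 0 ... */
	unsigned long byte_off = vm_pgoff << PAGE_SHIFT;
	/* ... so the length used for file size and reservation looks small ... */
	unsigned long len = vma_len + byte_off;

	/* ... while the starting huge page index derived from vm_pgoff is huge,
	 * giving a reservation range whose end lies far below its start. */
	unsigned long from = vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT);
	unsigned long to   = len >> HPAGE_SHIFT;

	printf("from = %lu, to = %lu, from > to: %d\n", from, to, from > to);
	return 0;
}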
