Commit f19f5c4

Sean Christopherson authored and torvalds committed
x86/speculation/l1tf: Exempt zeroed PTEs from inversion
It turns out that we should *not* invert all not-present mappings,
because the all zeroes case is obviously special.

clear_page() does not undergo the XOR logic to invert the address bits,
i.e. PTE, PMD and PUD entries that have not been individually written
will have val=0 and so will trigger __pte_needs_invert(). As a result,
{pte,pmd,pud}_pfn() will return the wrong PFN value, i.e. all ones
(adjusted by the max PFN mask) instead of zero. A zeroed entry is ok
because the page at physical address 0 is reserved early in boot
specifically to mitigate L1TF, so explicitly exempt them from the
inversion when reading the PFN.

Manifested as an unexpected mprotect(..., PROT_NONE) failure when called
on a VMA that has VM_PFNMAP and was mmap'd as something other than
PROT_NONE but never used. mprotect() sends the PROT_NONE request down
prot_none_walk(), which walks the PTEs to check the PFNs.
prot_none_pte_entry() gets the bogus PFN from pte_pfn() and returns
-EACCES because it thinks mprotect() is trying to adjust a high MMIO
address.

[ This is a very modified version of Sean's original patch, but all
  credit goes to Sean for doing this and also pointing out that
  sometimes the __pte_needs_invert() function only gets the protection
  bits, not the full eventual pte. But zero remains special even in
  just protection bits, so that's ok. - Linus ]

Fixes: f22cc87 ("x86/speculation/l1tf: Invert all not present mappings")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Acked-by: Andi Kleen <ak@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

1 parent b0e5c29 commit f19f5c4

File tree

1 file changed
+10 −1 lines changed

arch/x86/include/asm/pgtable-invert.h

Lines changed: 10 additions & 1 deletion

@@ -4,9 +4,18 @@

 #ifndef __ASSEMBLY__

+/*
+ * A clear pte value is special, and doesn't get inverted.
+ *
+ * Note that even users that only pass a pgprot_t (rather
+ * than a full pte) won't trigger the special zero case,
+ * because even PAGE_NONE has _PAGE_PROTNONE | _PAGE_ACCESSED
+ * set. So the all zero case really is limited to just the
+ * cleared page table entry case.
+ */
 static inline bool __pte_needs_invert(u64 val)
 {
-	return !(val & _PAGE_PRESENT);
+	return val && !(val & _PAGE_PRESENT);
 }

 /* Get a mask to xor with the page table entry to get the correct pfn. */
