author		Kai Huang <kai.huang@linux.intel.com>	2019-01-15 17:28:40 +1300
committer	Paolo Bonzini <pbonzini@redhat.com>	2019-02-20 22:48:25 +0100
commit		8acc0993e3f9a04a407ff1507dd329455f340121 (patch)
tree		53ac2de1ea6674679a16f8e72634c542fc8f9b0e /arch/x86
parent		e0dfacbfe91ad7947f0c44829c0358ac2e17d3c6 (diff)
download	linux-0-day-8acc0993e3f9a04a407ff1507dd329455f340121.tar.gz
		linux-0-day-8acc0993e3f9a04a407ff1507dd329455f340121.tar.xz
kvm, x86, mmu: Use kernel generic dynamic physical address mask
AMD's SME/SEV is no longer the only feature that reduces the supported physical address bits: Intel's Multi-key Total Memory Encryption (MKTME) repurposes the high bits of the physical address as a keyID, effectively shrinking the supported physical address bits as well. To cover both cases (and potential similar future features), kernel MM introduced a generic dynamic physical address mask to replace the hard-coded __PHYSICAL_MASK in commit 94d49eb30e854 ("x86/mm: Decouple dynamic __PHYSICAL_MASK from AMD SME"). KVM should use it too.

Change PT64_BASE_ADDR_MASK to use the kernel's dynamic physical address mask when it is enabled, instead of __sme_clr. PT64_DIR_BASE_ADDR_MASK is also deleted since it is not used at all.

Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
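A minimal user-space sketch of the idea (the 52-bit limit and the 6-bit keyID width are illustrative assumptions, not values from this commit or from hardware enumeration):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* Start from the full 52-bit physical address space. */
	uint64_t physical_mask = (1ULL << 52) - 1;

	/* Suppose MKTME repurposes the top 6 bits as keyID bits:
	 * only 46 bits of physical address remain usable, so the
	 * mask is trimmed accordingly (what x86/mm does for the
	 * real physical_mask behind CONFIG_DYNAMIC_PHYSICAL_MASK). */
	int keyid_bits = 6;
	physical_mask &= (1ULL << (52 - keyid_bits)) - 1;

	printf("physical_mask = 0x%016llx\n",
	       (unsigned long long)physical_mask);
	return 0;
}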
Diffstat (limited to 'arch/x86')
-rw-r--r--	arch/x86/kvm/mmu.c | 8
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index da9c42349b1f8..45eb988aa4119 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -109,9 +109,11 @@ module_param(dbg, bool, 0644);
(((address) >> PT32_LEVEL_SHIFT(level)) & ((1 << PT32_LEVEL_BITS) - 1))
-#define PT64_BASE_ADDR_MASK __sme_clr((((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1)))
-#define PT64_DIR_BASE_ADDR_MASK \
- (PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + PT64_LEVEL_BITS)) - 1))
+#ifdef CONFIG_DYNAMIC_PHYSICAL_MASK
+#define PT64_BASE_ADDR_MASK (physical_mask & ~(u64)(PAGE_SIZE-1))
+#else
+#define PT64_BASE_ADDR_MASK (((1ULL << 52) - 1) & ~(u64)(PAGE_SIZE-1))
+#endif
#define PT64_LVL_ADDR_MASK(level) \
(PT64_BASE_ADDR_MASK & ~((1ULL << (PAGE_SHIFT + (((level) - 1) \
* PT64_LEVEL_BITS))) - 1))
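To see what the new PT64_BASE_ADDR_MASK computes, here is a small user-space sketch applying the same expression, physical_mask & ~(u64)(PAGE_SIZE-1), to a page-table entry; the 46-bit mask, the PAGE_SIZE value, and the sample PTE are illustrative assumptions, not taken from the kernel:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL	/* assumed 4 KiB pages */

int main(void)
{
	/* Assume 46 usable physical address bits, e.g. after MKTME
	 * keyID bits have been carved off the top. */
	uint64_t physical_mask = (1ULL << 46) - 1;
	uint64_t base_addr_mask = physical_mask & ~(uint64_t)(PAGE_SIZE - 1);

	/* Sample PTE: low bits carry attributes (present, writable, ...),
	 * high bits may carry a keyID; neither belongs to the frame address. */
	uint64_t pte = 0x03f0000123456867ULL;

	printf("frame address = 0x%016llx\n",
	       (unsigned long long)(pte & base_addr_mask));
	return 0;
}

The masking strips both the low attribute bits (below PAGE_SIZE) and everything above bit 45, which is why the old __sme_clr() call is no longer needed: the dynamic mask already excludes any repurposed high bits.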