author		Sean Christopherson <sean.j.christopherson@intel.com>	2019-02-05 13:01:36 -0800
committer	Paolo Bonzini <pbonzini@redhat.com>	2019-02-20 22:48:48 +0100
commit		24efe61f696c7b44a66c7d6a41c181b31aa338fc (patch)
tree		a5700f18eff710efc0b399e02705b4a8f827b037 /arch/x86
parent		83cdb56864bcb1466b454f17fff47348ca7925a2 (diff)
KVM: x86/mmu: WARN if zapping an MMIO spte results in zapping children
Paolo expressed a concern that kvm_mmu_zap_mmio_sptes() could have a quadratic runtime[1], i.e. restarting the spte walk while zapping only MMIO sptes could result in re-walking large portions of the list over and over due to the non-MMIO sptes encountered before the restart not being removed.

At the time, the concern was legitimate as the walk was restarted when any spte was zapped.  But that is no longer the case as the walk is now restarted iff one or more children have been zapped, which is necessary because zapping children makes the active_mmu_pages list unstable.

Furthermore, it should be impossible for an MMIO spte to have children, i.e. zapping an MMIO spte should never result in zapping children.  In other words, kvm_mmu_zap_mmio_sptes() should never restart its walk, and so should always execute in linear time.  WARN if this assertion fails.

Although it should never be needed, leave the restart logic in place.  In normal operation, the cost is at worst an extra CMP+Jcc, and if for some reason the list does become unstable, not restarting would likely crash KVM, or worse, the kernel.

[1] https://patchwork.kernel.org/patch/10756589/#22452085

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
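[Editor's note: to make the quadratic-runtime concern concrete, below is a hypothetical stand-alone C demo -- not kernel code, all names invented -- contrasting a walk that restarts from the head after every removal with one that removes in place and continues.  On a list that alternates removable and non-removable nodes, the restarting walk re-visits the growing prefix of kept nodes on every pass (roughly n^2/8 visits for n nodes), while the in-place walk visits each node exactly once.]

/* Stand-alone demo of restart-on-removal vs. remove-and-continue.
 * Compile with: cc -std=c99 -o demo demo.c
 */
#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next;
	int mmio_cached;	/* 1 = this node should be removed */
};

static struct node *build_list(int n)
{
	struct node *head = NULL;

	/* Every other node is removable, mimicking a mixed list. */
	for (int i = 0; i < n; i++) {
		struct node *nd = malloc(sizeof(*nd));
		nd->mmio_cached = i & 1;
		nd->next = head;
		head = nd;
	}
	return head;
}

/* Restart the walk from the head after every removal (the behavior
 * Paolo was worried about): kept nodes are re-visited on every pass.
 */
static long walk_with_restart(struct node **head)
{
	long visited = 0;
restart:
	for (struct node **pp = head; *pp; pp = &(*pp)->next) {
		visited++;
		if ((*pp)->mmio_cached) {
			struct node *dead = *pp;
			*pp = dead->next;
			free(dead);
			goto restart;
		}
	}
	return visited;
}

/* Remove and continue in place (the safe-iteration behavior):
 * each node is visited exactly once.
 */
static long walk_linear(struct node **head)
{
	long visited = 0;

	for (struct node **pp = head; *pp; ) {
		visited++;
		if ((*pp)->mmio_cached) {
			struct node *dead = *pp;
			*pp = dead->next;
			free(dead);
		} else {
			pp = &(*pp)->next;
		}
	}
	return visited;
}

int main(void)
{
	struct node *a = build_list(10000);
	struct node *b = build_list(10000);

	printf("restart-on-removal visited %ld nodes\n", walk_with_restart(&a));
	printf("remove-and-continue visited %ld nodes\n", walk_linear(&b));
	return 0;
}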
Diffstat (limited to 'arch/x86')
-rw-r--r--	arch/x86/kvm/mmu.c	7
1 file changed, 5 insertions, 2 deletions
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 4b93fcdf08397..81618070367b0 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5871,8 +5871,11 @@ restart:
list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
if (!sp->mmio_cached)
continue;
- if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign) ||
- cond_resched_lock(&kvm->mmu_lock))
+ if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign)) {
+ WARN_ON_ONCE(1);
+ goto restart;
+ }
+ if (cond_resched_lock(&kvm->mmu_lock))
goto restart;
}
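[Editor's note: for context, here is a sketch of how the whole walk plausibly reads once the patch is applied.  Only the loop body is taken from the hunk above; the declarations, locking, and the kvm_mmu_commit_zap_page() call are assumptions about the surrounding mmu.c of this era, not part of the diff.]

static void kvm_mmu_zap_mmio_sptes(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);
	int ign;

	spin_lock(&kvm->mmu_lock);
restart:
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
		if (!sp->mmio_cached)
			continue;
		/* Zapping an MMIO spte should never zap children, so the
		 * list should still be stable.  WARN if that assumption is
		 * violated, but still restart rather than walk a list that
		 * may have been destabilized.
		 */
		if (__kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list, &ign)) {
			WARN_ON_ONCE(1);
			goto restart;
		}
		/* Dropping mmu_lock to reschedule also invalidates the walk. */
		if (cond_resched_lock(&kvm->mmu_lock))
			goto restart;
	}

	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);
}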