author		Sean Christopherson <sean.j.christopherson@intel.com>	2019-02-05 13:01:32 -0800
committer	Paolo Bonzini <pbonzini@redhat.com>	2019-02-20 22:48:46 +0100
commit		8a674adc11cd4cc59e51eaea6f0cc4b3d5710411
tree		1e4188486bc028bd6c05afecfe01e7d8548a81b9 /arch/x86
parent		7390de1e99a70895721165d0ccd4a6e16482960a
KVM: x86/mmu: skip over invalid root pages when zapping all sptes
...to guarantee forward progress. When zapped, root pages are marked invalid and moved to the head of the active pages list until they are explicitly freed. Theoretically, having unzappable root pages at the head of the list could prevent kvm_mmu_zap_all() from making forward progress were a future patch to add a loop restart after processing a page, e.g. to drop mmu_lock on contention.

Although kvm_mmu_prepare_zap_page() can theoretically take action on invalid pages, e.g. to zap unsync children, functionally it's not necessary (root pages will be re-zapped when freed) and, practically speaking, the odds of e.g. @unsync or @unsync_children becoming %true while zapping all pages are basically nil.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'arch/x86')
-rw-r--r--	arch/x86/kvm/mmu.c	5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5cdeb88850f8f..c79ad7f31fdb2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5853,9 +5853,12 @@ void kvm_mmu_zap_all(struct kvm *kvm)
 
 	spin_lock(&kvm->mmu_lock);
 restart:
-	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link)
+	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
+		if (sp->role.invalid && sp->root_count)
+			continue;
 		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
 			goto restart;
+	}
 
 	kvm_mmu_commit_zap_page(kvm, &invalid_list);
 	spin_unlock(&kvm->mmu_lock);
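
For context, a minimal sketch of how kvm_mmu_zap_all() reads once this hunk is applied; the declarations of sp, node and invalid_list are assumed from the unchanged parts of the function and are not shown in the diff above:

void kvm_mmu_zap_all(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);

	spin_lock(&kvm->mmu_lock);
restart:
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
		/*
		 * Invalid root pages sit at the head of the active list until
		 * their last root reference is put; skipping them keeps a
		 * restart of the walk from revisiting pages that can never be
		 * zapped again, guaranteeing forward progress.
		 */
		if (sp->role.invalid && sp->root_count)
			continue;
		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
			goto restart;
	}

	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);
}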