author     Sean Christopherson <sean.j.christopherson@intel.com>  2019-02-05 13:01:33 -0800
committer  Paolo Bonzini <pbonzini@redhat.com>                    2019-02-20 22:48:46 +0100
commit     5d6317ca4e61a3fa7528f832cd945c42fde8e67f (patch)
tree       6bbcdb18a12decd98ce8aa9f38d0d9f4575ad2b0 /arch/x86
parent     8a674adc11cd4cc59e51eaea6f0cc4b3d5710411 (diff)
KVM: x86/mmu: Voluntarily reschedule as needed when zapping all sptes
Call cond_resched_lock() when zapping all sptes to reschedule if needed or to release and reacquire mmu_lock in case of contention. There is no need to flush or zap when temporarily dropping mmu_lock as zapping all sptes is done only when the owning userspace VMM has exited or when the VM is being destroyed, i.e. there is no interplay with memslots or MMIO generations to worry about.

Be paranoid and restart the walk if mmu_lock is dropped to avoid any potential issues with consuming a stale iterator. The overhead in doing so is negligible as at worst there will be a few root shadow pages at the head of the list, i.e. the iterator is essentially the head of the list already.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'arch/x86')
-rw-r--r--  arch/x86/kvm/mmu.c  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c79ad7f31fdb2..fa153d771f475 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5856,7 +5856,8 @@ restart:
 	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
 		if (sp->role.invalid && sp->root_count)
 			continue;
-		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
+		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list) ||
+		    cond_resched_lock(&kvm->mmu_lock))
 			goto restart;
 	}
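
For context, below is a sketch of roughly how kvm_mmu_zap_all() reads with this hunk applied, reconstructed from the diff and the commit message rather than copied from the tree; surrounding details in mmu.c may differ slightly.

/*
 * Sketch (reconstructed, not verbatim): zap every shadow page, voluntarily
 * rescheduling or yielding mmu_lock between iterations.
 */
void kvm_mmu_zap_all(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);

	spin_lock(&kvm->mmu_lock);
restart:
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link) {
		/* Skip already-invalidated roots that are still in use. */
		if (sp->role.invalid && sp->root_count)
			continue;
		/*
		 * Restart the walk whenever pages were zapped or mmu_lock was
		 * dropped by cond_resched_lock(): the iterator may be stale.
		 * Restarting is cheap because at worst a few root shadow
		 * pages sit at the head of the list.
		 */
		if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list) ||
		    cond_resched_lock(&kvm->mmu_lock))
			goto restart;
	}

	kvm_mmu_commit_zap_page(kvm, &invalid_list);
	spin_unlock(&kvm->mmu_lock);
}

No flush or extra zap is needed when the lock is temporarily dropped, since this path runs only when the owning userspace VMM has exited or the VM is being destroyed.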