KVM: arm64: Avoid unnecessary unmap walk in MEM_RELINQUISH hypercall

If the earlier walk has already determined that no mapping is present,
attempting the unmap is pointless: skip it and avoid a second walk of
the guest stage-2 page table.

Signed-off-by: Keir Fraser <keirf@google.com>
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index e5b6110..b02f226 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -378,7 +378,7 @@ int __pkvm_guest_relinquish_to_host(struct pkvm_hyp_vcpu *vcpu,
ret = kvm_pgtable_walk(&vm->pgt, ipa, PAGE_SIZE, &walker);
/* Zap the guest stage2 pte. */
- if (!ret)
+ if (!ret && data.pa)
kvm_pgtable_stage2_unmap(&vm->pgt, ipa, PAGE_SIZE);
guest_unlock_component(vm);