08 Nov, 2021 (4 commits)
    • KVM: arm64: Fix host stage-2 finalization · 50a8d331
      Quentin Perret authored
      We currently walk the hypervisor stage-1 page-table towards the end of
      hyp init in nVHE protected mode and adjust the host page ownership
      attributes in its stage-2 in order to get a consistent state from both
      points of view. The walk is done on the entire hyp VA space, and expects
      to only ever find page-level mappings. While this expectation is
      reasonable in the half of hyp VA space that maps memory with a fixed
      offset (see the loop in pkvm_create_mappings_locked()), it can be
      incorrect in the other half where nothing prevents the usage of block
      mappings. For instance, on systems where memory is physically aligned at
      an address that happens to map to a PMD-aligned VA in the hyp_vmemmap,
      kvm_pgtable_hyp_map() will install block mappings when backing the
      hyp_vmemmap, which will later cause finalize_host_mappings() to fail.
      Furthermore, it should be noted that all pages backing the hyp_vmemmap
      are also mapped in the 'fixed offset range' of the hypervisor, which
      implies that finalize_host_mappings() will walk both aliases and update
      the host stage-2 attributes twice. The order in which this happens is
      unpredictable, though, since the hyp VA layout is highly dependent on
      the position of the idmap page, hence resulting in a fragile mess at
      best.
      
      In order to fix all of this, let's restrict the finalization walk to
      only cover memory regions in the 'fixed-offset range' of the hyp VA
      space and nothing else. This not only fixes a correctness issue, but
      will also result in a slightly faster hyp initialization overall (a sketch
      of the restricted walk follows below).
      
      Fixes: 2c50166c ("KVM: arm64: Mark host bss and rodata section as shared")
      Signed-off-by: Quentin Perret <qperret@google.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20211108154636.393384-1-qperret@google.com
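
      Below is a minimal sketch of the restricted walk described above: iterate
      only over the memory regions backing the fixed-offset range and walk each
      one at page granularity. The names used here (fix_host_ownership(),
      walk_fixed_offset_range(), hyp_fixed_offset_va(), hyp_memory[],
      hyp_memblock_nr) are illustrative assumptions about the shape of the fix,
      not the exact symbols touched by the patch.

      struct hyp_mem_region {
              unsigned long long base;        /* physical base of the region */
              unsigned long long size;        /* region size in bytes */
      };

      extern struct hyp_mem_region hyp_memory[];
      extern unsigned int hyp_memblock_nr;

      /* Hypothetical: walk one fixed-offset VA range at page granularity. */
      int walk_fixed_offset_range(unsigned long long start, unsigned long long end);

      /* Hypothetical: fixed-offset hyp VA for a physical address. */
      unsigned long long hyp_fixed_offset_va(unsigned long long phys);

      static int fix_host_ownership(void)
      {
              unsigned int i;
              int ret;

              for (i = 0; i < hyp_memblock_nr; i++) {
                      unsigned long long start = hyp_fixed_offset_va(hyp_memory[i].base);
                      unsigned long long end = start + hyp_memory[i].size;

                      /*
                       * Only the fixed-offset alias is visited, so each page is
                       * updated once and only page-level mappings are expected.
                       */
                      ret = walk_fixed_offset_range(start, end);
                      if (ret)
                              return ret;
              }

              return 0;
      }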
    • KVM: arm64: Change the return type of kvm_vcpu_preferred_target() · 08e873cb
      YueHaibing authored
      kvm_vcpu_preferred_target() always returns 0 because kvm_target_cpu()
      never returns a negative error code, so the int return value carries no
      information (see the signature sketch below).
      Signed-off-by: YueHaibing <yuehaibing@huawei.com>
      Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20211105011500.16280-1-yuehaibing@huawei.com
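
      A sketch of the signature change implied by the subject line. The original
      prototype is shown as it exists before the patch; changing the return type
      to void is an assumption based on "always returns 0", so the exact new
      type in the patch may differ.

      struct kvm_vcpu_init;

      #if 0   /* Before: callers must check a value that can only ever be 0. */
      int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
      #else   /* After: no return value, so callers drop the dead error check. */
      void kvm_vcpu_preferred_target(struct kvm_vcpu_init *init);
      #endif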
    • KVM: arm64: nvhe: Fix a non-kernel-doc comment · deacd669
      Randy Dunlap authored
      Do not use kernel-doc "/**" notation when the comment is not in
      kernel-doc format (a minimal before/after example is included below).
      
      Fixes this docs build warning:
      
      arch/arm64/kvm/hyp/nvhe/sys_regs.c:478: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst
          * Handler for protected VM restricted exceptions.
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Reported-by: kernel test robot <lkp@intel.com>
      Cc: Fuad Tabba <tabba@google.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: kvmarm@lists.cs.columbia.edu
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20211106032529.15057-1-rdunlap@infradead.org
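
      A minimal before/after illustration based on the warning quoted above; the
      exact diff may differ, but the idea is to keep the comment while dropping
      the "/**" kernel-doc opener so the kernel-doc parser ignores it.

      /* Before: "/**" marks this as kernel-doc and triggers the warning. */
      /**
       * Handler for protected VM restricted exceptions.
       */

      /* After: a plain C comment, which kernel-doc does not try to parse. */
      /*
       * Handler for protected VM restricted exceptions.
       */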
    • KVM: arm64: Extract ESR_ELx.EC only · 8bb08411
      Mark Rutland authored
      Since ARMv8.0 the upper 32 bits of ESR_ELx have been RES0, and recently
      some of the upper bits gained a meaning and can be non-zero. For
      example, when FEAT_LS64 is implemented, ESR_ELx[36:32] contain ISS2,
      which for an ST64BV or ST64BV0 can be non-zero. This can be seen in ARM
      DDI 0487G.b, page D13-3145, section D13.2.37.
      
      Generally, we must not rely on RES0 bits remaining zero in future, and
      when extracting ESR_ELx.EC we must mask out all other bits.
      
      All C code uses the ESR_ELx_EC() macro, which masks out the irrelevant
      bits, and therefore no alterations are required to C code to avoid
      consuming irrelevant bits.
      
      In a couple of places the KVM assembly extracts ESR_ELx.EC using LSR on
      an X register, and so could in theory consume previously RES0 bits. In
      both cases this is for comparison with EC values ESR_ELx_EC_HVC32 and
      ESR_ELx_EC_HVC64, for which the upper bits of ESR_ELx must currently be
      zero, but this could change in future.
      
      This patch adjusts the KVM vectors to use UBFX rather than LSR to
      extract ESR_ELx.EC, ensuring these are robust to future additions to
      ESR_ELx; a small standalone illustration of the difference follows below.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexandru Elisei <alexandru.elisei@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Acked-by: Will Deacon <will@kernel.org>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20211103110545.4613-1-mark.rutland@arm.com
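
      A small standalone illustration (not kernel code) of why the bare shift is
      fragile: with a bit set above ESR_ELx[31] (e.g. a future ISS2 bit), a
      plain right shift folds it into the result, while masking the 6-bit EC
      field first -- as the ESR_ELx_EC() macro does in C and as UBFX does in
      assembly -- gives the correct value. The macro names below are local to
      the example.

      #include <stdint.h>
      #include <stdio.h>

      #define EC_SHIFT        26      /* ESR_ELx.EC is bits [31:26] */
      #define EC_WIDTH        6
      #define EC_MASK         (((UINT64_C(1) << EC_WIDTH) - 1) << EC_SHIFT)

      int main(void)
      {
              /* EC = 0x16 (HVC from AArch64), plus a hypothetical bit above bit 31. */
              uint64_t esr = (UINT64_C(0x16) << EC_SHIFT) | (UINT64_C(1) << 32);

              uint64_t ec_lsr  = esr >> EC_SHIFT;                  /* like LSR: 0x56 */
              uint64_t ec_ubfx = (esr & EC_MASK) >> EC_SHIFT;      /* like UBFX: 0x16 */

              printf("shift only: %#llx, mask then shift: %#llx\n",
                     (unsigned long long)ec_lsr, (unsigned long long)ec_ubfx);
              return 0;
      }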