- 25 Jun, 2021 1 commit
-
-
Marc Zyngier authored
Last-minute fix for MTE, making sure the pages are flagged as MTE before they are released.

* kvm-arm64/mmu/mte:
  KVM: arm64: Set the MTE tag bit before releasing the page

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 24 Jun, 2021 1 commit
-
-
Marc Zyngier authored
Setting a page flag without holding a reference to the page is living dangerously. In the tag-writing path, we drop the reference to the page by calling kvm_release_pfn_dirty(), and only then set the PG_mte_tagged bit. It would be safer to do it the other way round.

Fixes: f0376edb ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/87k0mjidwb.wl-maz@kernel.org
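To make the ordering concrete, here is a minimal sketch of the reordering described above; it is illustrative rather than the exact hunk from the patch:

  /* before (unsafe): the reference is dropped before the flag is set */
  kvm_release_pfn_dirty(pfn);
  set_bit(PG_mte_tagged, &page->flags);

  /* after: set the flag while a reference to the page is still held */
  set_bit(PG_mte_tagged, &page->flags);
  kvm_release_pfn_dirty(pfn);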
-
- 22 Jun, 2021 14 commits
-
-
Marc Zyngier authored
KVM/arm64 support for MTE, courtesy of Steven Price. It allows the guest to use memory tagging, and offers a new userspace API to save/restore the tags.

* kvm-arm64/mmu/mte:
  KVM: arm64: Document MTE capability and ioctl
  KVM: arm64: Add ioctl to fetch/store tags in a guest
  KVM: arm64: Expose KVM_ARM_CAP_MTE
  KVM: arm64: Save/restore MTE registers
  KVM: arm64: Introduce MTE VM feature
  arm64: mte: Sync tags for pages where PTE is untagged

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Steven Price authored
A new capability (KVM_CAP_ARM_MTE) identifies that the kernel supports granting a guest access to the tags, and provides a mechanism for the VMM to enable it. A new ioctl (KVM_ARM_MTE_COPY_TAGS) provides a simple way for a VMM to access the tags of a guest without having to maintain a PROT_MTE mapping in userspace. The above capability gates access to the ioctl.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210621111716.37157-7-steven.price@arm.com
-
Steven Price authored
The VMM may not wish to have its own mapping of guest memory mapped with PROT_MTE because this causes problems if the VMM has tag checking enabled (the guest controls the tags in physical RAM and it's unlikely the tags are correct for the VMM). Instead add a new ioctl which allows the VMM to easily read/write the tags from guest memory, allowing the VMM's mapping to be non-PROT_MTE while the VMM can still read/write the tags for the purpose of migration.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210621111716.37157-6-steven.price@arm.com
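As a rough illustration of how a VMM might drive this ioctl when saving tags for migration, here is a sketch assuming the kvm_arm_copy_mte_tags layout and the KVM_ARM_TAGS_FROM_GUEST flag described in the series (one tag byte per 16-byte MTE granule):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Sketch only: read the tags backing [ipa, ipa + len) into 'tags'.
   * 'ipa' and 'len' are expected to be page-aligned, and 'tags' should
   * have room for len / 16 bytes. */
  static long read_guest_tags(int vm_fd, __u64 ipa, __u64 len, void *tags)
  {
          struct kvm_arm_copy_mte_tags copy = {
                  .guest_ipa = ipa,
                  .length    = len,
                  .addr      = tags,
                  .flags     = KVM_ARM_TAGS_FROM_GUEST,
          };

          /* returns the number of bytes copied, or a negative error */
          return ioctl(vm_fd, KVM_ARM_MTE_COPY_TAGS, &copy);
  }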
-
Steven Price authored
It's now safe for the VMM to enable MTE in a guest, so expose the capability to user space.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210621111716.37157-5-steven.price@arm.com
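For context, a minimal sketch of how a VMM could probe and enable the capability, assuming a uapi header that defines KVM_CAP_ARM_MTE; per the series, enabling is done on the VM fd before any vCPUs are created:

  #include <linux/kvm.h>
  #include <string.h>
  #include <sys/ioctl.h>

  /* Sketch only: probe for MTE support and enable it on the VM fd. */
  static int enable_mte(int kvm_fd, int vm_fd)
  {
          struct kvm_enable_cap cap;

          if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_MTE) <= 0)
                  return -1;      /* no MTE support in this kernel/hardware */

          memset(&cap, 0, sizeof(cap));
          cap.cap = KVM_CAP_ARM_MTE;
          /* enable on the VM, before any vCPUs are created */
          return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
  }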
-
Steven Price authored
Define the new system registers that MTE introduces and context switch them. The MTE feature is still hidden from the ID register as it isn't supported in a VM yet.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210621111716.37157-4-steven.price@arm.com
-
Steven Price authored
Add a new VM feature 'KVM_ARM_CAP_MTE' which enables memory tagging for a VM. This will expose the feature to the guest and automatically tag memory pages touched by the VM as PG_mte_tagged (and clear the tag storage) to ensure that the guest cannot see stale tags, and so that the tags are correctly saved/restored across swap. Actually exposing the new capability to user space happens in a later patch.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
[maz: move VM_SHARED sampling into the critical section]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210621111716.37157-3-steven.price@arm.com
-
Steven Price authored
A KVM guest could store tags in a page even if the VMM hasn't mapped the page with PROT_MTE. So when restoring pages from swap we will need to check to see if there are any saved tags even if !pte_tagged(). However don't check pages for which pte_access_permitted() returns false as these will not have been swapped out.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210621111716.37157-2-steven.price@arm.com
-
Marc Zyngier authored
Selftest updates from Andrew Jones, fixing the sysreg list expectations by dealing with multiple configurations, such as with or without a PMU.

* kvm-arm64/selftest/sysreg-list-fix:
  KVM: arm64: Update MAINTAINERS to include selftests
  KVM: arm64: selftests: get-reg-list: Split base and pmu registers
  KVM: arm64: selftests: get-reg-list: Remove get-reg-list-sve
  KVM: arm64: selftests: get-reg-list: Provide config selection option
  KVM: arm64: selftests: get-reg-list: Prepare to run multiple configs at once
  KVM: arm64: selftests: get-reg-list: Introduce vcpu configs
-
Marc Zyngier authored
As the KVM/arm64 selftests are routed via the kvmarm tree, add the relevant references to the MAINTAINERS file.

Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210622070732.zod7gaqhqo344vg6@gator
-
Andrew Jones authored
Since KVM commit 11663111 ("KVM: arm64: Hide PMU registers from userspace when not available") the get-reg-list* tests have been failing with

  ...
  There are 74 missing registers.
  The following lines are missing registers:
  ...

where the 74 missing registers are all PMU registers. This isn't a KVM bug found by the selftest, even though a KVM userspace that wasn't setting the KVM_ARM_VCPU_PMU_V3 vcpu flag, but was still expecting the PMU registers to appear in the reg-list, would suddenly no longer have its expectations met. Those expectations were wrong, though, so that KVM userspace needs to be fixed, and so does this selftest. The fix for this selftest is to pull the PMU registers out of the base register sublist into their own sublist and then create new, pmu-enabled vcpu configs which can be tested.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Ricardo Koller <ricarkol@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210531103344.29325-6-drjones@redhat.com
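A hypothetical sketch of the sublist/config split described above; the type and field names are illustrative, not necessarily the structures used by the test:

  #include <stdint.h>

  struct reg_sublist {
          const char *name;
          long capability;        /* KVM_CAP_* the sublist depends on, 0 if none */
          int feature;            /* KVM_ARM_VCPU_* feature to request, -1 if none */
          uint64_t *regs;
          uint64_t regs_n;
  };

  struct vcpu_config {
          const char *name;
          struct reg_sublist sublists[];  /* e.g. {base} or {base, pmu} */
  };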
-
Andrew Jones authored
Now that we can easily run the test for multiple vcpu configs, let's merge get-reg-list and get-reg-list-sve into just get-reg-list. We also make one final change to make running multiple tests more practical: fork each test rather than running it directly. That allows a test to fail while subsequent tests still run.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Ricardo Koller <ricarkol@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210531103344.29325-5-drjones@redhat.com
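The fork-per-config pattern described above looks roughly like the following sketch; run_test() and the config array are placeholders, not the test's actual symbols:

  #include <stdlib.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  struct vcpu_config;                             /* provided by the test */
  extern void run_test(struct vcpu_config *c);    /* placeholder for one run */

  static int run_all_configs(struct vcpu_config **configs, int nr_configs)
  {
          int i, status, ret = 0;

          for (i = 0; i < nr_configs; ++i) {
                  pid_t pid = fork();

                  if (pid == 0) {
                          run_test(configs[i]);   /* child: run one config */
                          exit(0);
                  }
                  waitpid(pid, &status, 0);
                  if (!WIFEXITED(status) || WEXITSTATUS(status))
                          ret = 1;                /* record failure, keep going */
          }
          return ret;
  }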
-
Andrew Jones authored
Add a new command line option that allows the user to select a specific configuration, e.g. --config=sve will give the sve config. Also provide help text and the --help/-h options.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Ricardo Koller <ricarkol@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210531103344.29325-4-drjones@redhat.com
-
Andrew Jones authored
We don't want to have to create a new binary for each vcpu config, so prepare to run the test for multiple vcpu configs in a single binary. We do this by factoring the test out of main() and then looping over configs. When given '--list' we still never print more than a single reg-list for a single vcpu config though, because it would be confusing otherwise. No functional change intended.

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Ricardo Koller <ricarkol@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210531103344.29325-3-drjones@redhat.com
-
Andrew Jones authored
We already break register lists into sublists that get selected based on vcpu config. However, since we only had two configs (vregs and sve), we didn't structure the code very well to manage them. Restructure it now to more cleanly handle register sublists that are dependent on the vcpu config. This patch has no intended functional change (except for the vcpu config name now being prepended to all output).

Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Ricardo Koller <ricarkol@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210531103344.29325-2-drjones@redhat.com
-
- 18 Jun, 2021 9 commits
-
-
Marc Zyngier authored
arm64 cache management function cleanup from Fuad Tabba, shared with the arm64 tree.

* arm64/for-next/caches:
  arm64: Rename arm64-internal cache maintenance functions
  arm64: Fix cache maintenance function comments
  arm64: sync_icache_aliases to take end parameter instead of size
  arm64: __clean_dcache_area_pou to take end parameter instead of size
  arm64: __clean_dcache_area_pop to take end parameter instead of size
  arm64: __clean_dcache_area_poc to take end parameter instead of size
  arm64: __flush_dcache_area to take end parameter instead of size
  arm64: dcache_by_line_op to take end parameter instead of size
  arm64: __inval_dcache_area to take end parameter instead of size
  arm64: Fix comments to refer to correct function __flush_icache_range
  arm64: Move documentation of dcache_by_line_op
  arm64: assembler: remove user_alt
  arm64: Downgrade flush_icache_range to invalidate
  arm64: Do not enable uaccess for invalidate_icache_range
  arm64: Do not enable uaccess for flush_icache_range
  arm64: Apply errata to swsusp_arch_suspend_exit
  arm64: assembler: add conditional cache fixups
  arm64: assembler: replace `kaddr` with `addr`

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
Fixes for the PMUv3 emulation of PMCR_EL0:

- Don't spuriously reset the cycle counter when resetting other counters
- Force PMCR_EL0 to become effective after having restored it

* kvm-arm64/pmu-fixes:
  KVM: arm64: Restore PMU configuration on first run
  KVM: arm64: Don't zero the cycle count register when PMCR_EL0.P is set
-
Marc Zyngier authored
Restoring a guest with an active virtual PMU results in no perf counters being instantiated on the host side. Not quite what you'd expect from a restore. In order to fix this, force a writeback of PMCR_EL0 on the first run of a vcpu (using a new request so that it happens once the vcpu has been loaded). This will in turn create all the host-side counters that were missing.

Reported-by: Jinank Jain <jinankj@amazon.de>
Tested-by: Jinank Jain <jinankj@amazon.de>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/87wnrbylxv.wl-maz@kernel.org
Link: https://lore.kernel.org/r/b53dfcf9bbc4db7f96154b1cd5188d72b9766358.camel@amazon.de
-
Alexandru Elisei authored
According to ARM DDI 0487G.a, page D13-3895, setting the PMCR_EL0.P bit to 1 has the following effect: "Reset all event counters accessible in the current Exception level, not including PMCCNTR_EL0, to zero." Similar behaviour is described for AArch32 on page G8-7022. Make it so.

Fixes: c01d6a18 ("KVM: arm64: pmu: Only handle supported event counters")
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210618105139.83795-1-alexandru.elisei@arm.com
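A sketch of that architected behaviour as it would apply inside KVM's PMCR_EL0 write handler; this is illustrative, not a verbatim copy of the kernel change:

  if (val & ARMV8_PMU_PMCR_P) {
          unsigned long mask = kvm_pmu_valid_counter_mask(vcpu);
          int i;

          /* PMCCNTR_EL0 (the cycle counter) is deliberately left alone */
          mask &= ~BIT(ARMV8_PMU_CYCLE_IDX);
          for_each_set_bit(i, &mask, 32)
                  kvm_pmu_set_counter_value(vcpu, i, 0);
  }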
-
Marc Zyngier authored
Cache maintenance updates from Yanan Wang, moving the CMOs down into the page-table code. This ensures that we only issue them when actually performing a mapping rather than upfront.

* kvm-arm64/mmu/stage2-cmos:
  KVM: arm64: Move guest CMOs to the fault handlers
  KVM: arm64: Tweak parameters of guest cache maintenance functions
  KVM: arm64: Introduce mm_ops member for structure stage2_attr_data
  KVM: arm64: Introduce two cache maintenance callbacks
-
Yanan Wang authored
We currently uniformly perform CMOs of D-cache and I-cache in function user_mem_abort before calling the fault handlers. If we get concurrent guest faults (e.g. translation faults, permission faults) or some really unnecessary guest faults caused by BBM, the CMOs for the first vcpu are necessary while the later ones are not. By moving the CMOs to the fault handlers, we can easily identify the conditions where they are really needed and avoid the unnecessary ones. Performing CMOs is a time-consuming process, especially when flushing a block range, so this solution reduces the load on KVM and improves the efficiency of the stage-2 page table code.

We can imagine two specific scenarios which will gain much benefit:
1) In a normal VM startup, this solution will improve the efficiency of handling guest page faults incurred by vCPUs when initially populating stage-2 page tables.
2) After live migration, the heavy workload will be resumed on the destination VM, but all the stage-2 page tables need to be rebuilt at that moment. This solution will ease the performance drop during the resume stage.

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210617105824.31752-5-wangyanan55@huawei.com
-
Yanan Wang authored
Adjust the parameter "kvm_pfn_t pfn" of __clean_dcache_guest_page and __invalidate_icache_guest_page to "void *va", which paves the way for converting these two guest CMO functions into callbacks in structure kvm_pgtable_mm_ops. No functional change.

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210617105824.31752-4-wangyanan55@huawei.com
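In prototype terms, the change described above looks roughly like the following sketch; the exact type of the size parameter may differ:

  /* before */
  void __clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size);
  void __invalidate_icache_guest_page(kvm_pfn_t pfn, unsigned long size);

  /* after: take a virtual address, matching the mm_ops callback shape */
  void __clean_dcache_guest_page(void *va, size_t size);
  void __invalidate_icache_guest_page(void *va, size_t size);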
-
Yanan Wang authored
Also add a mm_ops member to structure stage2_attr_data, since we will move I-cache maintenance for guest stage-2 to the permission path and as a result will need mm_ops for some callbacks.

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210617105824.31752-3-wangyanan55@huawei.com
-
Yanan Wang authored
To prepare for performing CMOs for guest stage-2 in the fault handlers in pgtable.c, introduce two cache maintenance callbacks in struct kvm_pgtable_mm_ops. We also adjust the comment alignment for the existing members, but make no real content change at all.

Reviewed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
[maz: fixed up comments and renamed callbacks]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210617105824.31752-2-wangyanan55@huawei.com
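Shape-wise, the addition looks roughly like the sketch below; the other members are elided and the callback names shown are illustrative (they reflect the post-rename cache-maintenance terminology rather than a confirmed copy of the header):

  struct kvm_pgtable_mm_ops {
          /* ... existing allocation, refcount and VA<->PA helpers ... */
          void    (*dcache_clean_inval_poc)(void *addr, size_t size);
          void    (*icache_inval_pou)(void *addr, size_t size);
  };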
-
- 14 Jun, 2021 7 commits
-
-
Marc Zyngier authored
Guest self-hosted debug tests from Ricardo Koller.

* kvm-arm64/selftest/debug:
  KVM: selftests: Add aarch64/debug-exceptions test
  KVM: selftests: Add exception handling support for aarch64
  KVM: selftests: Move GUEST_ASSERT_EQ to utils header
  KVM: selftests: Introduce UCALL_UNHANDLED for unhandled vector reporting
  KVM: selftests: Complete x86_64/sync_regs_test ucall
  KVM: selftests: Rename vm_handle_exception
-
Ricardo Koller authored
Covers fundamental tests for debug exceptions. The guest installs and handles its debug exceptions itself, without KVM_SET_GUEST_DEBUG.

Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210611011020.3420067-7-ricarkol@google.com
-
Ricardo Koller authored
Add the infrastructure needed to enable exception handling in aarch64 selftests. The exception handling defaults to an unhandled-exception handler which aborts the test, just like x86. These handlers can be overridden by calling vm_install_exception_handler(vector) or vm_install_sync_handler(vector, ec). The unhandled exception reporting from the guest is done using the ucall type introduced in a previous commit, UCALL_UNHANDLED. The exception handling code is inspired by kvm-unit-tests.

Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210611011020.3420067-6-ricarkol@google.com
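A sketch of how a test might use these hooks; the handler body, the VCPU_ID constant, the setup helpers and the vector/EC names are illustrative placeholders rather than confirmed symbols from the selftest framework:

  /* guest-side handler for a synchronous exception (e.g. a brk) */
  static void handle_brk(struct ex_regs *regs)
  {
          GUEST_SYNC(0);          /* report progress back to the host */
          regs->pc += 4;          /* step over the faulting instruction */
  }

  /* host-side setup, after the vm and vcpu have been created */
  vm_init_descriptor_tables(vm);
  vcpu_init_descriptor_tables(vm, VCPU_ID);
  vm_install_sync_handler(vm, VECTOR_SYNC_CURRENT, ESR_EC_BRK_INS, handle_brk);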
-
Ricardo Koller authored
Move GUEST_ASSERT_EQ to a common header, kvm_util.h, for other architectures and tests to use. Also modify __GUEST_ASSERT so it can be reused to implement GUEST_ASSERT_EQ.

Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210611011020.3420067-5-ricarkol@google.com
-
Ricardo Koller authored
x86, the only arch implementing exception handling, reports unhandled vectors using port IO at a specific port number. This replicates what ucall already does. Introduce a new ucall type, UCALL_UNHANDLED, for guests to report unhandled exceptions. Then replace the x86 unhandled vector exception reporting to use it instead of port IO. This new ucall type will be used in the next commits by arm64 to report unhandled vectors as well.

Tested: Forcing a page fault in the ./x86_64/xapic_ipi_test halter_guest_code() shows this:

  $ ./x86_64/xapic_ipi_test
  ...
  Unexpected vectored event in guest (vector:0xe)

Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210611011020.3420067-4-ricarkol@google.com
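The reporting flow described above is roughly the following sketch, split into the guest-side default handler and the host-side check after vcpu_run(); VCPU_ID and the register/field names are illustrative:

  /* guest side: default handler for vectors with no registered handler */
  static void unhandled_handler(struct ex_regs *regs)
  {
          ucall(UCALL_UNHANDLED, 1, regs->vector);
  }

  /* host side, after each vcpu_run() */
  struct ucall uc;

  if (get_ucall(vm, VCPU_ID, &uc) == UCALL_UNHANDLED)
          TEST_FAIL("Unexpected vectored event in guest (vector:0x%lx)",
                    uc.args[0]);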
-
Ricardo Koller authored
The guest in sync_regs_test does raw ucalls by directly accessing the ucall IO port. It makes these ucalls without setting %rdi to a `struct ucall`, which is what a ucall uses to pass messages. The issue is that if the host did a get_ucall (the receiver side), it would try to access the `struct ucall` at %rdi=0, which would lead to an error ("No mapping for vm virtual address, gva: 0x0"). This issue is currently benign as there is no get_ucall in sync_regs_test; however, that will change in the next commit, which changes the unhandled exception reporting mechanism to use ucalls. In that case, every vcpu_run is followed by a get_ucall to check if the guest is trying to report an unhandled exception. Fix this in advance by setting %rdi to a UCALL_NONE struct ucall for the sync_regs_test guest. Tested with gcc-[8,9,10] and clang-[9,11].

Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210611011020.3420067-3-ricarkol@google.com
-
Ricardo Koller authored
Rename the vm_handle_exception function to a name that indicates more clearly that it installs something: vm_install_exception_handler.

Reported-by: kernel test robot <oliver.sang@intel.com>
Suggested-by: Marc Zyngier <maz@kernel.org>
Suggested-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Ricardo Koller <ricarkol@google.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210611011020.3420067-2-ricarkol@google.com
-
- 11 Jun, 2021 8 commits
-
-
Marc Zyngier authored
Host stage-2 optimisations from Quentin Perret.

* kvm-arm64/mmu/reduce-vmemmap-overhead:
  KVM: arm64: Use less bits for hyp_page refcount
  KVM: arm64: Use less bits for hyp_page order
  KVM: arm64: Remove hyp_pool pointer from struct hyp_page
  KVM: arm64: Unify MMIO and mem host stage-2 pools
  KVM: arm64: Remove list_head from hyp_page
  KVM: arm64: Use refcount at hyp to check page availability
  KVM: arm64: Move hyp_pool locking out of refcount helpers
-
Quentin Perret authored
The hyp_page refcount is currently encoded on 4 bytes even though we never need to count that many objects in a page. Make it 2 bytes to save some space in the vmemmap. As overflows are more likely to happen as well, make sure to catch those with a BUG in the increment function.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210608114518.748712-8-qperret@google.com
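Taken together with the patches below (narrower order field, no hyp_pool pointer, no list_head), the per-page metadata ends up roughly like the following sketch; the exact field layout is illustrative:

  struct hyp_page {
          unsigned short refcount;        /* was a 4-byte count */
          unsigned short order;           /* was a 4-byte order */
          /* pool pointer and list_head removed by the patches below */
  };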
-
Quentin Perret authored
The hyp_page order is currently encoded on 4 bytes even though it is guaranteed to be smaller than this. Make it 2 bytes to reduce the hyp vmemmap overhead.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210608114518.748712-7-qperret@google.com
-
Quentin Perret authored
Each struct hyp_page currently contains a pointer to a hyp_pool struct where the page should be freed if its refcount reaches 0. However, this information can always be inferred from the context in the EL2 code, so drop the pointer to save a few bytes in the vmemmap.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210608114518.748712-6-qperret@google.com
-
Quentin Perret authored
We currently maintain two separate memory pools for the host stage-2: one for pages used in the page-table when mapping memory regions, and the other to map MMIO regions. The former is large enough to map all of memory with page granularity and the latter can cover an arbitrary portion of IPA space, but allows pages to be 'recycled'. However, this split makes accounting difficult to manage, as pages at intermediate levels of the page-table may be used to map both memory and MMIO regions. Simplify the scheme by merging both pools into one. This means we can now hit the -ENOMEM case in the memory abort path, but we're still guaranteed forward progress in the worst case by unmapping MMIO regions. On the plus side, this also means we can usually map a lot more MMIO space at once if memory ranges happen to be mapped with block mappings.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210608114518.748712-5-qperret@google.com
-
Quentin Perret authored
The list_head member of struct hyp_page is only needed when the page is attached to a free-list, which by definition implies the page is free. As such, nothing prevents us from using the page itself to store the list_head, hence reducing the size of the vmemmap.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210608114518.748712-4-qperret@google.com
-
Quentin Perret authored
The hyp buddy allocator currently checks the struct hyp_page list node to see if a page is available for allocation or not when trying to coalesce memory. Now that decrementing the refcount and attaching to the buddy tree is done in the same critical section, we can rely on the refcount of the buddy page to be in sync, which allows the list node check to be replaced with a refcount check. This will ease removing the list node from struct hyp_page later on.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210608114518.748712-3-qperret@google.com
-
Quentin Perret authored
The hyp_page refcount helpers currently rely on the hyp_pool lock for serialization. However, this means the refcounts can't be changed from the buddy allocator core as it already holds the lock, which means pages have to go through odd transient states. For example, when a page is freed, its refcount is set to 0, and the lock is transiently released before the page can be attached to a free list in the buddy tree. This is currently harmless as the allocator checks the list node of each page to see if it is available for allocation or not, but it means the page refcount can't be trusted to represent the state of the page even if the pool lock is held. In order to fix this, remove the pool locking from the refcount helpers, and move all the logic to the buddy allocator. This will simplify the removal of the list node from struct hyp_page in a later patch.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210608114518.748712-2-qperret@google.com
-