- 07 Mar, 2024 3 commits
Oliver Upton authored
* kvm-arm64/vm-configuration: (29 commits)
  : VM configuration enforcement, courtesy of Marc Zyngier
  :
  : Userspace has gained the ability to control the features visible
  : through the ID registers, yet KVM didn't take this into account as the
  : effective feature set when determining trap / emulation behavior. This
  : series adds:
  :
  :  - Mechanism for testing the presence of a particular CPU feature in
  :    the guest's ID registers
  :
  :  - Infrastructure for computing the effective value of VNCR-backed
  :    registers, taking into account the RES0 / RES1 bits for a
  :    particular VM configuration
  :
  :  - Implementation of 'fine-grained UNDEF' controls that shadow the
  :    FGT register definitions.
  KVM: arm64: Don't initialize idreg debugfs w/ preemption disabled
  KVM: arm64: Fail the idreg iterator if idregs aren't initialized
  KVM: arm64: Make build-time check of RES0/RES1 bits optional
  KVM: arm64: Add debugfs file for guest's ID registers
  KVM: arm64: Snapshot all non-zero RES0/RES1 sysreg fields for later checking
  KVM: arm64: Make FEAT_MOPS UNDEF if not advertised to the guest
  KVM: arm64: Make AMU sysreg UNDEF if FEAT_AMU is not advertised to the guest
  KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is not advertised to the guest
  KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest
  KVM: arm64: Streamline save/restore of HFG[RW]TR_EL2
  KVM: arm64: Move existing feature disabling over to FGU infrastructure
  KVM: arm64: Propagate and handle Fine-Grained UNDEF bits
  KVM: arm64: Add Fine-Grained UNDEF tracking information
  KVM: arm64: Rename __check_nv_sr_forward() to triage_sysreg_trap()
  KVM: arm64: Use the xarray as the primary sysreg/sysinsn walker
  KVM: arm64: Register AArch64 system register entries with the sysreg xarray
  KVM: arm64: Always populate the trap configuration xarray
  KVM: arm64: nv: Move system instructions to their own sys_reg_desc array
  KVM: arm64: Drop the requirement for XARRAY_MULTI
  KVM: arm64: nv: Turn encoding ranges into discrete XArray stores
  ...

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Oliver Upton authored
* kvm-arm64/misc:
  : Miscellaneous updates
  :
  :  - Fix handling of features w/ nonzero safe values in set_id_regs
  :    selftest
  :
  :  - Cleanup the unused kern_hyp_va() asm macro
  :
  :  - Differentiate nVHE and hVHE in boot-time message
  :
  :  - Several selftests cleanups
  :
  :  - Drop bogus return value from kvm_arch_create_vm_debugfs()
  :
  :  - Make save/restore of SPE and TRBE control registers affect EL1
  :    state in hVHE mode
  :
  :  - Typos
  KVM: arm64: Fix TRFCR_EL1/PMSCR_EL1 access in hVHE mode
  KVM: selftests: aarch64: Remove unused functions from vpmu test
  KVM: arm64: Fix typos
  KVM: Get rid of return value from kvm_arch_create_vm_debugfs()
  KVM: selftests: Print timer ctl register in ISTATUS assertion
  KVM: selftests: Fix GUEST_PRINTF() format warnings in ARM code
  KVM: arm64: removed unused kern_hyp_va asm macro
  KVM: arm64: add comments to __kern_hyp_va
  KVM: arm64: print Hyp mode
  KVM: arm64: selftests: Handle feature fields with nonzero minimum value correctly

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Oliver Upton authored
* kvm-arm64/feat_e2h0:
  : Support for FEAT_E2H0, courtesy of Marc Zyngier
  :
  : As described in the cover letter:
  :
  : Since ARMv8.1, the architecture has grown the VHE feature, which makes
  : EL2 a superset of EL1. With ARMv9.5 (and retroactively allowed from
  : ARMv8.1), the architecture allows implementations to have VHE as the
  : *only* implemented behaviour, meaning that HCR_EL2.E2H can be
  : implemented as RES1. As a follow-up, HCR_EL2.NV1 can also be
  : implemented as RES0, making the VHE-ness of the architecture
  : recursive.
  :
  : This series adds support for detecting the architectural feature of
  : E2H being RES1, leveraging the existing infrastructure for handling
  : out-of-spec CPUs that are VHE-only. Additionally, the (incomplete) NV
  : infrastructure in KVM is updated to enforce E2H=1 for guest
  : hypervisors on implementations that do not support NV1.
  arm64: cpufeatures: Fix FEAT_NV check when checking for FEAT_NV1
  arm64: cpufeatures: Only check for NV1 if NV is present
  arm64: cpufeatures: Add missing ID_AA64MMFR4_EL1 to __read_sysreg_by_encoding()
  KVM: arm64: Handle Apple M2 as not having HCR_EL2.NV1 implemented
  KVM: arm64: Force guest's HCR_EL2.E2H RES1 when NV1 is not implemented
  KVM: arm64: Expose ID_AA64MMFR4_EL1 to guests
  arm64: Treat HCR_EL2.E2H as RES1 when ID_AA64MMFR4_EL1.E2H0 is negative
  arm64: cpufeature: Detect HCR_EL2.NV1 being RES0
  arm64: cpufeature: Add ID_AA64MMFR4_EL1 handling
  arm64: sysreg: Add layout for ID_AA64MMFR4_EL1
  arm64: cpufeature: Correctly display signed override values
  arm64: cpufeatures: Correctly handle signed values
  arm64: Add macro to compose a sysreg field value

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
- 01 Mar, 2024 1 commit
Marc Zyngier authored
When running in hVHE mode, EL1 accesses are performed with the EL12 accessor, as we run with HCR_EL2.E2H=1. Unfortunately, both PMSCR_EL1 and TRFCR_EL1 are used with the EL1 accessor, meaning that we actually affect the EL2 state. Duh.

Switch to using the {read,write}_sysreg_el1() helpers that will do the right thing in all circumstances.

Note that the 'Fixes:' tag doesn't represent the point where the bug was introduced (there is no such point), but the first practical point where the hVHE feature is usable.

Cc: James Clark <james.clark@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Fixes: 38cba550 ("KVM: arm64: Force HCR_E2H in guest context when ARM64_KVM_HVHE is set")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20240229145417.3606279-1-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
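For illustration, a minimal sketch of the kind of accessor change described above (not the actual diff; the exact register spellings used here are assumptions for the example):

    /* Before (hVHE, E2H=1): the "_EL1" named access ends up touching the
     * EL2 instance of the register. */
    write_sysreg_s(0, SYS_PMSCR_EL1);

    /* After: the EL1-specific helper picks the _EL1 or _EL12 encoding
     * depending on HCR_EL2.E2H, so the guest's EL1 state is the one that
     * actually gets saved/cleared. */
    write_sysreg_el1(0, SYS_PMSCR);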
- 29 Feb, 2024 1 commit
Raghavendra Rao Ananta authored
vpmu_counter_access's disable_counter() carries a bug that disables all the counters that are enabled, instead of just the requested one. Fortunately, it's not an issue as there are no callers of it. Hence, instead of fixing it, remove the definition entirely. Remove enable_counter() as it's unused as well.

Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20231122221526.2750966-1-rananta@google.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
- 27 Feb, 2024 2 commits
Oliver Upton authored
Testing KVM with DEBUG_ATOMIC_SLEEP enabled doesn't get far before hitting the first splat:

  BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:1578
  in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 13062, name: vgic_lpi_stress
  preempt_count: 1, expected: 0
  2 locks held by vgic_lpi_stress/13062:
   #0: ffff080084553240 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0xc0/0x13f0
   #1: ffff800080485f08 (&kvm->arch.config_lock){+.+.}-{3:3}, at: kvm_arch_vcpu_ioctl+0xd60/0x1788
  CPU: 19 PID: 13062 Comm: vgic_lpi_stress Tainted: G W O 6.8.0-dbg-DEV #1
  Call trace:
   dump_backtrace+0xf8/0x148
   show_stack+0x20/0x38
   dump_stack_lvl+0xb4/0xf8
   dump_stack+0x18/0x40
   __might_resched+0x248/0x2a0
   __might_sleep+0x50/0x88
   down_write+0x30/0x150
   start_creating+0x90/0x1a0
   __debugfs_create_file+0x5c/0x1b0
   debugfs_create_file+0x34/0x48
   kvm_reset_sys_regs+0x120/0x1e8
   kvm_reset_vcpu+0x148/0x270
   kvm_arch_vcpu_ioctl+0xddc/0x1788
   kvm_vcpu_ioctl+0xb6c/0x13f0
   __arm64_sys_ioctl+0x98/0xd8
   invoke_syscall+0x48/0x108
   el0_svc_common+0xb4/0xf0
   do_el0_svc+0x24/0x38
   el0_svc+0x54/0x128
   el0t_64_sync_handler+0x68/0xc0
   el0t_64_sync+0x1a8/0x1b0

kvm_reset_vcpu() disables preemption as it needs to unload vCPU state from the CPU to twiddle with it, which subsequently explodes when taking the parent inode's rwsem while creating the idreg debugfs file.

Fix it by moving the initialization to kvm_arch_create_vm_debugfs().

Fixes: 89176658 ("KVM: arm64: Add debugfs file for guest's ID registers")
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240227094115.1723330-3-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
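In rough terms, the fix moves the file creation into the VM-level debugfs hook, where sleeping is allowed. A sketch of the shape of the change (the helper name and exact signature are assumptions, not the real diff):

    /* Called from generic KVM code with no locks held and preemption
     * enabled, so debugfs (which may sleep) is safe here, unlike the vCPU
     * reset path. */
    void kvm_arch_create_vm_debugfs(struct kvm *kvm)
    {
        kvm_sys_regs_create_debugfs(kvm);   /* hypothetical: creates the idregs file */
    }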
Oliver Upton authored
Return an error to userspace if the VM's ID register values haven't been initialized, in preparation for changing the debugfs file initialization order.

Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240227094115.1723330-2-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
- 24 Feb, 2024 1 commit
Bjorn Helgaas authored
Fix typos, most reported by "codespell arch/arm64". Only touches comments, no code changes.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Zenghui Yu <yuzenghui@huawei.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: kvmarm@lists.linux.dev
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
Link: https://lore.kernel.org/r/20240103231605.1801364-6-helgaas@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
- 23 Feb, 2024 1 commit
Oliver Upton authored
The general expectation with debugfs is that any initialization failure is nonfatal. Nevertheless, kvm_arch_create_vm_debugfs() allows implementations to return an error and kvm_create_vm_debugfs() allows that to fail VM creation.

Change to a void return to discourage architectures from making debugfs failures fatal for the VM. Seems like everyone already had the right idea, as all implementations already return 0 unconditionally.

Acked-by: Marc Zyngier <maz@kernel.org>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Link: https://lore.kernel.org/r/20240216155941.2029458-1-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
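The change being described boils down to a return type change on the arch hook; roughly:

    /* Before: arch code could fail debugfs setup and thereby fail VM
     * creation. */
    int kvm_arch_create_vm_debugfs(struct kvm *kvm);

    /* After: debugfs problems are never fatal for the VM; all existing
     * implementations returned 0 unconditionally anyway. */
    void kvm_arch_create_vm_debugfs(struct kvm *kvm);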
- 22 Feb, 2024 1 commit
Marc Zyngier authored
In order to ease the transition towards a state of absolute paranoia where all RES0/RES1 bits get checked against what KVM knows of them, make the checks optional, guarded by a config symbol (CONFIG_KVM_ARM64_RES_BITS_PARANOIA) that defaults to n.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/kvm/87frxka7ud.wl-maz@kernel.org/
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
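A gate like this usually ends up as a single early-out in the checking routine; a sketch under the assumption that the checks live in one helper (check_res_bits() is an illustrative name, not necessarily the real one):

    static void check_res_bits(void)
    {
        /* The RES0/RES1 audit is aimed at KVM developers; it defaults to
         * off so normal builds don't pay for it. */
        if (!IS_ENABLED(CONFIG_KVM_ARM64_RES_BITS_PARANOIA))
            return;

        /* ... compare the recorded RES0/RES1 masks against what KVM
         * declares for each register ... */
    }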
- 19 Feb, 2024 26 commits
Marc Zyngier authored
Debugging ID register setup can be a complicated affair. Give the kernel hacker a way to dump that state in an easy to parse way.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-27-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
As KVM now strongly relies on accurately handling the RES0/RES1 bits on a number of paths, add a compile-time checker that will blow up in the face of the innocent bystander, should they try to sneak in an update that changes any of these RES0/RES1 fields. It is expected that such an update will come with the relevant KVM update if needed.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-26-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
We unconditionally enable FEAT_MOPS, which is obviously wrong. So let's only do that when it is advertised to the guest. Which means we need to rely on a per-vcpu HCRX_EL2 shadow register.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20240214131827.2856277-25-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
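The idea amounts to computing the vCPU's HCRX_EL2 value from the guest's ID registers rather than using a fixed constant; a sketch (the bit names follow the architecture, but HCRX_GUEST_BASE_FLAGS and the struct member are placeholders for the example):

    /* Per-vCPU HCRX_EL2 shadow: only enable the MOPS instructions when
     * FEAT_MOPS is advertised in the guest's ID_AA64ISAR2_EL1. */
    u64 hcrx = HCRX_GUEST_BASE_FLAGS;   /* placeholder for the unconditional enables */

    if (kvm_has_feat(vcpu->kvm, ID_AA64ISAR2_EL1, MOPS, IMP))
        hcrx |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);

    vcpu->arch.hcrx_el2 = hcrx;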
Marc Zyngier authored
No AMU? No AMU! If we see an AMU-related trap, let's turn it into an UNDEF!

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-24-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
As part of the ongoing effort to honor the guest configuration, add the necessary checks to make PIR_EL1 and co UNDEF if not advertised to the guest, and avoid context switching them.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20240214131827.2856277-23-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
Outer Shareable and Range TLBI instructions shouldn't be made available to the guest if they are not advertised. Use FGU to disable those, and set HCR_EL2.TTLBOS in the case the host doesn't have FGT. Note that in that latter case, we cannot efficiently disable TLBI Range instructions, as this would require trapping all TLBIs.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20240214131827.2856277-22-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
The way we save/restore HFG[RW]TR_EL2 can now be simplified, and the Ampere erratum hack is the only thing that still stands out.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-21-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
We already trap a bunch of existing features for the purpose of disabling them (MAIR2, POR, ACCDATA, SME...). Let's move them over to our brand new FGU infrastructure.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-20-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
In order to correctly honor our FGU bits, they must be converted into a set of FGT bits. They get merged as part of the existing FGT setting. Similarly, the UNDEF injection phase takes place when handling the trap.

This results in a bit of rework in the FGT macros in order to help with the code generation, as burying per-CPU accesses in macros results in a lot of expansion, not to mention the vcpu->kvm access on nvhe (kern_hyp_va() is not optimisation-friendly).

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-19-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
In order to efficiently handle system register access being disabled, and this resulting in an UNDEF exception being injected, we introduce the (slightly dubious) concept of Fine-Grained UNDEF, modeled after the architectural Fine-Grained Traps.

For each FGT group, we keep a 64 bit word that has the exact same bit assignment as the corresponding FGT register, where a 1 indicates that trapping this register should result in an UNDEF exception being reinjected.

So far, nothing populates this information, nor sets the corresponding trap bits.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-18-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
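Conceptually, the tracking amounts to one shadow word per FGT register, consulted when a trap arrives; a sketch (the struct member, enum and surrounding code are placeholders, not the real definitions):

    /* One 64-bit "fine-grained UNDEF" word per FGT group, with the same
     * bit layout as the corresponding FGT register. */
    struct kvm_arch {
        /* ... */
        u64 fgu[__NR_FGT_GROUP_IDS__];
    };

    /* At trap time: if the bit that caused the trap is set in the FGU
     * word, the guest gets an UNDEF instead of an emulated access. */
    if (kvm->arch.fgu[fgt_group] & BIT(fgt_bit)) {
        kvm_inject_undefined(vcpu);
        return true;    /* trap handled */
    }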
Marc Zyngier authored
__check_nv_sr_forward() is not specific to NV anymore, and does a lot more. Rename it to triage_sysreg_trap(), making it plain that its role is to decide where an exception is to be handled.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-17-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
Since we always start sysreg/sysinsn handling by searching the xarray, use it as the source of the index in the correct sys_reg_desc array. This allows some cleanup, such as moving the handling of unknown sysregs to a single location.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-16-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
In order to reduce the number of lookups that we have to perform when handling a sysreg, register each AArch64 sysreg descriptor with the global xarray. The index of the descriptor is stored as a 10 bit field in the data word.

Subsequent patches will retrieve and use the stored index.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-15-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
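Packing a small index into the xarray's tagged-value entry can look roughly like this (the mask name, xarray name and the "+1 so that 0 means none" convention are assumptions for the sketch):

    #define SR_DESC_IDX_MASK    GENMASK(9, 0)   /* 10-bit descriptor index */

    /* Registration: store "index + 1" so that 0 keeps meaning
     * "no descriptor recorded for this encoding". */
    xa_store(&sr_xa, reg_to_encoding(&sys_reg_descs[i]),
             xa_mk_value(FIELD_PREP(SR_DESC_IDX_MASK, i + 1)), GFP_KERNEL);

    /* Lookup side: */
    void *entry = xa_load(&sr_xa, encoding);
    unsigned int idx = entry ? FIELD_GET(SR_DESC_IDX_MASK, xa_to_value(entry)) : 0;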
Marc Zyngier authored
As we are going to rely more and more on the global xarray that contains the trap configuration, always populate it, even in the non-NV case.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-14-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
As NV results in a bunch of system instructions being trapped, it makes sense to pull the system instructions into their own little array, where they will eventually be joined by AT, TLBI and a bunch of other CMOs.

Based on an initial patch by Jintack Lim.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-13-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
Now that we don't use xa_store_range() anymore, drop the added complexity of XARRAY_MULTI for KVM. It is likely still pulled in by other bits of the kernel though.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-12-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
In order to be able to store different values for members of an encoding range, replace xa_store_range() calls with discrete xa_store() calls and an encoding iterator. We end up using a bit more memory, but we gain some flexibility that we will make use of shortly.

Take this opportunity to tidy up the error handling path.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20240214131827.2856277-11-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
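The shape of the change, roughly (the xarray name and the encoding iterator are stand-ins; the real patch has its own helper for walking an encoding range):

    /* Instead of one xa_store_range() covering [first, last], store every
     * discrete encoding on its own, so that later patches can give each
     * encoding a different value. */
    for (enc = first; enc <= last; enc = next_encoding(enc)) {
        ret = xa_err(xa_store(&trap_xa, enc, xa_mk_value(cfg), GFP_KERNEL));
        if (ret)
            goto out_err;
    }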
Marc Zyngier authored
Negative trap bits are a massive pain. They are, on the surface, indistinguishable from RES0 bits. Do you trap? Or do you ignore?

Thankfully, we now have the right infrastructure to check for RES0 bits as long as the register is backed by VNCR, which is the case for the FGT registers. Use that information as a discriminant when handling a trap that is potentially caused by an FGT.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-10-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
There is no reason to have separate FGT group identifiers for the debug fine grain trapping. The sole requirement is to provide the *names* so that the SR_FGF() macro can do its magic of picking the correct bit definition. So let's alias HDFGWTR_GROUP and HDFGRTR_GROUP.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-9-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
Now that we have the infrastructure to enforce a sanitised register value depending on the VM configuration, drop the helper that only used the architectural RES0 value.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-8-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
Just like its little friends, HCRX_EL2 gets the feature set treatment when backed by VNCR.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-7-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
Fine Grained Traps are controlled by a whole bunch of features. Each one of them must be checked and the corresponding masks computed so that we don't let the guest apply traps it shouldn't be using.

This takes care of HFG[IRW]TR_EL2, HDFG[RW]TR_EL2, and HAFGRTR_EL2.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20240214131827.2856277-6-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
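In practice this is a long series of feature checks, each one widening the RES0 mask of the relevant FGT register when a feature is not exposed to the guest; in the spirit of the sketch below (the helper, mask macro and SPE bit collection names are illustrative, not the real ones):

    u64 res0 = HDFGRTR_EL2_RES0;

    /* No SPE for this guest? Then all the statistical-profiling trap bits
     * in HDFGRTR_EL2 behave as RES0 for this VM. */
    if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, IMP))
        res0 |= HDFGRTR_SPE_BITS;   /* placeholder for the SPE-related bits */

    set_sysreg_masks(kvm, HDFGRTR_EL2, res0, HDFGRTR_EL2_RES1);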
Marc Zyngier authored
We can now start making use of our sanitising masks by setting them to values that depend on the guest's configuration. First up are VTTBR_EL2, VTCR_EL2, VMPIDR_EL2 and HCR_EL2.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-5-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Marc Zyngier authored
VNCR-backed "registers" are actually only memory. Which means that there is zero control over what the guest can write, and that it is the hypervisor's job to actually sanitise the content of the backing store. Yeah, this is fun.

In order to preserve some form of sanity, add a repainting mechanism that makes use of a per-VM set of RES0/RES1 masks, one pair per VNCR register. These masks get applied on access to the backing store via __vcpu_sys_reg(), ensuring that the state that is consumed by KVM is correct.

So far, nothing populates these masks, but stay tuned.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Link: https://lore.kernel.org/r/20240214131827.2856277-4-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
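The repainting itself is just two masks applied whenever the backing store is read; a sketch of the idea (the structure and function names are assumptions, not the real ones):

    struct reg_masks {
        u64 res0;
        u64 res1;
    };

    /* Force RES0 bits to 0 and RES1 bits to 1 every time a VNCR-backed
     * value is read back from the backing store. */
    static u64 sanitise_vncr_reg(u64 raw, const struct reg_masks *m)
    {
        return (raw & ~m->res0) | m->res1;
    }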
Marc Zyngier authored
In order to make it easier to check whether a particular feature is exposed to a guest, add a new set of helpers, with kvm_has_feat() being the most useful.

Let's start making use of them in the PMU code (courtesy of Oliver). Follow-up changes will introduce additional use patterns.

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Co-developed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240214131827.2856277-3-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
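The helper reads like a predicate over the guest's (possibly userspace-restricted) ID registers; typical usage looks along these lines (the surrounding statements and the undef_pmu_accesses() helper are illustrative, not actual call sites):

    /* "Does this guest see an AMU?" -- answered from the guest's view of
     * ID_AA64PFR0_EL1 rather than from the host's capabilities. */
    if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP))
        amu_visible = true;

    /* Works the other way around too, e.g. deciding to UNDEF accesses
     * when a feature is not advertised: */
    if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMUVer, IMP))
        undef_pmu_accesses(kvm);    /* hypothetical helper */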
Marc Zyngier authored
Despite having the control bits for FEAT_SPECRES and FEAT_PACM, the ID register fields are either incomplete or missing. Fix it.

Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20240214131827.2856277-2-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
- 15 Feb, 2024 1 commit
Marc Zyngier authored
Using this_cpu_has_cap() has the potential to go wrong when used system-wide on a preemptible kernel. Instead, use the __system_matches_cap() helper when checking for FEAT_NV in the FEAT_NV1 probing helper.

Fixes: 3673d01a ("arm64: cpufeatures: Only check for NV1 if NV is present")
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/kvmarm/86bk8k5ts3.wl-maz@kernel.org/
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
- 13 Feb, 2024 1 commit
Oliver Upton authored
Zenghui noted that the test assertion for the ISTATUS bit is printing the current timer value instead of the control register in the case of failure.

While the assertion is sound, printing CNT isn't informative. Change things around to actually print the CTL register value instead.

Reported-by: Zenghui Yu <yuzenghui@huawei.com>
Closes: https://lore.kernel.org/kvmarm/3188e6f1-f150-f7d0-6c2b-5b7608b0b012@huawei.com/
Reviewed-by: Zenghui Yu <zenghui.yu@linux.dev>
Link: https://lore.kernel.org/r/20240212210932.3095265-2-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
- 12 Feb, 2024 2 commits
Sean Christopherson authored
Fix a pile of -Wformat warnings in the KVM ARM selftests code, almost all of which are benign "long" versus "long long" issues (selftests are 64-bit only, and the guest printf code treats "ll" the same as "l"). The code itself isn't problematic, but the warnings make it impossible to build ARM selftests with -Werror, which does detect real issues from time to time.

Opportunistically have GUEST_ASSERT_BITMAP_REG() interpret set_expected, which is a bool, as an unsigned decimal value, i.e. have it print '0' or '1' instead of '0x0' or '0x1'.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Tested-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20240202234603.366925-1-seanjc@google.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Joey Gouly authored
The last usage of this macro was removed in:

  commit 5dc33bd1 ("KVM: arm64: nVHE: Pass pointers consistently to hyp-init")

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Oliver Upton <oliver.upton@linux.dev>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20240208105422.3444159-3-joey.gouly@arm.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>