- 20 Oct, 2024 8 commits
-
-
Sean Christopherson authored
Ignore nCR3[4:0] when loading PDPTEs from memory for nested SVM, as bits 4:0 of CR3 are ignored when PAE paging is used, and thus VMRUN doesn't enforce 32-byte alignment of nCR3. In the absolute worst case scenario, failure to ignore bits 4:0 can result in an out-of-bounds read, e.g. if the target page is at the end of a memslot, and the VMM isn't using guard pages. Per the APM: The CR3 register points to the base address of the page-directory-pointer table. The page-directory-pointer table is aligned on a 32-byte boundary, with the low 5 address bits 4:0 assumed to be 0. And the SDM's much more explicit: 4:0 Ignored Note, KVM gets this right when loading PDPTRs, it's only the nSVM flow that is broken. Fixes: e4e517b4 ("KVM: MMU: Do not unconditionally read PDPTE from guest memory") Reported-by: Kirk Swidowski <swidowski@google.com> Cc: Andy Nguyen <theflow@google.com> Cc: 3pvd <3pvd@google.com> Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <seanjc@google.com> Message-ID: <20241009140838.1036226-1-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
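For illustration, a minimal standalone sketch of the masking this implies (the macro and helper names below are made up for the example; this is not the actual nested SVM code):

    #include <stdint.h>

    /* CR3[4:0] are ignored under PAE paging; the PDPT is 32-byte aligned. */
    #define PAE_PDPT_ALIGN_MASK  (~(uint64_t)0x1f)

    /* Hypothetical helper: GPA of the 32-byte PDPT for a (possibly
     * unaligned) nCR3 value taken from the VMCB. */
    static uint64_t pae_pdpt_gpa(uint64_t ncr3)
    {
            return ncr3 & PAE_PDPT_ALIGN_MASK;
    }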
-
Maxim Levitsky authored
Reset the segment cache after segment initialization in vmx_vcpu_reset() to harden KVM against caching stale/uninitialized data. Without the recent fix to bypass the cache in kvm_arch_vcpu_put(), the following scenario is possible: - vCPU is just created, and the vCPU thread is preempted before SS.AR_BYTES is written in vmx_vcpu_reset(). - When scheduling out the vCPU task, kvm_arch_vcpu_in_kernel() => vmx_get_cpl() reads and caches '0' for SS.AR_BYTES. - vmx_vcpu_reset() => seg_setup() configures SS.AR_BYTES, but doesn't invoke vmx_segment_cache_clear() to invalidate the cache. As a result, KVM retains a stale value in the cache, which can be read, e.g. via KVM_GET_SREGS. Usually this is not a problem because the VMX segment cache is reset on each VM-Exit, but if the userspace VMM (e.g. KVM selftests) reads and writes system registers just after the vCPU was created, _without_ modifying SS.AR_BYTES, userspace will write back the stale '0' value and ultimately will trigger a VM-Entry failure due to an incorrect SS segment type. Invalidating the cache after writing the VMCS doesn't address the general issue of cache accesses from IRQ context being unsafe, but it does prevent KVM from clobbering the VMCS, i.e. mitigates the harm done _if_ KVM has a bug that results in an unsafe cache access. Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com> Fixes: 2fb92db1 ("KVM: VMX: Cache vmcs segment fields") [sean: rework changelog to account for previous patch] Signed-off-by: Sean Christopherson <seanjc@google.com> Message-ID: <20241009175002.1118178-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
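A hedged sketch of the ordering the patch establishes (heavily simplified; seg_setup() and vmx_segment_cache_clear() are the helpers named above, to_vmx() is assumed from the VMX code, and the rest of the reset path is elided):

    static void vmx_vcpu_reset_segments_sketch(struct kvm_vcpu *vcpu)
    {
            struct vcpu_vmx *vmx = to_vmx(vcpu);

            seg_setup(VCPU_SREG_SS);        /* (re)write SS.AR_BYTES into the VMCS */
            /* ... remaining segment/VMCS initialization ... */

            /* Invalidate last, so nothing cached before or during the reset,
             * e.g. by a preempted vmx_get_cpl(), can be served afterwards. */
            vmx_segment_cache_clear(vmx);
    }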
-
Sean Christopherson authored
Massage the documentation for KVM_X86_QUIRK_SLOT_ZAP_ALL to call out that it applies to moved memslots as well as deleted memslots, to avoid KVM's "fast zap" terminology (which has no meaning for userspace), and to reword the documented targeted zap behavior to specifically say that KVM _may_ zap a subset of all SPTEs. As evidenced by the fix to zap non-leaf SPTEs with gPTEs, formally documenting KVM's exact internal behavior is risky and unnecessary. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-ID: <20241009192345.1148353-4-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Sean Christopherson authored
Add a lockdep assertion in kvm_unmap_gfn_range() to ensure that either mmu_invalidate_in_progress is elevated, or that the range is being zapped due to memslot removal (loosely detected by slots_lock being held). Zapping SPTEs without mmu_invalidate_{in_progress,seq} protection is unsafe as KVM's page fault path snapshots state before acquiring mmu_lock, and thus can create SPTEs with stale information if vCPUs aren't forced to retry faults (due to seeing an in-progress or past MMU invalidation). Memslot removal is a special case, as the memslot is retrieved outside of mmu_invalidate_seq, i.e. doesn't use the "standard" protections, and instead relies on SRCU synchronization to ensure any in-flight page faults are fully resolved before zapping SPTEs. Signed-off-by: Sean Christopherson <seanjc@google.com> Message-ID: <20241009192345.1148353-3-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
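The assertion roughly amounts to the following check (a sketch, not necessarily the exact hunk; lockdep_assert_once() and lockdep_is_held() are existing lockdep primitives):

    /* In kvm_unmap_gfn_range(): zapping must be covered either by an
     * in-progress mmu_invalidate sequence, or by memslot removal, which
     * holds slots_lock and relies on SRCU synchronization instead. */
    lockdep_assert_once(kvm->mmu_invalidate_in_progress ||
                        lockdep_is_held(&kvm->slots_lock));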
-
Sean Christopherson authored
When performing a targeted zap on memslot removal, zap only MMU pages that shadow guest PTEs, as zapping all SPs that "match" the gfn is inexact and unnecessary. Furthermore, for_each_gfn_valid_sp() arguably shouldn't exist, because it doesn't do what most people would expect it to do. The "round gfn for level" adjustment that is done for direct SPs (no gPTE) means that the exact gfn comparison will not get a match, even when a SP does "cover" a gfn, or was even created specifically for a gfn. For memslot deletion specifically, KVM's behavior will vary significantly based on the size and alignment of a memslot, and in weird ways. E.g. for a 4KiB memslot, KVM will zap more SPs if the slot is 1GiB aligned than if it's only 4KiB aligned. And as described below, zapping SPs in the aligned case overzaps for direct MMUs, as odds are good the upper-level SPs are serving other memslots. To iterate over all potentially-relevant gfns, KVM would need to make a pass over the hash table for each level, with the gfn used for lookup rounded for said level. And then check that the SP is of the correct level, too, e.g. to avoid over-zapping. But even then, KVM would massively overzap, as processing every level is all but guaranteed to zap SPs that serve other memslots, especially if the memslot being removed is relatively small. KVM could mitigate that issue by processing only levels that can be possible guest huge pages, i.e. are less likely to be re-used for other memslots, but while somewhat logical, that's quite arbitrary and would be a bit of a mess to implement. So, zap only SPs with gPTEs, as the resulting behavior is easy to describe, is predictable, and is explicitly minimal, i.e. KVM only zaps SPs that absolutely must be zapped. Cc: Yan Zhao <yan.y.zhao@intel.com> Signed-off-by: Sean Christopherson <seanjc@google.com> Reviewed-by: Yan Zhao <yan.y.zhao@intel.com> Tested-by: Yan Zhao <yan.y.zhao@intel.com> Message-ID: <20241009192345.1148353-2-seanjc@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Kirill A. Shutemov authored
AMD SEV-SNP and Intel TDX have limited access to MTRR: either it is not advertised in CPUID or it cannot be programmed (on TDX, due to #VE on CR0.CD clear). This results in guests using uncached mappings where they shouldn't, and in pmd/pud_set_huge() failures due to the non-uniform memory type reported by mtrr_type_lookup(). Override MTRR state, making it WB by default as the kernel does for Hyper-V guests. Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Suggested-by: Binbin Wu <binbin.wu@intel.com> Cc: Juergen Gross <jgross@suse.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Juergen Gross <jgross@suse.com> Message-ID: <20241015095818.357915-1-kirill.shutemov@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
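The Hyper-V precedent referenced above is the mtrr_overwrite_state() helper; a hedged sketch of the shape of the call involved (the exact call site and conditions for SEV-SNP/TDX guests may differ):

    /* Pretend the MTRRs say "everything is write-back", since the guest
     * can neither read nor program the real ones. */
    mtrr_overwrite_state(NULL, 0, MTRR_TYPE_WRBACK);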
-
Dr. David Alan Gilbert authored
The last use of kvm_vcpu_gfn_to_pfn_atomic was removed by commit 1bbc60d0 ("KVM: x86/mmu: Remove MMU auditing"). Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Message-ID: <20241001141354.18009-3-linux@treblig.org> [Adjust Documentation/virt/kvm/locking.rst. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Dr. David Alan Gilbert authored
The last use of kvm_vcpu_gfn_to_pfn was removed by commit b1624f99 ("KVM: Remove kvm_vcpu_gfn_to_page() and kvm_vcpu_gpa_to_page()"). Remove it. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Message-ID: <20241001141354.18009-2-linux@treblig.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 06 Oct, 2024 3 commits
-
-
Paolo Bonzini authored
Merge tag 'kvmarm-fixes-6.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD KVM/arm64 fixes for 6.12, take #1 - Fix pKVM error path on init, making sure we do not change critical system registers as we're about to fail - Make sure that the host's vector length is capped at a value common to all CPUs - Fix kvm_has_feat*() handling of "negative" features, as the current code is pretty broken - Promote Joey to the status of official reviewer, while James steps down -- hopefully only temporarily
-
Paolo Bonzini authored
Guard them with CONFIG_KVM_X86_COMMON rather than the two vendor modules. In practice this has no functional change, because CONFIG_KVM_X86_COMMON is set if and only if at least one vendor-specific module is being built. However, it is cleaner to specify CONFIG_KVM_X86_COMMON for functions that are used in kvm.ko. Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Fixes: 590b09b1 ("KVM: x86: Register "emergency disable" callbacks when virt is enabled") Fixes: 6d55a942 ("x86/reboot: Unconditionally define cpu_emergency_virt_cb typedef") Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
kvm.ko is nothing but library code shared by kvm-intel.ko and kvm-amd.ko. It provides no functionality on its own and it is unnecessary unless one of the vendor-specific modules is compiled. In particular, /dev/kvm is not created until one of kvm-intel.ko or kvm-amd.ko is loaded. Use CONFIG_KVM to decide if it is built-in or a module, but use the vendor-specific modules for the actual decision on whether to build it. This also fixes a build failure when CONFIG_KVM_INTEL and CONFIG_KVM_AMD are both disabled. The cpu_emergency_register_virt_callback() function is called from kvm.ko, but it is only defined if at least one of CONFIG_KVM_INTEL and CONFIG_KVM_AMD is provided. Fixes: 590b09b1 ("KVM: x86: Register "emergency disable" callbacks when virt is enabled") Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 03 Oct, 2024 2 commits
-
-
Paolo Bonzini authored
As was tried in commit 4e103134 ("KVM: x86/mmu: Zap only the relevant pages when removing a memslot"), all shadow pages, i.e. non-leaf SPTEs, need to be zapped. All of the accounting for a shadow page is tied to the memslot, i.e. the shadow page holds a reference to the memslot, for all intents and purposes. Deleting the memslot without removing all relevant shadow pages, as is done when KVM_X86_QUIRK_SLOT_ZAP_ALL is disabled, results in NULL pointer derefs when tearing down the VM. Reintroduce from that commit the code that walks the whole memslot when there are active shadow MMU pages. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Marc Zyngier authored
Oliver reports that the kvm_has_feat() helper is not behaving as expected for negative features. On investigation, the main issue seems to be caused by the following construct: #define get_idreg_field(kvm, id, fld) \ (id##_##fld##_SIGNED ? \ get_idreg_field_signed(kvm, id, fld) : \ get_idreg_field_unsigned(kvm, id, fld)) where one side of the expression evaluates as something signed, and the other as something unsigned. In retrospect, this is totally braindead, as the compiler converts this into an unsigned expression. When compared to something that is 0, the test is simply elided. Epic fail. A similar issue exists in the expand_field_sign() macro. The correct way to handle this is to choose between signed and unsigned comparisons, so that both sides of the ternary expression are of the same type (bool). In order to keep the code readable (sort of), we introduce new comparison primitives taking an operator as a parameter, and rewrite the kvm_has_feat*() helpers in terms of these primitives. Fixes: c62d7a23 ("KVM: arm64: Add feature checking helpers") Reported-by: Oliver Upton <oliver.upton@linux.dev> Tested-by: Oliver Upton <oliver.upton@linux.dev> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20241002204239.2051637-1-maz@kernel.org Signed-off-by: Marc Zyngier <maz@kernel.org>
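A standalone illustration of the pitfall in plain C (not the KVM macros themselves): when the two arms of a ?: expression have signed and unsigned types, the usual arithmetic conversions make the whole expression unsigned, so a signed comparison against 0 can never see a negative value.

    #include <stdio.h>

    int main(void)
    {
            long signed_field = -1;            /* a "negative" feature value */
            unsigned long unsigned_field = 1;
            int is_signed = 1;

            /* The ternary's type is unsigned long, so -1 becomes ULONG_MAX
             * and the "< 0" test below is effectively elided. */
            if ((is_signed ? signed_field : unsigned_field) < 0)
                    printf("negative, as intended\n");
            else
                    printf("promoted to unsigned: the check never fires\n");
            return 0;
    }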
-
- 01 Oct, 2024 4 commits
-
-
Mark Brown authored
The recent addition of support for testing with the x86 specific quirk KVM_X86_QUIRK_SLOT_ZAP_ALL disabled in the generic memslot tests broke the build of the KVM selftests for all other architectures: In file included from include/kvm_util.h:8, from include/memstress.h:13, from memslot_modification_stress_test.c:21: memslot_modification_stress_test.c: In function ‘main’: memslot_modification_stress_test.c:176:38: error: ‘KVM_X86_QUIRK_SLOT_ZAP_ALL’ undeclared (first use in this function) 176 | KVM_X86_QUIRK_SLOT_ZAP_ALL); | ^~~~~~~~~~~~~~~~~~~~~~~~~~ Add __x86_64__ guard defines to avoid building the relevant code on other architectures. Fixes: 61de4c34 ("KVM: selftests: Test memslot move in memslot_perf_test with quirk disabled") Fixes: 218f6415 ("KVM: selftests: Allow slot modification stress test with quirk disabled") Reported-by: Aishwarya TCV <aishwarya.tcv@arm.com> Signed-off-by: Mark Brown <broonie@kernel.org> Message-ID: <20240930-kvm-build-breakage-v1-1-866fad3cc164@kernel.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
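The shape of the fix is an architecture guard around the x86-only constant; a hedged sketch (the helper below is invented for illustration and assumes new enough kernel headers on x86_64):

    #include <stdint.h>
    #ifdef __x86_64__
    #include <linux/kvm.h>          /* defines KVM_X86_QUIRK_SLOT_ZAP_ALL */
    #endif

    /* Only reference the x86-only quirk when building for x86_64, so the
     * other architectures keep compiling. */
    static uint64_t slot_zap_quirk(void)
    {
    #ifdef __x86_64__
            return KVM_X86_QUIRK_SLOT_ZAP_ALL;
    #else
            return 0;
    #endif
    }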
-
Marc Zyngier authored
It has been a while since James had any significant bandwidth to review KVM/arm64 patches. But in the meantime, Joey has stepped up and did a really good job reviewing some terrifying patch series. Having talked with the interested parties, it appears that James is unlikely to have time for KVM in the near future, and that Joey is willing to take more responsibilities. So let's appoint Joey as an official reviewer, and give James some breathing space, as well as my personal thanks. I'm sure he will be back one way or another! Cc: Suzuki K Poulose <suzuki.poulose@arm.com> Cc: Zenghui Yu <yuzenghui@huawei.com> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Oliver Upton <oliver.upton@linux.dev> Acked-by: Joey Gouly <joey.gouly@arm.com> Acked-by: Zenghui Yu <yuzenghui@huawei.com> Link: https://lore.kernel.org/r/20240927104956.1223658-1-maz@kernel.org Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Mark Brown authored
When pKVM saves and restores the host floating point state on a SVE system, it programs the vector length in ZCR_EL2.LEN to be whatever the maximum VL for the PE is. But it uses a buffer allocated with kvm_host_sve_max_vl, the maximum VL shared by all PEs in the system. This means that if we run on a system where the maximum VLs are not consistent, we will overflow the buffer on PEs which support larger VLs. Since the host will not currently attempt to make use of non-shared VLs, fix this by explicitly setting the EL2 VL to be the maximum shared VL when we save and restore. This will enforce the limit on host VL usage. Should we wish to support asymmetric VLs, this code will need to be updated along with the required changes for the host: https://lore.kernel.org/r/20240730-kvm-arm64-fix-pkvm-sve-vl-v6-0-cae8a2e0bd66@kernel.org Fixes: b5b99556 ("KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM") Signed-off-by: Mark Brown <broonie@kernel.org> Tested-by: Fuad Tabba <tabba@google.com> Reviewed-by: Fuad Tabba <tabba@google.com> Link: https://lore.kernel.org/r/20240912-kvm-arm64-limit-guest-vl-v2-1-dd2c29cb2ac9@kernel.org [maz: added punctuation to the commit message] Signed-off-by: Marc Zyngier <maz@kernel.org>
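For reference, ZCR_EL2.LEN encodes the vector length in 128-bit granules minus one; a small hedged sketch of the invariant the fix enforces (kvm_host_sve_max_vl is the shared maximum named above, in bytes; the helper name is invented):

    #include <stdint.h>

    /* The LEN value programmed while saving/restoring host state must match
     * the shared maximum VL that sized the buffer, not the per-PE maximum. */
    static uint64_t zcr_el2_len_for_shared_max(unsigned int kvm_host_sve_max_vl)
    {
            return (kvm_host_sve_max_vl / 16) - 1;  /* bytes -> 128-bit quads, minus one */
    }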
-
Vincent Donnefort authored
On an error, hyp_vcpu will be accessed while this memory has already been relinquished to the host and unmapped from the hypervisor. Protect the CPTR assignment with an early return. Fixes: b5b99556 ("KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM") Reviewed-by: Oliver Upton <oliver.upton@linux.dev> Signed-off-by: Vincent Donnefort <vdonnefort@google.com> Link: https://lore.kernel.org/r/20240919110500.2345927-1-vdonnefort@google.com Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 29 Sep, 2024 12 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
The cpu_emergency_register_virt_callback() function is used unconditionally by the x86 kvm code, but it is declared (and defined) conditionally: #if IS_ENABLED(CONFIG_KVM_INTEL) || IS_ENABLED(CONFIG_KVM_AMD) void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback); ... leading to a build error when neither KVM_INTEL nor KVM_AMD support is enabled: arch/x86/kvm/x86.c: In function ‘kvm_arch_enable_virtualization’: arch/x86/kvm/x86.c:12517:9: error: implicit declaration of function ‘cpu_emergency_register_virt_callback’ [-Wimplicit-function-declaration] 12517 | cpu_emergency_register_virt_callback(kvm_x86_ops.emergency_disable_virtualization_cpu); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ arch/x86/kvm/x86.c: In function ‘kvm_arch_disable_virtualization’: arch/x86/kvm/x86.c:12522:9: error: implicit declaration of function ‘cpu_emergency_unregister_virt_callback’ [-Wimplicit-function-declaration] 12522 | cpu_emergency_unregister_virt_callback(kvm_x86_ops.emergency_disable_virtualization_cpu); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Fix the build by defining empty helper functions the same way the old cpu_emergency_disable_virtualization() function was dealt with for the same situation. Maybe we could instead have made the call sites conditional, since the callers (kvm_arch_{en,dis}able_virtualization()) have an empty weak fallback. I'll leave that to the kvm people to argue about, this at least gets the build going for that particular config. Fixes: 590b09b1 ("KVM: x86: Register "emergency disable" callbacks when virt is enabled") Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Sean Christopherson <seanjc@google.com> Cc: Kai Huang <kai.huang@intel.com> Cc: Chao Gao <chao.gao@intel.com> Cc: Farrah Chen <farrah.chen@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
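The stub pattern described above looks roughly like this in the header (a sketch of the approach, not necessarily the exact hunk):

    #if IS_ENABLED(CONFIG_KVM_INTEL) || IS_ENABLED(CONFIG_KVM_AMD)
    void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback);
    void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callback);
    #else
    /* Empty stubs so callers in kvm.ko still compile and link when neither
     * vendor module is enabled. */
    static inline void cpu_emergency_register_virt_callback(cpu_emergency_virt_cb *callback) { }
    static inline void cpu_emergency_unregister_virt_callback(cpu_emergency_virt_cb *callback) { }
    #endif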
-
git://git.kernel.org/pub/scm/linux/kernel/git/jassibrar/mailbox Linus Torvalds authored
Pull mailbox updates from Jassi Brar: - fix kconfig dependencies (mhu-v3, omap2+) - use device name instead of generic imx_mu_chan as interrupt name (imx) - enable sa8255p and qcs8300 ipc controllers (qcom) - Fix timeout during suspend mode (bcm2835) - convert to use of_property_match_string (mailbox) - enable mt8188 (mediatek) - use devm_clk_get_enabled helpers (spreadtrum) - fix device-id typo (rockchip) * tag 'mailbox-v6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jassibrar/mailbox: mailbox, remoteproc: omap2+: fix compile testing dt-bindings: mailbox: qcom-ipcc: Document QCS8300 IPCC dt-bindings: mailbox: qcom-ipcc: document the support for SA8255p dt-bindings: mailbox: mtk,adsp-mbox: Add compatible for MT8188 mailbox: Use of_property_match_string() instead of open-coding mailbox: bcm2835: Fix timeout during suspend mode mailbox: sprd: Use devm_clk_get_enabled() helpers mailbox: rockchip: fix a typo in module autoloading mailbox: imx: use device name in interrupt name mailbox: ARM_MHU_V3 should depend on ARM64
-
Linus Torvalds authored
Merge tag 'i2c-for-6.12-rc1-additional_fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux Pull i2c fixes from Wolfram Sang: - fix DesignWare driver ENABLE-ABORT sequence, ensuring ABORT can always be sent when needed - check for PCLK in the SynQuacer controller as an optional clock, allowing ACPI to directly provide the clock rate - KEBA driver Kconfig dependency fix - fix XIIC driver power suspend sequence * tag 'i2c-for-6.12-rc1-additional_fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux: i2c: xiic: Fix pm_runtime_set_suspended() with runtime pm enabled i2c: keba: I2C_KEBA should depend on KEBA_CP500 i2c: synquacer: Deal with optional PCLK correctly i2c: designware: fix controller is holding SCL low while ENABLE bit is disabled
-
git://git.infradead.org/users/hch/dma-mapping Linus Torvalds authored
Pull dma-mapping fix from Christoph Hellwig: - handle chained SGLs in the new tracing code (Christoph Hellwig) * tag 'dma-mapping-6.12-2024-09-29' of git://git.infradead.org/users/hch/dma-mapping: dma-mapping: fix DMA API tracing for chained scatterlists
-
git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi Linus Torvalds authored
Pull more SCSI updates from James Bottomley: "These are mostly minor updates. There are two drivers (lpfc and mpi3mr) which missed the initial pull and a core change to retry a start/stop unit which affect suspend/resume" * tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (32 commits) scsi: lpfc: Update lpfc version to 14.4.0.5 scsi: lpfc: Support loopback tests with VMID enabled scsi: lpfc: Revise TRACE_EVENT log flag severities from KERN_ERR to KERN_WARNING scsi: lpfc: Ensure DA_ID handling completion before deleting an NPIV instance scsi: lpfc: Fix kref imbalance on fabric ndlps from dev_loss_tmo handler scsi: lpfc: Restrict support for 32 byte CDBs to specific HBAs scsi: lpfc: Update phba link state conditional before sending CMF_SYNC_WQE scsi: lpfc: Add ELS_RSP cmd to the list of WQEs to flush in lpfc_els_flush_cmd() scsi: mpi3mr: Update driver version to 8.12.0.0.50 scsi: mpi3mr: Improve wait logic while controller transitions to READY state scsi: mpi3mr: Update MPI Headers to revision 34 scsi: mpi3mr: Use firmware-provided timestamp update interval scsi: mpi3mr: Enhance the Enable Controller retry logic scsi: sd: Fix off-by-one error in sd_read_block_characteristics() scsi: pm8001: Do not overwrite PCI queue mapping scsi: scsi_debug: Remove a useless memset() scsi: pmcraid: Convert comma to semicolon scsi: sd: Retry START STOP UNIT commands scsi: mpi3mr: A performance fix scsi: ufs: qcom: Update MODE_MAX cfg_bw value ...
-
git://evilpiepirate.org/bcachefs Linus Torvalds authored
Pull more bcachefs updates from Kent Overstreet: "Assorted minor syzbot fixes, and for bigger stuff: Fix two disk accounting rewrite bugs: - Disk accounting keys use the version field of bkey so that journal replay can tell which updates have been applied to the btree. This is set in the transaction commit path, after we've gotten our journal reservation (and our time ordering), but the BCH_TRANS_COMMIT_skip_accounting_apply flag that journal replay uses was incorrectly skipping this for new updates generated prior to journal replay. This fixes the underlying cause of an assertion pop in disk_accounting_read. - A couple of fixes for disk accounting + device removal. Checking if accounting replicas entries were marked in the superblock was being done at the wrong point, when deltas in the journal could still zero them out, and then additionally we'd try to add a missing replicas entry to the superblock without checking if it referred to an invalid (removed) device. A whole slew of repair fixes: - fix infinite loop in propagate_key_to_snapshot_leaves(), this fixes an infinite loop when repairing a filesystem with many snapshots - fix incorrect transaction restart handling leading to occasional "fsck counted ..." warnings - fix warning in __bch2_fsck_err() for bkey fsck errors - check_inode() in fsck now correctly checks if the filesystem was clean - there shouldn't be pending logged ops if the fs was clean, we now check for this - remove_backpointer() doesn't remove a dirent that doesn't actually point to the inode - many more fsck errors are AUTOFIX" * tag 'bcachefs-2024-09-28' of git://evilpiepirate.org/bcachefs: (35 commits) bcachefs: check_subvol_path() now prints subvol root inode bcachefs: remove_backpointer() now checks if dirent points to inode bcachefs: dirent_points_to_inode() now warns on mismatch bcachefs: Fix lost wake up bcachefs: Check for logged ops when clean bcachefs: BCH_FS_clean_recovery bcachefs: Convert disk accounting BUG_ON() to WARN_ON() bcachefs: Fix BCH_TRANS_COMMIT_skip_accounting_apply bcachefs: Check for accounting keys with bversion=0 bcachefs: rename version -> bversion bcachefs: Don't delete unlinked inodes before logged op resume bcachefs: Fix BCH_SB_ERRS() so we can reorder bcachefs: Fix fsck warnings from bkey validation bcachefs: Move transaction commit path validation to as late as possible bcachefs: Fix disk accounting attempting to mark invalid replicas entry bcachefs: Fix unlocked access to c->disk_sb.sb in bch2_replicas_entry_validate() bcachefs: Fix accounting read + device removal bcachefs: bch_accounting_mode bcachefs: fix transaction restart handling in check_extents(), check_dirents() bcachefs: kill inode_walker_entry.seen_this_pos ...
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Linus Torvalds authored
Pull x86 fixes from Ingo Molnar: "Fix TDX MMIO #VE fault handling, and add two new Intel model numbers for 'Pantherlake' and 'Diamond Rapids'" * tag 'x86-urgent-2024-09-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/cpu: Add two Intel CPU model numbers x86/tdx: Fix "in-kernel MMIO" check
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Linus Torvalds authored
Pull locking updates from Ingo Molnar: "lockdep: - Fix potential deadlock between lockdep and RCU (Zhiguo Niu) - Use str_plural() to address Coccinelle warning (Thorsten Blum) - Add debuggability enhancement (Luis Claudio R. Goncalves) static keys & calls: - Fix static_key_slow_dec() yet again (Peter Zijlstra) - Handle module init failure correctly in static_call_del_module() (Thomas Gleixner) - Replace pointless WARN_ON() in static_call_module_notify() (Thomas Gleixner) <linux/cleanup.h>: - Add usage and style documentation (Dan Williams) rwsems: - Move is_rwsem_reader_owned() and rwsem_owner() under CONFIG_DEBUG_RWSEMS (Waiman Long) atomic ops, x86: - Redeclare x86_32 arch_atomic64_{add,sub}() as void (Uros Bizjak) - Introduce the read64_nonatomic macro to x86_32 with cx8 (Uros Bizjak)" Signed-off-by: Ingo Molnar <mingo@kernel.org> * tag 'locking-urgent-2024-09-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: locking/rwsem: Move is_rwsem_reader_owned() and rwsem_owner() under CONFIG_DEBUG_RWSEMS jump_label: Fix static_key_slow_dec() yet again static_call: Replace pointless WARN_ON() in static_call_module_notify() static_call: Handle module init failure correctly in static_call_del_module() locking/lockdep: Simplify character output in seq_line() lockdep: fix deadlock issue between lockdep and rcu lockdep: Use str_plural() to fix Coccinelle warning cleanup: Add usage and style documentation lockdep: suggest the fix for "lockdep bfs error:-1" on print_bfs_bug locking/atomic/x86: Redeclare x86_32 arch_atomic64_{add,sub}() as void locking/atomic/x86: Introduce the read64_nonatomic macro to x86_32 with cx8
-
git://git.kernel.org/pub/scm/linux/kernel/git/jlawall/linux Linus Torvalds authored
Pull coccinelle updates from Julia Lawall: "Extend string_choices.cocci to use more available helpers: Ten patches from Hongbo Li extending string_choices.cocci with the complete set of functions offered by include/linux/string_choices.h. One patch from myself reducing the number of redundant cases that are checked by Coccinelle, giving a small performance improvement" * tag 'cocci-for-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jlawall/linux: Reduce Coccinelle choices in string_choices.cocci coccinelle: Remove unnecessary parentheses for only one possible change. coccinelle: Add rules to find str_yes_no() replacements coccinelle: Add rules to find str_on_off() replacements coccinelle: Add rules to find str_write_read() replacements coccinelle: Add rules to find str_read_write() replacements coccinelle: Add rules to find str_enable{d}_disable{d}() replacements coccinelle: Add rules to find str_lo{w}_hi{gh}() replacements coccinelle: Add rules to find str_hi{gh}_lo{w}() replacements coccinelle: Add rules to find str_false_true() replacements coccinelle: Add rules to find str_true_false() replacements
-
Linus Torvalds authored
Merge tag 'linux_kselftest-next-6.12-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest Pull kselftest fix from Shuah Khan: "One urgent fix to vDSO as automated testing is failing due to this bug" * tag 'linux_kselftest-next-6.12-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest: selftests: vDSO: align stack for O2-optimized memcpy
-
Ingo Molnar authored
Merge all pending locking commits into a single branch. Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 28 Sep, 2024 11 commits
-
-
Julia Lawall authored
The isomorphism neg_if_exp negates the test of a ?: conditional, making it unnecessary to have an explicit case for a negated test with the branches inverted. At the same time, we can disable neg_if_exp in cases where a different API function may be more suitable for a negated test. Finally, in the non-patch cases, E matches an expression with parentheses around it, so there is no need to mention () explicitly in the pattern. The () are still needed in the patch cases, because we want to drop them, if they are present. Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
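In plain C terms, the isomorphism treats the following two spellings as the same expression, which is why a separate pattern for the negated form is unnecessary (the demo function below is invented for illustration):

    static void neg_if_exp_demo(int enabled)
    {
            const char *a = !enabled ? "off" : "on";
            const char *b =  enabled ? "on"  : "off";  /* equivalent spelling */

            /* neg_if_exp lets a single Coccinelle pattern match both forms. */
            (void)a; (void)b;
    }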
-
Hongbo Li authored
The parentheses are only needed if there is a disjunction, i.e. a set of possible changes. If there is only one pattern, we can remove these parentheses. Just like the format:

    - x
    + y

not:

    (
    - x
    + y
    )

Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
-
Hongbo Li authored
As was done for the other rules, add rules for str_yes_no() to check for the corresponding replacement opportunities. Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
-
Hongbo Li authored
As was done for the other rules, add rules for str_on_off() to check for the corresponding replacement opportunities. Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
-
Hongbo Li authored
As was done for the other rules, add rules for str_write_read() to check for the corresponding replacement opportunities. Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
-
Hongbo Li authored
As was done for the other rules, add rules for str_read_write() to check for the corresponding replacement opportunities. Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
-
Hongbo Li authored
As was done for the other rules, add rules for str_enable{d}_disable{d}() to check for the corresponding replacement opportunities. Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
-
Hongbo Li authored
As was done for the other rules, add rules for str_lo{w}_hi{gh}() to check for the corresponding replacement opportunities. Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
-
Hongbo Li authored
As was done for the other rules, add rules for str_hi{gh}_lo{w}() to check for the corresponding replacement opportunities. Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
-
Hongbo Li authored
As done with str_true_false(), add checks for str_false_true() opportunities. A simple test finds over 9 cases that currently exist in the tree. Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
-
Hongbo Li authored
Now that str_true_false() has been introduced in the tree, add rules for finding places where it can be used. A simple test finds over 10 such locations. Signed-off-by: Hongbo Li <lihongbo22@huawei.com> Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
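A hypothetical before/after showing the kind of site such a rule flags (the call site is invented; str_true_false() itself comes from include/linux/string_choices.h):

    #include <linux/printk.h>
    #include <linux/string_choices.h>
    #include <linux/types.h>

    static void report_flag(bool flag)
    {
            /* Before: pr_info("flag: %s\n", flag ? "true" : "false"); */
            pr_info("flag: %s\n", str_true_false(flag));
    }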
-