- 13 May, 2022 8 commits
-
-
Lu Baolu authored
As domain->force_snooping only impacts the devices attached to the domain, there's no need to check against all IOMMU units. On the other hand, force_snooping could be set on a domain no matter whether it has been attached or not, and once set it is an immutable flag. If no device is attached, the operation always succeeds; such an empty domain can then only be attached to a device whose IOMMU supports snoop control. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20220508123525.1973626-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20220510023407.2759143-7-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
In the attach_dev callback of the default domain ops, if the domain has force_snooping set but the IOMMU hardware of the device does not support the SC (Snoop Control) capability, the callback should block the attach and return a corresponding error code. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20220508123525.1973626-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20220510023407.2759143-6-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
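A minimal sketch of the kind of check described (not the actual patch); the device-to-IOMMU lookup helper is a placeholder, while to_dmar_domain() and the ecap_sc_support() capability macro follow the VT-d driver's naming:

    /* Illustrative only: block the attach when the domain enforces
     * snooping but the device's IOMMU lacks Snoop Control (SC). */
    static int example_attach_dev(struct iommu_domain *domain, struct device *dev)
    {
            struct dmar_domain *dmar_domain = to_dmar_domain(domain);
            struct intel_iommu *iommu = lookup_iommu_for(dev);   /* placeholder */

            if (dmar_domain->force_snooping && !ecap_sc_support(iommu->ecap))
                    return -EINVAL;

            /* ... continue with the normal attach path ... */
            return 0;
    }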
-
Lu Baolu authored
PRQ overflow may cause I/O throughput congestion, resulting in unnecessary degradation of I/O performance. Appropriately increasing the length of the PRQ can greatly reduce the occurrence of PRQ overflow. The maximum number of page requests that can be generated in parallel by a PCIe device is statically defined in the Outstanding Page Request Capacity field of the PCIe ATS configuration space. The new PRQ length is calculated by summing the value of the Outstanding Page Request Capacity register across all devices where Page Requests are supported on the real PR-capable platform (Intel Sapphire Rapids). The result is rounded up to the nearest higher power of 2. The PRQ length is also doubled, as the VT-d IOMMU driver only updates the Page Request Queue Head Register (PQH_REG) after processing the entire queue. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20220421113558.3504874-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20220510023407.2759143-5-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
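A rough sketch of the sizing arithmetic described above (an assumed helper, not the actual patch): sum each PRI-capable device's Outstanding Page Request Capacity, round up to a power of two, then double it:

    /* Illustrative: estimate a PRQ length from the devices' PRI capacity. */
    static u32 estimate_prq_size(struct pci_dev **devs, int count)
    {
            u32 total = 0, max_requests;
            int i, pos;

            for (i = 0; i < count; i++) {
                    pos = pci_find_ext_capability(devs[i], PCI_EXT_CAP_ID_PRI);
                    if (!pos)
                            continue;
                    /* Outstanding Page Request Capacity register */
                    pci_read_config_dword(devs[i], pos + PCI_PRI_MAX_REQ,
                                          &max_requests);
                    total += max_requests;
            }

            /* Double the rounded size: PQH_REG is only updated after the
             * whole queue has been processed. */
            return roundup_pow_of_two(total) * 2;
    }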
-
Lu Baolu authored
Fold dmar_insert_one_dev_info() into domain_add_dev_info() which is its only caller. No intentional functional impact. Suggested-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220416120423.879552-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20220510023407.2759143-4-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
dmar_insert_one_dev_info() returns the passed-in domain on success and NULL on failure. This doesn't make much sense. Change it to return an integer instead. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220416120423.879552-1-baolu.lu@linux.intel.com Link: https://lore.kernel.org/r/20220510023407.2759143-3-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Muhammad Usama Anjum authored
dev_iommu_priv_get(), which dereferences dev, is used at the top of this function, so dev cannot be NULL after that point. Remove the validity check on dev and simplify the code. Signed-off-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Link: https://lore.kernel.org/r/20220313150337.593650-1-usama.anjum@collabora.com Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20220510023407.2759143-2-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Robin Murphy authored
Originally, creating the dma_ranges resource list in pre-sorted fashion was the simplest and most efficient way to enforce the order required by iova_reserve_pci_windows(). However since then at least one PCI host driver is now re-sorting the list for its own probe-time processing, which doesn't seem entirely unreasonable, so that basic assumption no longer holds. Make iommu-dma robust and get the sort order it needs by explicitly sorting, which means we can also save the effort at creation time and just build the list in whatever natural order the DT had. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/35661036a7e4160850895f9b37f35408b6a29f2f.1652091160.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
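A sketch of the explicit sort, assuming the windows sit on a resource_list and are ordered by start address; the comparison function name is illustrative:

    /* Illustrative: order bridge->dma_ranges by window start address. */
    static int dma_ranges_cmp(void *priv, const struct list_head *a,
                              const struct list_head *b)
    {
            struct resource_entry *ra = container_of(a, struct resource_entry, node);
            struct resource_entry *rb = container_of(b, struct resource_entry, node);

            return ra->res->start > rb->res->start;
    }

    /* ... before reserving the PCI windows ... */
    list_sort(NULL, &bridge->dma_ranges, dma_ranges_cmp);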
-
Yunfei Wang authored
The return type of iommu_map_sg_atomic() is ssize_t, while the type of the IOVA size is size_t, i.e. one is signed and the other is unsigned. When the return value of iommu_map_sg_atomic() is compared with the IOVA size, the signed value is converted to unsigned; if the mapping fails and iommu_map_sg_atomic() returns an error code less than 0, then (ret < iova_len) is false, so the IOVA is not freed and the caller can still successfully obtain an IOVA whose mapping failed, which is not expected. Therefore, check the return value of iommu_map_sg_atomic() in two steps, handling the case where it is less than 0 first. Fixes: ad8f36e4 ("iommu: return full error code from iommu_map_sg[_atomic]()") Signed-off-by: Yunfei Wang <yf.wang@mediatek.com> Cc: <stable@vger.kernel.org> # 5.15.* Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Miles Chen <miles.chen@mediatek.com> Link: https://lore.kernel.org/r/20220507085204.16914-1-yf.wang@mediatek.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
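A minimal sketch of the hazard and the corrected check; variable names are illustrative rather than the exact patch:

    ssize_t ret;    /* signed: may hold a negative error code */

    ret = iommu_map_sg_atomic(domain, iova, sg, nents, prot);
    /*
     * Comparing a negative ret directly against the unsigned iova_len
     * would convert it to a huge unsigned value and skip the cleanup,
     * so handle the error case before the length comparison.
     */
    if (ret < 0 || (size_t)ret < iova_len)
            goto out_free_iova;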
-
- 28 Apr, 2022 11 commits
-
-
Jason Gunthorpe authored
IOMMU_CACHE means that normal DMAs do not require any additional coherency mechanism and is the basic uAPI that VFIO exposes to userspace. For instance VFIO applications like DPDK will not work if additional coherency operations are required. Therefore check IOMMU_CAP_CACHE_COHERENCY like vdpa & usnic do before allowing an IOMMU backed VFIO device to be created. Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Alex Williamson <alex.williamson@redhat.com> Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/4-v3-2cf356649677+a32-intel_no_snoop_jgg@nvidia.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
While the comment was correct that this flag was intended to convey the block no-snoop support in the IOMMU, it has become widely implemented and used to mean the IOMMU supports IOMMU_CACHE as a map flag. Only the Intel driver was different. Now that the Intel driver is using enforce_cache_coherency() update the comment to make it clear that IOMMU_CAP_CACHE_COHERENCY is only about IOMMU_CACHE. Fix the Intel driver to return true since IOMMU_CACHE always works. The two places that test this flag, usnic and vdpa, are both assigning userspace pages to a driver controlled iommu_domain and require IOMMU_CACHE behavior as they offer no way for userspace to synchronize caches. Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/3-v3-2cf356649677+a32-intel_no_snoop_jgg@nvidia.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
IOMMU_CACHE means "normal DMA to this iommu_domain's IOVA should be cache coherent" and is used by the DMA API. The definition allows for special non-coherent DMA to exist - i.e. processing of the no-snoop flag in PCIe TLPs - so long as this behavior is opt-in by the device driver. The flag is mainly used by the DMA API to synchronize the IOMMU setting with the expected cache behavior of the DMA master, e.g. based on dev_is_dma_coherent() in some cases. For the Intel IOMMU, IOMMU_CACHE was redefined to mean 'force all DMA to be cache coherent', which has the practical effect of causing the IOMMU to ignore the no-snoop bit in a PCIe TLP. x86 platforms are always IOMMU_CACHE, so Intel should ignore this flag. Instead use the new domain op enforce_cache_coherency() which causes every IOPTE created in the domain to have the no-snoop blocking behavior. Reconfigure VFIO to always use IOMMU_CACHE and call enforce_cache_coherency() to operate the special Intel behavior. Remove the IOMMU_CACHE test from the Intel IOMMU driver. Ultimately VFIO plumbs the result of enforce_cache_coherency() back into the x86 platform code through kvm_arch_register_noncoherent_dma(), which controls whether the WBINVD instruction is available in the guest. No other archs implement kvm_arch_register_noncoherent_dma() nor are there any other known consumers of VFIO_DMA_CC_IOMMU that might be affected by the user-visible result change on non-x86 archs. Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Acked-by: Alex Williamson <alex.williamson@redhat.com> Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/2-v3-2cf356649677+a32-intel_no_snoop_jgg@nvidia.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jason Gunthorpe authored
This new mechanism will replace using IOMMU_CAP_CACHE_COHERENCY and IOMMU_CACHE to control the no-snoop blocking behavior of the IOMMU. Currently only the Intel and AMD IOMMUs are known to support this feature. They both implement it as an IOPTE bit that, when set, causes PCIe TLPs to that IOVA with the no-snoop bit set to be treated as though the no-snoop bit was clear. The new API is triggered by calling enforce_cache_coherency() before mapping any IOVA to the domain, which globally switches on no-snoop blocking. This allows other implementations that might block no-snoop globally and outside the IOPTE - AMD also documents such a HW capability. Leave AMD out of sync with Intel and have it block no-snoop even for in-kernel users. This can be trivially resolved in a follow-up patch. Only VFIO needs to call this API because it does not have detailed control over the device to avoid requesting no-snoop behavior at the device level. Other places using domains with real kernel drivers should simply avoid asking their devices to set the no-snoop bit. Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Acked-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/1-v3-2cf356649677+a32-intel_no_snoop_jgg@nvidia.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
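A hedged sketch of how a VFIO-style consumer might exercise the new op before populating a domain; the exact call path and error handling in the series may differ, and the log message is purely illustrative:

    /* Illustrative only: ask the IOMMU to block no-snoop before any
     * IOVA mappings are installed in the domain. */
    bool enforce_cc = false;

    if (domain->ops->enforce_cache_coherency)
            enforce_cc = domain->ops->enforce_cache_coherency(domain);

    if (!enforce_cc)
            pr_info("IOMMU cannot enforce cache coherency for this domain\n");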
-
Rob Herring authored
There's no need to show the consumer side in provider examples. The ones used here are either undocumented or not documented in schemas, which results in warnings. Signed-off-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20220422192104.2591462-1-robh@kernel.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Mario Limonciello authored
Bit 1 of the IVRS IVinfo field indicates that the IOMMU has been used for pre-boot DMA protection. Export this capability to allow other places in the kernel to be able to check for it on AMD systems. Link: https://www.amd.com/system/files/TechDocs/48882_IOMMU.pdf Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/ce7627fa1c596878ca6515dd9d4381a45b6ee38c.1650878781.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Robin Murphy authored
Between me trying to get rid of iommu_present() and Mario wanting to support the AMD equivalent of DMAR_PLATFORM_OPT_IN, scrutiny has shown that the iommu_dma_protection attribute is being far too optimistic. Even if an IOMMU might be present for some PCI segment in the system, that doesn't necessarily mean it provides translation for the device(s) we care about. Furthermore, all that DMAR_PLATFORM_OPT_IN really does is tell us that memory was protected before the kernel was loaded, and prevent the user from disabling the intel-iommu driver entirely. While that lets us assume kernel integrity, what matters for actual runtime DMA protection is whether we trust individual devices, based on the "external facing" property that we expect firmware to describe for Thunderbolt ports. It's proven challenging to determine the appropriate ports accurately given the variety of possible topologies, so while still not getting a perfect answer, by putting enough faith in firmware we can at least get a good bit closer. If we can see that any device near a Thunderbolt NHI has all the requisites for Kernel DMA Protection, chances are that it *is* a relevant port, but moreover that implies that firmware is playing the game overall, so we'll use that to assume that all Thunderbolt ports should be correctly marked and thus will end up fully protected. CC: Mario Limonciello <mario.limonciello@amd.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/b153f208bc9eafab5105bad0358b77366509d2d4.1650878781.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Robin Murphy authored
VT-d's dmar_platform_optin() actually represents a combination of properties fairly well standardised by Microsoft as "Pre-boot DMA Protection" and "Kernel DMA Protection"[1]. As such, we can provide interested consumers with an abstracted capability rather than driver-specific interfaces that won't scale. We name it for the former aspect since that's what external callers are most likely to be interested in; the latter is for the IOMMU layer to handle itself. [1] https://docs.microsoft.com/en-us/windows-hardware/design/device-experiences/oem-kernel-dma-protection Suggested-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/d6218dff2702472da80db6aec2c9589010684551.1650878781.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Robin Murphy authored
iommu_capable() only really works for systems where all IOMMU instances are completely homogeneous, and all devices are IOMMU-mapped. Implement the new variant which will be able to give a more accurate answer for whichever device the caller is actually interested in, and even more so once all the external users have been converted and we can reliably pass the device pointer through the internal driver interface too. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Link: https://lore.kernel.org/r/8407eb9586677995b7a9fd70d0fd82d85929a9bb.1650878781.git.robin.murphy@arm.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
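A small usage sketch of the per-device query; pdev is assumed to be a struct pci_dev * and the helper called on success is a placeholder for the caller's own logic:

    /* Prefer the device-aware query over the global iommu_capable(). */
    if (device_iommu_capable(&pdev->dev, IOMMU_CAP_CACHE_COHERENCY))
            enable_iommu_backed_path(pdev);   /* placeholder */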
-
Mario Limonciello authored
If the IOMMU is in use, an untrusted device is connected to an external-facing port, and the requested address isn't page aligned, the kernel will attempt to use bounce buffers. If for some reason the bounce buffers have not been allocated, this is a problem that should be made apparent to the user. Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220404204723.9767-3-mario.limonciello@amd.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Mario Limonciello authored
Previously the AMD IOMMU would only enable SWIOTLB in certain circumstances: * IOMMU in passthrough mode * SME enabled. This logic, however, doesn't work when an untrusted device that doesn't do page-aligned DMA transactions is plugged in. The expectation is that a bounce buffer is used for those transactions. This fails like this: swiotlb buffer is full (sz: 4096 bytes), total 0 (slots), used 0 (slots) That happens because the bounce buffers were allocated and then freed during startup, while the bounce-buffering code expects that all IOMMUs have left them enabled. Remove the criteria so that bounce buffers are always set up on AMD systems, ensuring they're available for supporting untrusted devices. Fixes: 82612d66 ("iommu: Allow the dma-iommu api to use bounce buffers") Suggested-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20220404204723.9767-2-mario.limonciello@amd.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 24 Apr, 2022 8 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull scheduler fix from Borislav Petkov: - Fix a corner case when calculating sched runqueue variables That fix also removes a check for a zero divisor in the code, without mentioning it. Vincent clarified that it's ok after I whined about it: https://lore.kernel.org/all/CAKfTPtD2QEyZ6ADd5WrwETMOX0XOwJGnVddt7VHgfURdqgOS-Q@mail.gmail.com/ * tag 'sched_urgent_for_v5.18_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: sched/pelt: Fix attach_entity_load_avg() corner case
-
git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Linus Torvalds authored
Pull powerpc fixes from Michael Ellerman: - Partly revert a change to our timer_interrupt() that caused lockups with high res timers disabled. - Fix a bug in KVM TCE handling that could corrupt kernel memory. - Two commits fixing Power9/Power10 perf alternative event selection. Thanks to Alexey Kardashevskiy, Athira Rajeev, David Gibson, Frederic Barrat, Madhavan Srinivasan, Miguel Ojeda, and Nicholas Piggin. * tag 'powerpc-5.18-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: powerpc/perf: Fix 32bit compile powerpc/perf: Fix power10 event alternatives powerpc/perf: Fix power9 event alternatives KVM: PPC: Fix TCE handling for VFIO powerpc/time: Always set decrementer in timer_interrupt()
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull perf fixes from Borislav Petkov: - Add Sapphire Rapids CPU support - Fix a perf vmalloc-ed buffer mapping error (PERF_USE_VMALLOC in use) * tag 'perf_urgent_for_v5.18_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf/x86/cstate: Add SAPPHIRERAPIDS_X CPU support perf/core: Fix perf_mmap fail when CONFIG_PERF_USE_VMALLOC enabled
-
git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras
Linus Torvalds authored
Pull EDAC fix from Borislav Petkov: - Read the reported error count from the proper register on synopsys_edac * tag 'edac_urgent_for_v5.18_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras: EDAC/synopsys: Read the error count from the correct register
-
Linus Torvalds authored
Since commit 559089e0 ("vmalloc: replace VM_NO_HUGE_VMAP with VM_ALLOW_HUGE_VMAP"), the use of hugepage mappings for vmalloc is an opt-in strategy, because it caused a number of problems that weren't noticed until x86 enabled it too. One of the issues was fixed by Nick Piggin in commit 3b8000ae ("mm/vmalloc: huge vmalloc backing pages should be split rather than compound"), but I'm still worried about page protection issues, and VM_FLUSH_RESET_PERMS in particular. However, like the hash table allocation case (commit f2edd118: "page_alloc: use vmalloc_huge for large system hash"), the use of kvmalloc() should be safe from any such games, since the returned pointer might be a SLUB allocation, and as such no user should reasonably be using it in any odd ways. We also know that the allocations are fairly large, since it falls back to the vmalloc case only when a kmalloc() fails. So using a hugepage mapping seems both safe and relevant. This patch does show a weakness in the opt-in strategy: since the opt-in flag is in the 'vm_flags', not the usual gfp_t allocation flags, very few of the usual interfaces actually expose it. That's not much of an issue in this case that already used one of the fairly specialized low-level vmalloc interfaces for the allocation, but for a lot of other vmalloc() users that might want to opt in, it's going to be very inconvenient. We'll either have to fix any compatibility problems, or expose it in the gfp flags (__GFP_COMP would have made a lot of sense) to allow normal vmalloc() users to use hugepage mappings. That said, the cases that really matter were probably already taken care of by the hash table allocation. Link: https://lore.kernel.org/all/20220415164413.2727220-1-song@kernel.org/ Link: https://lore.kernel.org/all/CAHk-=whao=iosX1s5Z4SF-ZGa-ebAukJoAdUJFk5SPwnofV+Vg@mail.gmail.com/ Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Paul Menzel <pmenzel@molgen.mpg.de> Cc: Song Liu <songliubraving@fb.com> Cc: Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Song Liu authored
Use vmalloc_huge() in alloc_large_system_hash() so that large system hash (>= PMD_SIZE) could benefit from huge pages. Note that vmalloc_huge only allocates huge pages for systems with HAVE_ARCH_HUGE_VMALLOC. Signed-off-by: Song Liu <song@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Rik van Riel <riel@surriel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
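A brief sketch of the shape of the change (not the exact hunk); size and gfp_flags are whatever the hash-table allocator already computed:

    /* Large system hashes can now be backed by huge pages on
     * architectures with HAVE_ARCH_HUGE_VMALLOC; elsewhere this
     * behaves like a regular vmalloc(). */
    table = vmalloc_huge(size, gfp_flags);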
-
git://git.samba.org/ksmbd
Linus Torvalds authored
Pull ksmbd server fixes from Steve French: - cap maximum sector size reported to avoid mount problems - reference count fix - fix filename rename race * tag '5.18-rc3-ksmbd-fixes' of git://git.samba.org/ksmbd: ksmbd: set fixed sector size to FS_SECTOR_SIZE_INFORMATION ksmbd: increment reference count of parent fp ksmbd: remove filename in ksmbd_file
-
- 23 Apr, 2022 13 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc
Linus Torvalds authored
Pull ARC fixes from Vineet Gupta: - Assorted fixes * tag 'arc-5.18-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc: ARC: remove redundant READ_ONCE() in cmpxchg loop ARC: atomic: cleanup atomic-llsc definitions arc: drop definitions of pgd_index() and pgd_offset{, _k}() entirely ARC: dts: align SPI NOR node name with dtschema ARC: Remove a redundant memset() ARC: fix typos in comments ARC: entry: fix syscall_trace_exit argument
-
git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds authored
Pull SCSI fix from James Bottomley: "One fix for an information leak caused by copying a buffer to userspace without checking for error first in the sr driver" * tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: scsi: sr: Do not leak information in ioctl
-
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Linus Torvalds authored
Pull xen fixes from Juergen Gross: "A simple cleanup patch and a refcount fix for Xen on Arm" * tag 'for-linus-5.18-rc4-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: arm/xen: Fix some refcount leaks xen: Convert kmap() to kmap_local_page()
-
git://anongit.freedesktop.org/drm/drm
Linus Torvalds authored
Pull more drm fixes from Dave Airlie: "Maarten was away, so Maxime stepped up and sent me the drm-fixes merge, so no point leaving it for another week. The big change is an OF revert around bridge/panels, it may have some driver fallout, but hopefully this revert gets them shook out in the next week easier. Otherwise it's a bunch of locking/refcounts across drivers, a radeon dma_resv logic fix and some raspberry pi panel fixes. panel: - revert of patch that broke panel/bridge issues dma-buf: - remove unused header file. amdgpu: - partial revert of locking change radeon: - fix dma_resv logic inversion panel: - pi touchscreen panel init fixes vc4: - build fix - runtime pm refcount fix vmwgfx: - refcounting fix" * tag 'drm-fixes-2022-04-23' of git://anongit.freedesktop.org/drm/drm: drm/amdgpu: partial revert "remove ctx->lock" v2 Revert "drm: of: Lookup if child node has panel or bridge" Revert "drm: of: Properly try all possible cases for bridge/panel detection" drm/vc4: Use pm_runtime_resume_and_get to fix pm_runtime_get_sync() usage drm/vmwgfx: Fix gem refcounting and memory evictions drm/vc4: Fix build error when CONFIG_DRM_VC4=y && CONFIG_RASPBERRYPI_FIRMWARE=m drm/panel/raspberrypi-touchscreen: Initialise the bridge in prepare drm/panel/raspberrypi-touchscreen: Avoid NULL deref if not initialised dma-buf-map: remove renamed header file drm/radeon: fix logic inversion in radeon_sync_resv
-
git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input
Linus Torvalds authored
Pull input fixes from Dmitry Torokhov: - a new set of keycodes to be used by marine navigation systems - minor fixes to omap4-keypad and cypress-sf drivers * tag 'input-for-v5.18-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input: Input: add Marine Navigation Keycodes Input: omap4-keypad - fix pm_runtime_get_sync() error checking Input: cypress-sf - register a callback to disable the regulators
-
git://git.kernel.dk/linux-block
Linus Torvalds authored
Pull block fixes from Jens Axboe: "Just two small regression fixes for bcache" * tag 'block-5.18-2022-04-22' of git://git.kernel.dk/linux-block: bcache: fix wrong bdev parameter when calling bio_alloc_clone() in do_bio_hook() bcache: put bch_bio_map() back to correct location in journal_write_unlocked()
-
git://git.kernel.dk/linux-block
Linus Torvalds authored
Pull io_uring fixes from Jens Axboe: "Just two small fixes - one fixing a potential leak for the iovec for larger requests added in this cycle, and one fixing a theoretical leak with CQE_SKIP and IOPOLL" * tag 'io_uring-5.18-2022-04-22' of git://git.kernel.dk/linux-block: io_uring: fix leaks on IOPOLL and CQE_SKIP io_uring: free iovec if file assignment fails
-
Linus Torvalds authored
Merge tag 'perf-tools-fixes-for-v5.18-2022-04-22' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux Pull perf tools fixes from Arnaldo Carvalho de Melo: - Fix header include for LLVM >= 14 when building with libclang. - Allow access to 'data_src' for auxtrace in 'perf script' with ARM SPE perf.data files, fixing processing data with such attributes. - Fix error message for test case 71 ("Convert perf time to TSC") on s390, where it is not supported. * tag 'perf-tools-fixes-for-v5.18-2022-04-22' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux: perf test: Fix error message for test case 71 on s390, where it is not supported perf report: Set PERF_SAMPLE_DATA_SRC bit for Arm SPE event perf script: Always allow field 'data_src' for auxtrace perf clang: Fix header include for LLVM >= 14
-
Randy Dunlap authored
Add a struct page forward declaration to cacheflush_32.h. Fixes this build warning: CC drivers/crypto/xilinx/zynqmp-sha.o In file included from arch/sparc/include/asm/cacheflush.h:11, from include/linux/cacheflush.h:5, from drivers/crypto/xilinx/zynqmp-sha.c:6: arch/sparc/include/asm/cacheflush_32.h:38:37: warning: 'struct page' declared inside parameter list will not be visible outside of this definition or declaration 38 | void sparc_flush_page_to_ram(struct page *page); Exposed by commit 0e03b8fd ("crypto: xilinx - Turn SHA into a tristate and allow COMPILE_TEST") but not Fixes: that commit because the underlying problem is older. Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Reported-by: kernel test robot <lkp@intel.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: David S. Miller <davem@davemloft.net> Cc: Sam Ravnborg <sam@ravnborg.org> Cc: sparclinux@vger.kernel.org Acked-by: Sam Ravnborg <sam@ravnborg.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
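The fix itself is simply a forward declaration ahead of the prototype, roughly:

    /* Forward declaration: the prototype only needs an incomplete type,
     * so struct page does not have to be fully defined here. */
    struct page;

    void sparc_flush_page_to_ram(struct page *page);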
-
git://anongit.freedesktop.org/drm/drm-misc
Dave Airlie authored
Two fixes for the raspberrypi panel initialisation, one fix for a logic inversion in radeon, a build and pm refcounting fix for vc4, two reverts for drm_of_get_bridge that caused a number of regression and a locking regression for amdgpu. Signed-off-by: Dave Airlie <airlied@redhat.com> From: Maxime Ripard <maxime@cerno.tech> Link: https://patchwork.freedesktop.org/patch/msgid/20220422084403.2xrhf3jusdej5yo4@houat
-
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Linus Torvalds authored
Pull ext4 fixes from Ted Ts'o: "Fix some syzbot-detected bugs, as well as other bugs found by I/O injection testing. Change ext4's fallocate to consistently drop set[ug]id bits when an fallocate operation might possibly change the user-visible contents of a file. Also, improve handling of potentially invalid values in the s_overhead_cluster superblock field to avoid ext4 returning a negative number of free blocks" * tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: jbd2: fix a potential race while discarding reserved buffers after an abort ext4: update the cached overhead value in the superblock ext4: force overhead calculation if the s_overhead_cluster makes no sense ext4: fix overhead calculation to account for the reserved gdt blocks ext4, doc: fix incorrect h_reserved size ext4: limit length to bitmap_maxbytes - blocksize in punch_hole ext4: fix use-after-free in ext4_search_dir ext4: fix bug_on in start_this_handle during umount filesystem ext4: fix symlink file size not match to file content ext4: fix fallocate to use file_modified to update permissions consistently
-
git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata
Linus Torvalds authored
Pull ATA fix from Damien Le Moal: "A single fix to avoid a NULL pointer dereference in the pata_marvell driver with adapters not supporting DMA, from Zheyu" * tag 'ata-5.18-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata: ata: pata_marvell: Check the 'bmdma_addr' beforing reading
-
git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds authored
Pull kvm fixes from Paolo Bonzini: "The main and larger change here is a workaround for AMD's lack of cache coherency for encrypted-memory guests. I have another patch pending, but it's waiting for review from the architecture maintainers. RISC-V: - Remove 's' & 'u' as valid ISA extension - Do not allow disabling the base extensions 'i'/'m'/'a'/'c' x86: - Fix NMI watchdog in guests on AMD - Fix for SEV cache incoherency issues - Don't re-acquire SRCU lock in complete_emulated_io() - Avoid NULL pointer deref if VM creation fails - Fix race conditions between APICv disabling and vCPU creation - Bugfixes for disabling of APICv - Preserve BSP MSR_KVM_POLL_CONTROL across suspend/resume selftests: - Do not use bitfields larger than 32-bits, they differ between GCC and clang" * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: kvm: selftests: introduce and use more page size-related constants kvm: selftests: do not use bitfields larger than 32-bits for PTEs KVM: SEV: add cache flush to solve SEV cache incoherency issues KVM: SVM: Flush when freeing encrypted pages even on SME_COHERENT CPUs KVM: SVM: Simplify and harden helper to flush SEV guest page(s) KVM: selftests: Silence compiler warning in the kvm_page_table_test KVM: x86/pmu: Update AMD PMC sample period to fix guest NMI-watchdog x86/kvm: Preserve BSP MSR_KVM_POLL_CONTROL across suspend/resume KVM: SPDX style and spelling fixes KVM: x86: Skip KVM_GUESTDBG_BLOCKIRQ APICv update if APICv is disabled KVM: x86: Pend KVM_REQ_APICV_UPDATE during vCPU creation to fix a race KVM: nVMX: Defer APICv updates while L2 is active until L1 is active KVM: x86: Tag APICv DISABLE inhibit, not ABSENT, if APICv is disabled KVM: Initialize debugfs_dentry when a VM is created to avoid NULL deref KVM: Add helpers to wrap vcpu->srcu_idx and yell if it's abused KVM: RISC-V: Use kvm_vcpu.srcu_idx, drop RISC-V's unnecessary copy KVM: x86: Don't re-acquire SRCU lock in complete_emulated_io() RISC-V: KVM: Restrict the extensions that can be disabled RISC-V: KVM: Remove 's' & 'u' as valid ISA extension
-